=== RUN TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run: kubectl --context functional-563786 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:848: etcd is not Ready: {Phase:Running Conditions:[{Type:PodReadyToStartContainers Status:True} {Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.39.101 PodIP:192.168.39.101 StartTime:2025-12-29 06:57:35 +0000 UTC ContainerStatuses:[{Name:etcd State:{Waiting:<nil> Running:0xc0020596c8 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc0020be230} Ready:false RestartCount:2 Image:registry.k8s.io/etcd:3.6.6-0 ImageID:registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890 ContainerID:containerd://302099eedbb6f0dc4e582744ba0b29ddd304d3575c870c40f52c064c2829a751}]}
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
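The four results above reflect the test's control-plane gate: each pod's phase is logged, and the pod must also carry a Ready=True condition, which is why etcd (phase Running, Ready=False, 2 restarts) fails the check while kube-apiserver, kube-controller-manager, and kube-scheduler pass. Below is a minimal client-go sketch of the same query, runnable outside the test harness; it assumes the functional-563786 context is present in the local kubeconfig.

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Build a client for the functional-563786 context from the default kubeconfig.
    cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
        clientcmd.NewDefaultClientConfigLoadingRules(),
        &clientcmd.ConfigOverrides{CurrentContext: "functional-563786"},
    ).ClientConfig()
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    // Same selector and namespace as the kubectl call at functional_test.go:825.
    pods, err := client.CoreV1().Pods("kube-system").List(context.Background(),
        metav1.ListOptions{LabelSelector: "tier=control-plane"})
    if err != nil {
        panic(err)
    }
    for _, p := range pods.Items {
        ready := false
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                ready = true
            }
        }
        var restarts int32
        for _, cs := range p.Status.ContainerStatuses {
            restarts += cs.RestartCount
        }
        fmt.Printf("%-40s phase=%-8s ready=%-5v restarts=%d\n", p.Name, p.Status.Phase, ready, restarts)
    }
}

The quicker interactive equivalent is the command logged above: kubectl --context functional-563786 get po -l tier=control-plane -n kube-system -o=json.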
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestFunctional/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p functional-563786 -n functional-563786
helpers_test.go:253: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-amd64 -p functional-563786 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-563786 logs -n 25: (1.236977089s)
helpers_test.go:261: TestFunctional/serial/ComponentHealth logs:
-- stdout --
==> Audit <==
┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ unpause │ nospam-249838 --log_dir /tmp/nospam-249838 unpause │ nospam-249838 │ jenkins │ v1.37.0 │ 29 Dec 25 06:54 UTC │ 29 Dec 25 06:54 UTC │
│ unpause │ nospam-249838 --log_dir /tmp/nospam-249838 unpause │ nospam-249838 │ jenkins │ v1.37.0 │ 29 Dec 25 06:54 UTC │ 29 Dec 25 06:55 UTC │
│ unpause │ nospam-249838 --log_dir /tmp/nospam-249838 unpause │ nospam-249838 │ jenkins │ v1.37.0 │ 29 Dec 25 06:55 UTC │ 29 Dec 25 06:55 UTC │
│ stop │ nospam-249838 --log_dir /tmp/nospam-249838 stop │ nospam-249838 │ jenkins │ v1.37.0 │ 29 Dec 25 06:55 UTC │ 29 Dec 25 06:55 UTC │
│ stop │ nospam-249838 --log_dir /tmp/nospam-249838 stop │ nospam-249838 │ jenkins │ v1.37.0 │ 29 Dec 25 06:55 UTC │ 29 Dec 25 06:55 UTC │
│ stop │ nospam-249838 --log_dir /tmp/nospam-249838 stop │ nospam-249838 │ jenkins │ v1.37.0 │ 29 Dec 25 06:55 UTC │ 29 Dec 25 06:55 UTC │
│ delete │ -p nospam-249838 │ nospam-249838 │ jenkins │ v1.37.0 │ 29 Dec 25 06:55 UTC │ 29 Dec 25 06:55 UTC │
│ start │ -p functional-563786 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2 --container-runtime=containerd │ functional-563786 │ jenkins │ v1.37.0 │ 29 Dec 25 06:55 UTC │ 29 Dec 25 06:56 UTC │
│ start │ -p functional-563786 --alsologtostderr -v=8 │ functional-563786 │ jenkins │ v1.37.0 │ 29 Dec 25 06:56 UTC │ 29 Dec 25 06:57 UTC │
│ cache │ functional-563786 cache add registry.k8s.io/pause:3.1 │ functional-563786 │ jenkins │ v1.37.0 │ 29 Dec 25 06:57 UTC │ 29 Dec 25 06:57 UTC │
│ cache │ functional-563786 cache add registry.k8s.io/pause:3.3 │ functional-563786 │ jenkins │ v1.37.0 │ 29 Dec 25 06:57 UTC │ 29 Dec 25 06:57 UTC │
│ cache │ functional-563786 cache add registry.k8s.io/pause:latest │ functional-563786 │ jenkins │ v1.37.0 │ 29 Dec 25 06:57 UTC │ 29 Dec 25 06:57 UTC │
│ cache │ functional-563786 cache add minikube-local-cache-test:functional-563786 │ functional-563786 │ jenkins │ v1.37.0 │ 29 Dec 25 06:57 UTC │ 29 Dec 25 06:57 UTC │
│ cache │ functional-563786 cache delete minikube-local-cache-test:functional-563786 │ functional-563786 │ jenkins │ v1.37.0 │ 29 Dec 25 06:57 UTC │ 29 Dec 25 06:57 UTC │
│ cache │ delete registry.k8s.io/pause:3.3 │ minikube │ jenkins │ v1.37.0 │ 29 Dec 25 06:57 UTC │ 29 Dec 25 06:57 UTC │
│ cache │ list │ minikube │ jenkins │ v1.37.0 │ 29 Dec 25 06:57 UTC │ 29 Dec 25 06:57 UTC │
│ ssh │ functional-563786 ssh sudo crictl images │ functional-563786 │ jenkins │ v1.37.0 │ 29 Dec 25 06:57 UTC │ 29 Dec 25 06:57 UTC │
│ ssh │ functional-563786 ssh sudo crictl rmi registry.k8s.io/pause:latest │ functional-563786 │ jenkins │ v1.37.0 │ 29 Dec 25 06:57 UTC │ 29 Dec 25 06:57 UTC │
│ ssh │ functional-563786 ssh sudo crictl inspecti registry.k8s.io/pause:latest │ functional-563786 │ jenkins │ v1.37.0 │ 29 Dec 25 06:57 UTC │ │
│ cache │ functional-563786 cache reload │ functional-563786 │ jenkins │ v1.37.0 │ 29 Dec 25 06:57 UTC │ 29 Dec 25 06:57 UTC │
│ ssh │ functional-563786 ssh sudo crictl inspecti registry.k8s.io/pause:latest │ functional-563786 │ jenkins │ v1.37.0 │ 29 Dec 25 06:57 UTC │ 29 Dec 25 06:57 UTC │
│ cache │ delete registry.k8s.io/pause:3.1 │ minikube │ jenkins │ v1.37.0 │ 29 Dec 25 06:57 UTC │ 29 Dec 25 06:57 UTC │
│ cache │ delete registry.k8s.io/pause:latest │ minikube │ jenkins │ v1.37.0 │ 29 Dec 25 06:57 UTC │ 29 Dec 25 06:57 UTC │
│ kubectl │ functional-563786 kubectl -- --context functional-563786 get pods │ functional-563786 │ jenkins │ v1.37.0 │ 29 Dec 25 06:57 UTC │ 29 Dec 25 06:57 UTC │
│ start │ -p functional-563786 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all │ functional-563786 │ jenkins │ v1.37.0 │ 29 Dec 25 06:57 UTC │ 29 Dec 25 06:57 UTC │
└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/29 06:57:13
Running on machine: ubuntu-20-agent-5
Binary: Built with gc go1.25.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1229 06:57:13.802353 18214 out.go:360] Setting OutFile to fd 1 ...
I1229 06:57:13.802575 18214 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:57:13.802577 18214 out.go:374] Setting ErrFile to fd 2...
I1229 06:57:13.802580 18214 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:57:13.802785 18214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-9107/.minikube/bin
I1229 06:57:13.803175 18214 out.go:368] Setting JSON to false
I1229 06:57:13.803987 18214 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2375,"bootTime":1766989059,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1229 06:57:13.804032 18214 start.go:143] virtualization: kvm guest
I1229 06:57:13.806231 18214 out.go:179] * [functional-563786] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1229 06:57:13.807525 18214 notify.go:221] Checking for updates...
I1229 06:57:13.807564 18214 out.go:179] - MINIKUBE_LOCATION=22353
I1229 06:57:13.808814 18214 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1229 06:57:13.810101 18214 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22353-9107/kubeconfig
I1229 06:57:13.811256 18214 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-9107/.minikube
I1229 06:57:13.812338 18214 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1229 06:57:13.813383 18214 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1229 06:57:13.814750 18214 config.go:182] Loaded profile config "functional-563786": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1229 06:57:13.814825 18214 driver.go:422] Setting default libvirt URI to qemu:///system
I1229 06:57:13.843606 18214 out.go:179] * Using the kvm2 driver based on existing profile
I1229 06:57:13.844581 18214 start.go:309] selected driver: kvm2
I1229 06:57:13.844587 18214 start.go:928] validating driver "kvm2" against &{Name:functional-563786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-563786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.101 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1229 06:57:13.844674 18214 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1229 06:57:13.845853 18214 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1229 06:57:13.845875 18214 cni.go:84] Creating CNI manager for ""
I1229 06:57:13.845925 18214 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I1229 06:57:13.845963 18214 start.go:353] cluster config:
{Name:functional-563786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-563786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.101 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1229 06:57:13.846052 18214 iso.go:125] acquiring lock: {Name:mkbbcc01ea6e9108e8e2cdf4095c79ee51a414a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1229 06:57:13.847752 18214 out.go:179] * Starting "functional-563786" primary control-plane node in "functional-563786" cluster
I1229 06:57:13.848565 18214 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1229 06:57:13.848607 18214 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-9107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-amd64.tar.lz4
I1229 06:57:13.848612 18214 cache.go:65] Caching tarball of preloaded images
I1229 06:57:13.848672 18214 preload.go:251] Found /home/jenkins/minikube-integration/22353-9107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I1229 06:57:13.848678 18214 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
I1229 06:57:13.848756 18214 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-9107/.minikube/profiles/functional-563786/config.json ...
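The cluster config dumped above is persisted to the profile's config.json at this step. For reference, a small sketch for reading a few of those fields back; the struct is a partial, assumed mirror of the field names shown in the dump (the real schema carries many more fields), and the path is the one logged above.

package main

import (
    "encoding/json"
    "fmt"
    "os"
)

// ClusterConfig is a deliberately partial view of the profile config.
// Field names are taken from the cluster-config dump above; json.Unmarshal
// simply ignores everything not listed here.
type ClusterConfig struct {
    Name             string
    Driver           string
    Memory           int
    CPUs             int
    KubernetesConfig struct {
        KubernetesVersion string
        ClusterName       string
        ContainerRuntime  string
    }
    Nodes []struct {
        IP           string
        Port         int
        ControlPlane bool
    }
}

func main() {
    data, err := os.ReadFile("/home/jenkins/minikube-integration/22353-9107/.minikube/profiles/functional-563786/config.json")
    if err != nil {
        panic(err)
    }
    var cc ClusterConfig
    if err := json.Unmarshal(data, &cc); err != nil {
        panic(err)
    }
    fmt.Printf("%s: driver=%s k8s=%s runtime=%s nodes=%d\n",
        cc.Name, cc.Driver, cc.KubernetesConfig.KubernetesVersion,
        cc.KubernetesConfig.ContainerRuntime, len(cc.Nodes))
}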
I1229 06:57:13.849000 18214 start.go:360] acquireMachinesLock for functional-563786: {Name:mk482cacf3f35b1e5935f1af5857e770a5cd8714 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1229 06:57:13.849049 18214 start.go:364] duration metric: took 34.652µs to acquireMachinesLock for "functional-563786"
I1229 06:57:13.849067 18214 start.go:96] Skipping create...Using existing machine configuration
I1229 06:57:13.849071 18214 fix.go:54] fixHost starting:
I1229 06:57:13.850932 18214 fix.go:112] recreateIfNeeded on functional-563786: state=Running err=<nil>
W1229 06:57:13.850942 18214 fix.go:138] unexpected machine state, will restart: <nil>
I1229 06:57:13.852217 18214 out.go:252] * Updating the running kvm2 "functional-563786" VM ...
I1229 06:57:13.852230 18214 machine.go:94] provisionDockerMachine start ...
I1229 06:57:13.854212 18214 main.go:144] libmachine: domain functional-563786 has defined MAC address 52:54:00:7f:43:99 in network mk-functional-563786
I1229 06:57:13.854557 18214 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7f:43:99", ip: ""} in network mk-functional-563786: {Iface:virbr1 ExpiryTime:2025-12-29 07:55:20 +0000 UTC Type:0 Mac:52:54:00:7f:43:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:functional-563786 Clientid:01:52:54:00:7f:43:99}
I1229 06:57:13.854579 18214 main.go:144] libmachine: domain functional-563786 has defined IP address 192.168.39.101 and MAC address 52:54:00:7f:43:99 in network mk-functional-563786
I1229 06:57:13.854710 18214 main.go:144] libmachine: Using SSH client type: native
I1229 06:57:13.854879 18214 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.101 22 <nil> <nil>}
I1229 06:57:13.854883 18214 main.go:144] libmachine: About to run SSH command:
hostname
I1229 06:57:13.967472 18214 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-563786
I1229 06:57:13.967487 18214 buildroot.go:166] provisioning hostname "functional-563786"
I1229 06:57:13.970476 18214 main.go:144] libmachine: domain functional-563786 has defined MAC address 52:54:00:7f:43:99 in network mk-functional-563786
I1229 06:57:13.970843 18214 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7f:43:99", ip: ""} in network mk-functional-563786: {Iface:virbr1 ExpiryTime:2025-12-29 07:55:20 +0000 UTC Type:0 Mac:52:54:00:7f:43:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:functional-563786 Clientid:01:52:54:00:7f:43:99}
I1229 06:57:13.970866 18214 main.go:144] libmachine: domain functional-563786 has defined IP address 192.168.39.101 and MAC address 52:54:00:7f:43:99 in network mk-functional-563786
I1229 06:57:13.971058 18214 main.go:144] libmachine: Using SSH client type: native
I1229 06:57:13.971329 18214 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.101 22 <nil> <nil>}
I1229 06:57:13.971339 18214 main.go:144] libmachine: About to run SSH command:
sudo hostname functional-563786 && echo "functional-563786" | sudo tee /etc/hostname
I1229 06:57:14.101950 18214 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-563786
I1229 06:57:14.104829 18214 main.go:144] libmachine: domain functional-563786 has defined MAC address 52:54:00:7f:43:99 in network mk-functional-563786
I1229 06:57:14.105139 18214 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7f:43:99", ip: ""} in network mk-functional-563786: {Iface:virbr1 ExpiryTime:2025-12-29 07:55:20 +0000 UTC Type:0 Mac:52:54:00:7f:43:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:functional-563786 Clientid:01:52:54:00:7f:43:99}
I1229 06:57:14.105154 18214 main.go:144] libmachine: domain functional-563786 has defined IP address 192.168.39.101 and MAC address 52:54:00:7f:43:99 in network mk-functional-563786
I1229 06:57:14.105310 18214 main.go:144] libmachine: Using SSH client type: native
I1229 06:57:14.105488 18214 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.101 22 <nil> <nil>}
I1229 06:57:14.105497 18214 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\sfunctional-563786' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-563786/g' /etc/hosts;
else
echo '127.0.1.1 functional-563786' | sudo tee -a /etc/hosts;
fi
fi
I1229 06:57:14.218074 18214 main.go:144] libmachine: SSH cmd err, output: <nil>:
I1229 06:57:14.218087 18214 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22353-9107/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-9107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-9107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-9107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-9107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-9107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-9107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-9107/.minikube}
I1229 06:57:14.218114 18214 buildroot.go:174] setting up certificates
I1229 06:57:14.218127 18214 provision.go:84] configureAuth start
I1229 06:57:14.221030 18214 main.go:144] libmachine: domain functional-563786 has defined MAC address 52:54:00:7f:43:99 in network mk-functional-563786
I1229 06:57:14.221446 18214 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7f:43:99", ip: ""} in network mk-functional-563786: {Iface:virbr1 ExpiryTime:2025-12-29 07:55:20 +0000 UTC Type:0 Mac:52:54:00:7f:43:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:functional-563786 Clientid:01:52:54:00:7f:43:99}
I1229 06:57:14.221467 18214 main.go:144] libmachine: domain functional-563786 has defined IP address 192.168.39.101 and MAC address 52:54:00:7f:43:99 in network mk-functional-563786
I1229 06:57:14.223773 18214 main.go:144] libmachine: domain functional-563786 has defined MAC address 52:54:00:7f:43:99 in network mk-functional-563786
I1229 06:57:14.224082 18214 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7f:43:99", ip: ""} in network mk-functional-563786: {Iface:virbr1 ExpiryTime:2025-12-29 07:55:20 +0000 UTC Type:0 Mac:52:54:00:7f:43:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:functional-563786 Clientid:01:52:54:00:7f:43:99}
I1229 06:57:14.224097 18214 main.go:144] libmachine: domain functional-563786 has defined IP address 192.168.39.101 and MAC address 52:54:00:7f:43:99 in network mk-functional-563786
I1229 06:57:14.224246 18214 provision.go:143] copyHostCerts
I1229 06:57:14.224289 18214 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9107/.minikube/ca.pem, removing ...
I1229 06:57:14.224299 18214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9107/.minikube/ca.pem
I1229 06:57:14.224370 18214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-9107/.minikube/ca.pem (1078 bytes)
I1229 06:57:14.224472 18214 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9107/.minikube/cert.pem, removing ...
I1229 06:57:14.224476 18214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9107/.minikube/cert.pem
I1229 06:57:14.224502 18214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-9107/.minikube/cert.pem (1123 bytes)
I1229 06:57:14.224562 18214 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-9107/.minikube/key.pem, removing ...
I1229 06:57:14.224565 18214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-9107/.minikube/key.pem
I1229 06:57:14.224586 18214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-9107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-9107/.minikube/key.pem (1675 bytes)
I1229 06:57:14.224641 18214 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-9107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-9107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-9107/.minikube/certs/ca-key.pem org=jenkins.functional-563786 san=[127.0.0.1 192.168.39.101 functional-563786 localhost minikube]
I1229 06:57:14.318961 18214 provision.go:177] copyRemoteCerts
I1229 06:57:14.319019 18214 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1229 06:57:14.321490 18214 main.go:144] libmachine: domain functional-563786 has defined MAC address 52:54:00:7f:43:99 in network mk-functional-563786
I1229 06:57:14.321811 18214 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7f:43:99", ip: ""} in network mk-functional-563786: {Iface:virbr1 ExpiryTime:2025-12-29 07:55:20 +0000 UTC Type:0 Mac:52:54:00:7f:43:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:functional-563786 Clientid:01:52:54:00:7f:43:99}
I1229 06:57:14.321825 18214 main.go:144] libmachine: domain functional-563786 has defined IP address 192.168.39.101 and MAC address 52:54:00:7f:43:99 in network mk-functional-563786
I1229 06:57:14.321941 18214 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9107/.minikube/machines/functional-563786/id_rsa Username:docker}
I1229 06:57:14.410752 18214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1229 06:57:14.439400 18214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9107/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1229 06:57:14.467763 18214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1229 06:57:14.499845 18214 provision.go:87] duration metric: took 281.699339ms to configureAuth
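configureAuth regenerated the docker-machine server certificate with the SAN set logged at provision.go:117 and copied it to /etc/docker on the guest. A quick sketch for confirming those SANs on the local copy of server.pem (paths as in the scp lines above); this is an illustration, not minikube's own verification step.

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
)

func main() {
    // Local copy of the cert that the scp above pushes to /etc/docker/server.pem.
    data, err := os.ReadFile("/home/jenkins/minikube-integration/22353-9107/.minikube/machines/server.pem")
    if err != nil {
        panic(err)
    }
    block, _ := pem.Decode(data)
    if block == nil {
        panic("no PEM block found")
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        panic(err)
    }
    // Expect the SANs from provision.go:117: 127.0.0.1, 192.168.39.101,
    // functional-563786, localhost, minikube.
    fmt.Println("DNS SANs:", cert.DNSNames)
    fmt.Println("IP SANs: ", cert.IPAddresses)
    fmt.Println("Org:     ", cert.Subject.Organization) // jenkins.functional-563786
}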
I1229 06:57:14.499860 18214 buildroot.go:189] setting minikube options for container-runtime
I1229 06:57:14.500017 18214 config.go:182] Loaded profile config "functional-563786": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1229 06:57:14.500023 18214 machine.go:97] duration metric: took 647.78912ms to provisionDockerMachine
I1229 06:57:14.500029 18214 start.go:293] postStartSetup for "functional-563786" (driver="kvm2")
I1229 06:57:14.500045 18214 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1229 06:57:14.500082 18214 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1229 06:57:14.502969 18214 main.go:144] libmachine: domain functional-563786 has defined MAC address 52:54:00:7f:43:99 in network mk-functional-563786
I1229 06:57:14.503301 18214 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7f:43:99", ip: ""} in network mk-functional-563786: {Iface:virbr1 ExpiryTime:2025-12-29 07:55:20 +0000 UTC Type:0 Mac:52:54:00:7f:43:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:functional-563786 Clientid:01:52:54:00:7f:43:99}
I1229 06:57:14.503323 18214 main.go:144] libmachine: domain functional-563786 has defined IP address 192.168.39.101 and MAC address 52:54:00:7f:43:99 in network mk-functional-563786
I1229 06:57:14.503450 18214 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9107/.minikube/machines/functional-563786/id_rsa Username:docker}
I1229 06:57:14.601052 18214 ssh_runner.go:195] Run: cat /etc/os-release
I1229 06:57:14.607277 18214 info.go:137] Remote host: Buildroot 2025.02
I1229 06:57:14.607288 18214 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9107/.minikube/addons for local assets ...
I1229 06:57:14.607344 18214 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-9107/.minikube/files for local assets ...
I1229 06:57:14.607411 18214 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9107/.minikube/files/etc/ssl/certs/130812.pem -> 130812.pem in /etc/ssl/certs
I1229 06:57:14.607474 18214 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-9107/.minikube/files/etc/test/nested/copy/13081/hosts -> hosts in /etc/test/nested/copy/13081
I1229 06:57:14.607507 18214 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/13081
I1229 06:57:14.619917 18214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9107/.minikube/files/etc/ssl/certs/130812.pem --> /etc/ssl/certs/130812.pem (1708 bytes)
I1229 06:57:14.649655 18214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9107/.minikube/files/etc/test/nested/copy/13081/hosts --> /etc/test/nested/copy/13081/hosts (40 bytes)
I1229 06:57:14.679167 18214 start.go:296] duration metric: took 179.126095ms for postStartSetup
I1229 06:57:14.679203 18214 fix.go:56] duration metric: took 830.132643ms for fixHost
I1229 06:57:14.681828 18214 main.go:144] libmachine: domain functional-563786 has defined MAC address 52:54:00:7f:43:99 in network mk-functional-563786
I1229 06:57:14.682127 18214 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7f:43:99", ip: ""} in network mk-functional-563786: {Iface:virbr1 ExpiryTime:2025-12-29 07:55:20 +0000 UTC Type:0 Mac:52:54:00:7f:43:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:functional-563786 Clientid:01:52:54:00:7f:43:99}
I1229 06:57:14.682138 18214 main.go:144] libmachine: domain functional-563786 has defined IP address 192.168.39.101 and MAC address 52:54:00:7f:43:99 in network mk-functional-563786
I1229 06:57:14.682290 18214 main.go:144] libmachine: Using SSH client type: native
I1229 06:57:14.682477 18214 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e300] 0x850fa0 <nil> [] 0s} 192.168.39.101 22 <nil> <nil>}
I1229 06:57:14.682481 18214 main.go:144] libmachine: About to run SSH command:
date +%s.%N
I1229 06:57:14.795895 18214 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766991434.790395962
I1229 06:57:14.795904 18214 fix.go:216] guest clock: 1766991434.790395962
I1229 06:57:14.795910 18214 fix.go:229] Guest: 2025-12-29 06:57:14.790395962 +0000 UTC Remote: 2025-12-29 06:57:14.679205688 +0000 UTC m=+0.920972888 (delta=111.190274ms)
I1229 06:57:14.795921 18214 fix.go:200] guest clock delta is within tolerance: 111.190274ms
I1229 06:57:14.795925 18214 start.go:83] releasing machines lock for "functional-563786", held for 946.870371ms
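fix.go compares the guest clock (read over SSH with `date +%s.%N`) against the host time and accepts the 111ms delta as within tolerance. A toy reproduction of that arithmetic using the two timestamps logged above; the 2s tolerance is an assumption for illustration, not the constant minikube actually uses.

package main

import (
    "fmt"
    "time"
)

// clockDeltaOK reports the absolute guest/host clock skew and whether it
// stays under the given tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    delta := guest.Sub(host)
    if delta < 0 {
        delta = -delta
    }
    return delta, delta <= tolerance
}

func main() {
    guest := time.Unix(1766991434, 790395962)                       // `date +%s.%N` output above
    host := time.Date(2025, 12, 29, 6, 57, 14, 679205688, time.UTC) // "Remote" timestamp above
    delta, ok := clockDeltaOK(guest, host, 2*time.Second)
    fmt.Printf("delta=%s within tolerance: %v\n", delta, ok) // delta=111.190274ms within tolerance: true
}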
I1229 06:57:14.798842 18214 main.go:144] libmachine: domain functional-563786 has defined MAC address 52:54:00:7f:43:99 in network mk-functional-563786
I1229 06:57:14.799260 18214 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7f:43:99", ip: ""} in network mk-functional-563786: {Iface:virbr1 ExpiryTime:2025-12-29 07:55:20 +0000 UTC Type:0 Mac:52:54:00:7f:43:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:functional-563786 Clientid:01:52:54:00:7f:43:99}
I1229 06:57:14.799293 18214 main.go:144] libmachine: domain functional-563786 has defined IP address 192.168.39.101 and MAC address 52:54:00:7f:43:99 in network mk-functional-563786
I1229 06:57:14.799941 18214 ssh_runner.go:195] Run: cat /version.json
I1229 06:57:14.800030 18214 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1229 06:57:14.803486 18214 main.go:144] libmachine: domain functional-563786 has defined MAC address 52:54:00:7f:43:99 in network mk-functional-563786
I1229 06:57:14.803534 18214 main.go:144] libmachine: domain functional-563786 has defined MAC address 52:54:00:7f:43:99 in network mk-functional-563786
I1229 06:57:14.803888 18214 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7f:43:99", ip: ""} in network mk-functional-563786: {Iface:virbr1 ExpiryTime:2025-12-29 07:55:20 +0000 UTC Type:0 Mac:52:54:00:7f:43:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:functional-563786 Clientid:01:52:54:00:7f:43:99}
I1229 06:57:14.803909 18214 main.go:144] libmachine: domain functional-563786 has defined IP address 192.168.39.101 and MAC address 52:54:00:7f:43:99 in network mk-functional-563786
I1229 06:57:14.803913 18214 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7f:43:99", ip: ""} in network mk-functional-563786: {Iface:virbr1 ExpiryTime:2025-12-29 07:55:20 +0000 UTC Type:0 Mac:52:54:00:7f:43:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:functional-563786 Clientid:01:52:54:00:7f:43:99}
I1229 06:57:14.803931 18214 main.go:144] libmachine: domain functional-563786 has defined IP address 192.168.39.101 and MAC address 52:54:00:7f:43:99 in network mk-functional-563786
I1229 06:57:14.804088 18214 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9107/.minikube/machines/functional-563786/id_rsa Username:docker}
I1229 06:57:14.804092 18214 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9107/.minikube/machines/functional-563786/id_rsa Username:docker}
I1229 06:57:14.887476 18214 ssh_runner.go:195] Run: systemctl --version
I1229 06:57:14.915071 18214 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1229 06:57:14.921111 18214 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1229 06:57:14.921161 18214 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1229 06:57:14.931759 18214 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I1229 06:57:14.931768 18214 start.go:496] detecting cgroup driver to use...
I1229 06:57:14.931787 18214 start.go:519] Kubernetes 1.35.0+ detected, using "systemd" cgroup driver
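start.go:519 picks the cgroup driver from the Kubernetes version. A sketch of that kind of version gate using golang.org/x/mod/semver; illustrative only, since the real decision logic is not shown in this log.

package main

import (
    "fmt"

    "golang.org/x/mod/semver"
)

// cgroupDriverFor mirrors the gate implied by the log line above:
// Kubernetes v1.35.0 and newer get the "systemd" cgroup driver.
func cgroupDriverFor(k8sVersion string) string {
    if semver.Compare(k8sVersion, "v1.35.0") >= 0 {
        return "systemd"
    }
    return "cgroupfs"
}

func main() {
    fmt.Println(cgroupDriverFor("v1.35.0")) // systemd
    fmt.Println(cgroupDriverFor("v1.28.3")) // cgroupfs
}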
I1229 06:57:14.931830 18214 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1229 06:57:14.949191 18214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1229 06:57:14.964373 18214 docker.go:218] disabling cri-docker service (if available) ...
I1229 06:57:14.964418 18214 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1229 06:57:14.983770 18214 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1229 06:57:14.998727 18214 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1229 06:57:15.200888 18214 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1229 06:57:15.391829 18214 docker.go:234] disabling docker service ...
I1229 06:57:15.391889 18214 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1229 06:57:15.419109 18214 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1229 06:57:15.433785 18214 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1229 06:57:15.650731 18214 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1229 06:57:15.849475 18214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1229 06:57:15.867846 18214 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1229 06:57:15.894372 18214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1229 06:57:15.906935 18214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1229 06:57:15.918614 18214 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
I1229 06:57:15.918664 18214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1229 06:57:15.930261 18214 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1229 06:57:15.941770 18214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1229 06:57:15.955149 18214 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1229 06:57:15.967116 18214 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1229 06:57:15.979285 18214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1229 06:57:15.991451 18214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1229 06:57:16.002962 18214 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1229 06:57:16.015211 18214 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1229 06:57:16.026709 18214 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1229 06:57:16.038779 18214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1229 06:57:16.227511 18214 ssh_runner.go:195] Run: sudo systemctl restart containerd
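The block above rewrites /etc/containerd/config.toml in place with sed (sandbox image, SystemdCgroup, runtime type, CNI conf_dir, unprivileged ports) and then restarts containerd. For reference, a sketch of the SystemdCgroup rewrite as the same kind of regex applied to an in-memory copy of the file; the TOML fragment is a toy stand-in, not the full config.

package main

import (
    "fmt"
    "regexp"
)

func main() {
    // Toy stand-in for a fragment of /etc/containerd/config.toml.
    configToml := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
`
    // Flip every SystemdCgroup line to true while keeping its indentation,
    // like the sed command logged at 06:57:15.918664.
    re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    fmt.Print(re.ReplaceAllString(configToml, "${1}SystemdCgroup = true"))
}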
I1229 06:57:16.268857 18214 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
I1229 06:57:16.268927 18214 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1229 06:57:16.275433 18214 retry.go:84] will retry after 1.4s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I1229 06:57:17.641971 18214 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
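After restarting containerd, start.go:553 waits up to 60s for the socket path to reappear, retrying the stat (one 1.4s retry was needed here before the second stat succeeded). A self-contained sketch of that wait loop; the 500ms poll interval and messages are illustrative, not minikube's.

package main

import (
    "fmt"
    "os"
    "time"
)

// waitForSocket polls until path exists or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        if _, err := os.Stat(path); err == nil {
            return nil
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
    if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println("containerd socket is present")
}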
I1229 06:57:17.649031 18214 start.go:574] Will wait 60s for crictl version
I1229 06:57:17.649074 18214 ssh_runner.go:195] Run: which crictl
I1229 06:57:17.655242 18214 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1229 06:57:17.694810 18214 start.go:590] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.1
RuntimeApiVersion: v1
I1229 06:57:17.694876 18214 ssh_runner.go:195] Run: containerd --version
I1229 06:57:17.717790 18214 ssh_runner.go:195] Run: containerd --version
I1229 06:57:17.740604 18214 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
I1229 06:57:17.744500 18214 main.go:144] libmachine: domain functional-563786 has defined MAC address 52:54:00:7f:43:99 in network mk-functional-563786
I1229 06:57:17.744874 18214 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7f:43:99", ip: ""} in network mk-functional-563786: {Iface:virbr1 ExpiryTime:2025-12-29 07:55:20 +0000 UTC Type:0 Mac:52:54:00:7f:43:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:functional-563786 Clientid:01:52:54:00:7f:43:99}
I1229 06:57:17.744892 18214 main.go:144] libmachine: domain functional-563786 has defined IP address 192.168.39.101 and MAC address 52:54:00:7f:43:99 in network mk-functional-563786
I1229 06:57:17.745089 18214 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1229 06:57:17.751256 18214 out.go:179] - apiserver.enable-admission-plugins=NamespaceAutoProvision
I1229 06:57:17.752169 18214 kubeadm.go:884] updating cluster {Name:functional-563786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-563786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.101 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
I1229 06:57:17.752272 18214 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1229 06:57:17.752319 18214 ssh_runner.go:195] Run: sudo crictl images --output json
I1229 06:57:17.785339 18214 containerd.go:635] all images are preloaded for containerd runtime.
I1229 06:57:17.785351 18214 containerd.go:542] Images already preloaded, skipping extraction
I1229 06:57:17.785413 18214 ssh_runner.go:195] Run: sudo crictl images --output json
I1229 06:57:17.815348 18214 containerd.go:635] all images are preloaded for containerd runtime.
I1229 06:57:17.815359 18214 cache_images.go:86] Images are preloaded, skipping loading
I1229 06:57:17.815366 18214 kubeadm.go:935] updating node { 192.168.39.101 8441 v1.35.0 containerd true true} ...
I1229 06:57:17.815487 18214 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-563786 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.101
[Install]
config:
{KubernetesVersion:v1.35.0 ClusterName:functional-563786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1229 06:57:17.815549 18214 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
I1229 06:57:17.856531 18214 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
I1229 06:57:17.856546 18214 cni.go:84] Creating CNI manager for ""
I1229 06:57:17.856555 18214 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I1229 06:57:17.856562 18214 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1229 06:57:17.856581 18214 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.101 APIServerPort:8441 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-563786 NodeName:functional-563786 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.101"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.101 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1229 06:57:17.856672 18214 kubeadm.go:203] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.101
bindPort: 8441
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "functional-563786"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.39.101"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.101"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceAutoProvision"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8441
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1229 06:57:17.856726 18214 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
I1229 06:57:17.868753 18214 binaries.go:51] Found k8s binaries, skipping transfer
I1229 06:57:17.868806 18214 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1229 06:57:17.883430 18214 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
I1229 06:57:17.905647 18214 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1229 06:57:17.925479 18214 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2085 bytes)
I1229 06:57:17.944840 18214 ssh_runner.go:195] Run: grep 192.168.39.101 control-plane.minikube.internal$ /etc/hosts
I1229 06:57:17.949870 18214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1229 06:57:18.137610 18214 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1229 06:57:18.153440 18214 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-9107/.minikube/profiles/functional-563786 for IP: 192.168.39.101
I1229 06:57:18.153451 18214 certs.go:195] generating shared ca certs ...
I1229 06:57:18.153468 18214 certs.go:227] acquiring lock for ca certs: {Name:mk85958246bde073b050e737b26a3d4cdddcda12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 06:57:18.153590 18214 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-9107/.minikube/ca.key
I1229 06:57:18.153632 18214 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-9107/.minikube/proxy-client-ca.key
I1229 06:57:18.153638 18214 certs.go:257] generating profile certs ...
I1229 06:57:18.153708 18214 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22353-9107/.minikube/profiles/functional-563786/client.key
I1229 06:57:18.153748 18214 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22353-9107/.minikube/profiles/functional-563786/apiserver.key.b4ea3171
I1229 06:57:18.153777 18214 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22353-9107/.minikube/profiles/functional-563786/proxy-client.key
I1229 06:57:18.153876 18214 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9107/.minikube/certs/13081.pem (1338 bytes)
W1229 06:57:18.153907 18214 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-9107/.minikube/certs/13081_empty.pem, impossibly tiny 0 bytes
I1229 06:57:18.153913 18214 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9107/.minikube/certs/ca-key.pem (1679 bytes)
I1229 06:57:18.153943 18214 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9107/.minikube/certs/ca.pem (1078 bytes)
I1229 06:57:18.153968 18214 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9107/.minikube/certs/cert.pem (1123 bytes)
I1229 06:57:18.153988 18214 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9107/.minikube/certs/key.pem (1675 bytes)
I1229 06:57:18.154022 18214 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-9107/.minikube/files/etc/ssl/certs/130812.pem (1708 bytes)
I1229 06:57:18.154591 18214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1229 06:57:18.186548 18214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1229 06:57:18.215915 18214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1229 06:57:18.246797 18214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1229 06:57:18.275848 18214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9107/.minikube/profiles/functional-563786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1229 06:57:18.304590 18214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9107/.minikube/profiles/functional-563786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1229 06:57:18.332736 18214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9107/.minikube/profiles/functional-563786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1229 06:57:18.362103 18214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9107/.minikube/profiles/functional-563786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1229 06:57:18.389754 18214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9107/.minikube/files/etc/ssl/certs/130812.pem --> /usr/share/ca-certificates/130812.pem (1708 bytes)
I1229 06:57:18.418885 18214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1229 06:57:18.447408 18214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-9107/.minikube/certs/13081.pem --> /usr/share/ca-certificates/13081.pem (1338 bytes)
I1229 06:57:18.476534 18214 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
I1229 06:57:18.497051 18214 ssh_runner.go:195] Run: openssl version
I1229 06:57:18.503147 18214 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1229 06:57:18.514547 18214 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1229 06:57:18.526444 18214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1229 06:57:18.531127 18214 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:47 /usr/share/ca-certificates/minikubeCA.pem
I1229 06:57:18.531164 18214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1229 06:57:18.538041 18214 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1229 06:57:18.548869 18214 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13081.pem
I1229 06:57:18.559758 18214 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13081.pem /etc/ssl/certs/13081.pem
I1229 06:57:18.571221 18214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13081.pem
I1229 06:57:18.576500 18214 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:55 /usr/share/ca-certificates/13081.pem
I1229 06:57:18.576536 18214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13081.pem
I1229 06:57:18.583374 18214 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1229 06:57:18.594354 18214 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/130812.pem
I1229 06:57:18.605211 18214 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/130812.pem /etc/ssl/certs/130812.pem
I1229 06:57:18.618802 18214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130812.pem
I1229 06:57:18.623643 18214 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:55 /usr/share/ca-certificates/130812.pem
I1229 06:57:18.623683 18214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130812.pem
I1229 06:57:18.630845 18214 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1229 06:57:18.641371 18214 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1229 06:57:18.646248 18214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I1229 06:57:18.653076 18214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I1229 06:57:18.660785 18214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I1229 06:57:18.667945 18214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I1229 06:57:18.674775 18214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I1229 06:57:18.681171 18214 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
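The six openssl invocations above are freshness checks: `openssl x509 -checkend 86400` exits non-zero if the certificate expires within the next 24 hours. An equivalent check in Go, shown for one of the certificate paths above (run it where that file is readable, or point it at any local PEM).

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
    "time"
)

func main() {
    // Same idea as `openssl x509 -checkend 86400` on the apiserver cert above.
    data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
    if err != nil {
        panic(err)
    }
    block, _ := pem.Decode(data)
    if block == nil {
        panic("no PEM block in file")
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        panic(err)
    }
    if time.Until(cert.NotAfter) < 24*time.Hour {
        fmt.Println("certificate will expire within 86400 seconds")
        os.Exit(1)
    }
    fmt.Println("certificate valid for at least another 24h, NotAfter:", cert.NotAfter)
}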
I1229 06:57:18.687931 18214 kubeadm.go:401] StartCluster: {Name:functional-563786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22353/minikube-v1.37.0-1766979747-22353-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-563786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.101 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
I1229 06:57:18.688001 18214 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1229 06:57:18.688065 18214 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1229 06:57:18.720115 18214 cri.go:96] found id: "40b10d9048e9ac31e9700a8a80054e2bcb69946e913e151aace1fb05fd74cbe4"
I1229 06:57:18.720126 18214 cri.go:96] found id: "e2ce86e6a7ef4c9f2233f28f1d638cc870bd3ca2322c249a9499dce7c70eae60"
I1229 06:57:18.720130 18214 cri.go:96] found id: "dc7bfd1dafa9a77ece7526241359133444f7c9beb41bf6ea53bcee94d8e59235"
I1229 06:57:18.720133 18214 cri.go:96] found id: "35bb208a89dc3c1998119107bf3a70c3de26f92873fc010d0f8617bdfce6ef1f"
I1229 06:57:18.720138 18214 cri.go:96] found id: "3e35205395ee22ddc89be83a1535850e26e1157ae8e1042162c6305c1c7c8549"
I1229 06:57:18.720142 18214 cri.go:96] found id: "dab808763457f9111fd3c2dc04e428e5a9b222cab0172e8c82c685c135a8cc06"
I1229 06:57:18.720144 18214 cri.go:96] found id: "61ddb0d9c54e84133c945449c9377bee5e07ca9873a34ad8edb72c3401c91dac"
I1229 06:57:18.720147 18214 cri.go:96] found id: "bcad6071b4f348eb21726ed97b9f8d1aca6e98395b13491837f1bbdbf4abeb23"
I1229 06:57:18.720149 18214 cri.go:96] found id: "48d9f4b10f96792536396c16281d21fbd79db5b5ecd411648cd1471dccac3a68"
I1229 06:57:18.720156 18214 cri.go:96] found id: "006414001ceef4c4e508c64f65298451e14ba77612373a997d3e0060d35c9997"
I1229 06:57:18.720159 18214 cri.go:96] found id: "f931ebbd895ec230cc0e8c60382962e9095613156a0f884e631e36d948565d0d"
I1229 06:57:18.720162 18214 cri.go:96] found id: "cd90aef3dcd013dd954da2ee0bdd03a82cf559c67a30b754617c86475d4db308"
I1229 06:57:18.720165 18214 cri.go:96] found id: "e541924894fc85da111ad761573ea8e7a0d45211a1403b37fdd0665546db6530"
I1229 06:57:18.720192 18214 cri.go:96] found id: "6337d68c18ec84d3bd5de0f012b8d346c605036b94c78a6fdc5fd35cd26bd742"
I1229 06:57:18.720195 18214 cri.go:96] found id: ""
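
The crictl call above returns the ID of every kube-system container, running or not; the runc listing that follows is then used to filter those IDs by actual state (at this point minikube is looking for paused containers, so every running entry below gets skipped). The listing can be reproduced on the node with the same command, with and without --quiet; only the sudo -s eval wrapper is dropped:

# IDs only, exactly the set consumed above.
sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
# Human-readable view of the same set (drop --quiet to get names, states and images).
sudo crictl --timeout=10s ps -a --label io.kubernetes.pod.namespace=kube-system
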
I1229 06:57:18.720229 18214 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I1229 06:57:18.749043 18214 cri.go:123] JSON = [{"ociVersion":"1.3.0","id":"10fbb4f71ebd45ab780354627aef995a4bbbdb1f43bcb9f4920c50aa257dfe23","pid":1295,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10fbb4f71ebd45ab780354627aef995a4bbbdb1f43bcb9f4920c50aa257dfe23","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10fbb4f71ebd45ab780354627aef995a4bbbdb1f43bcb9f4920c50aa257dfe23/rootfs","created":"2025-12-29T06:55:31.48385111Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"10fbb4f71ebd45ab780354627aef995a4bbbdb1f43bcb9f4920c50aa257dfe23","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-563786_8eac2ae3621cd23e1820ec119dd0a660","io.kubernetes.cri.sandbox-memor
y":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-563786","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"8eac2ae3621cd23e1820ec119dd0a660"},"owner":"root"},{"ociVersion":"1.3.0","id":"1a5b4e995ef06ea34b68b9ec6216df7bbc888b0af6b90bff636f2389cb283b00","pid":1976,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a5b4e995ef06ea34b68b9ec6216df7bbc888b0af6b90bff636f2389cb283b00","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a5b4e995ef06ea34b68b9ec6216df7bbc888b0af6b90bff636f2389cb283b00/rootfs","created":"2025-12-29T06:55:43.222650299Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"1a5b4e995ef06ea34b68b9ec6216df7bbc888b0af6b90bff636f2389cb283b00","io.kubernetes.
cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-7d764666f9-xhjq7_6219650d-fd31-477c-9dda-a2cef0d5268c","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-7d764666f9-xhjq7","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"6219650d-fd31-477c-9dda-a2cef0d5268c"},"owner":"root"},{"ociVersion":"1.3.0","id":"1d08901b3819b89e4a1d25b543740d603365466ffd74653fa5be74d198ebcb94","pid":2236,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d08901b3819b89e4a1d25b543740d603365466ffd74653fa5be74d198ebcb94","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d08901b3819b89e4a1d25b543740d603365466ffd74653fa5be74d198ebcb94/rootfs","created":"2025-12-29T06:55:44.445000332Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.k
ubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"1d08901b3819b89e4a1d25b543740d603365466ffd74653fa5be74d198ebcb94","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_ecdc8113-cbc5-4e5d-99ca-8beaca3cf1ff","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ecdc8113-cbc5-4e5d-99ca-8beaca3cf1ff"},"owner":"root"},{"ociVersion":"1.3.0","id":"3e35205395ee22ddc89be83a1535850e26e1157ae8e1042162c6305c1c7c8549","pid":3244,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3e35205395ee22ddc89be83a1535850e26e1157ae8e1042162c6305c1c7c8549","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3e35205395ee22ddc89be83a1535850e26e1157ae8e1042162c6305c1c7c8549/rootfs","created":"2025-12-29T06:56:32.370934174Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-typ
e":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.35.0","io.kubernetes.cri.sandbox-id":"4bbfba7f7071d70f16e0e25fa128f29516b20c0f9d2f22ed72764e352abfcee5","io.kubernetes.cri.sandbox-name":"kube-proxy-p249l","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1e1b3654-47a0-4d76-b9e1-406a1865af8d"},"owner":"root"},{"ociVersion":"1.3.0","id":"40b10d9048e9ac31e9700a8a80054e2bcb69946e913e151aace1fb05fd74cbe4","pid":3726,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/40b10d9048e9ac31e9700a8a80054e2bcb69946e913e151aace1fb05fd74cbe4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/40b10d9048e9ac31e9700a8a80054e2bcb69946e913e151aace1fb05fd74cbe4/rootfs","created":"2025-12-29T06:56:50.174002773Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.35.0","io.kubernetes.
cri.sandbox-id":"aa950e01fc809e1312748fddd69b2744c965c6ba8a4153be392ea7cb59611c24","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-563786","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"559713a5edba88d55769bbc4215c2088"},"owner":"root"},{"ociVersion":"1.3.0","id":"43ff17dde92484262fb110f0f80263cf590a4802c3c469ed8e9c54cd32454255","pid":1318,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/43ff17dde92484262fb110f0f80263cf590a4802c3c469ed8e9c54cd32454255","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/43ff17dde92484262fb110f0f80263cf590a4802c3c469ed8e9c54cd32454255/rootfs","created":"2025-12-29T06:55:31.502106906Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-i
d":"43ff17dde92484262fb110f0f80263cf590a4802c3c469ed8e9c54cd32454255","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-563786_3bd99c894112d2d736983c12c82e62c4","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-functional-563786","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"3bd99c894112d2d736983c12c82e62c4"},"owner":"root"},{"ociVersion":"1.3.0","id":"4bbfba7f7071d70f16e0e25fa128f29516b20c0f9d2f22ed72764e352abfcee5","pid":1764,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4bbfba7f7071d70f16e0e25fa128f29516b20c0f9d2f22ed72764e352abfcee5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4bbfba7f7071d70f16e0e25fa128f29516b20c0f9d2f22ed72764e352abfcee5/rootfs","created":"2025-12-29T06:55:42.745853879Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu
-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"4bbfba7f7071d70f16e0e25fa128f29516b20c0f9d2f22ed72764e352abfcee5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-p249l_1e1b3654-47a0-4d76-b9e1-406a1865af8d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-p249l","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1e1b3654-47a0-4d76-b9e1-406a1865af8d"},"owner":"root"},{"ociVersion":"1.3.0","id":"61ddb0d9c54e84133c945449c9377bee5e07ca9873a34ad8edb72c3401c91dac","pid":3211,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/61ddb0d9c54e84133c945449c9377bee5e07ca9873a34ad8edb72c3401c91dac","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/61ddb0d9c54e84133c945449c9377bee5e07ca9873a34ad8edb72c3401c91dac/rootfs","created":"2025-12-29T06:56:32.212610648Z","annotations":{"io.kubernetes.cri.co
ntainer-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.35.0","io.kubernetes.cri.sandbox-id":"787a7be6766e84788625cd2bcecae5ac55db8e1e3b27d126063494ac29dfc733","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-563786","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f1ec435614d6727ca3cb6e374deb5ab0"},"owner":"root"},{"ociVersion":"1.3.0","id":"787a7be6766e84788625cd2bcecae5ac55db8e1e3b27d126063494ac29dfc733","pid":1346,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/787a7be6766e84788625cd2bcecae5ac55db8e1e3b27d126063494ac29dfc733","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/787a7be6766e84788625cd2bcecae5ac55db8e1e3b27d126063494ac29dfc733/rootfs","created":"2025-12-29T06:55:31.588204699Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kub
ernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"787a7be6766e84788625cd2bcecae5ac55db8e1e3b27d126063494ac29dfc733","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-563786_f1ec435614d6727ca3cb6e374deb5ab0","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-563786","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f1ec435614d6727ca3cb6e374deb5ab0"},"owner":"root"},{"ociVersion":"1.3.0","id":"aa950e01fc809e1312748fddd69b2744c965c6ba8a4153be392ea7cb59611c24","pid":1333,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa950e01fc809e1312748fddd69b2744c965c6ba8a4153be392ea7cb59611c24","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa950e01fc809e1312748fddd69b2744c965c6ba8a4153be392ea7cb59611c24/rootfs","created":"2025-12-29T06:55:31
.529676437Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"aa950e01fc809e1312748fddd69b2744c965c6ba8a4153be392ea7cb59611c24","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-563786_559713a5edba88d55769bbc4215c2088","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-563786","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"559713a5edba88d55769bbc4215c2088"},"owner":"root"},{"ociVersion":"1.3.0","id":"bcad6071b4f348eb21726ed97b9f8d1aca6e98395b13491837f1bbdbf4abeb23","pid":3039,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bcad6071b4f348eb21726ed97b9f8d1aca6e98395b13491837f1bbdbf4abe
b23","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bcad6071b4f348eb21726ed97b9f8d1aca6e98395b13491837f1bbdbf4abeb23/rootfs","created":"2025-12-29T06:56:27.117564181Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"1d08901b3819b89e4a1d25b543740d603365466ffd74653fa5be74d198ebcb94","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ecdc8113-cbc5-4e5d-99ca-8beaca3cf1ff"},"owner":"root"},{"ociVersion":"1.3.0","id":"dab808763457f9111fd3c2dc04e428e5a9b222cab0172e8c82c685c135a8cc06","pid":3246,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dab808763457f9111fd3c2dc04e428e5a9b222cab0172e8c82c685c135a8cc06","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dab808763457f9111fd3c2dc04e428e5a9b2
22cab0172e8c82c685c135a8cc06/rootfs","created":"2025-12-29T06:56:32.25056588Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.13.1","io.kubernetes.cri.sandbox-id":"1a5b4e995ef06ea34b68b9ec6216df7bbc888b0af6b90bff636f2389cb283b00","io.kubernetes.cri.sandbox-name":"coredns-7d764666f9-xhjq7","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"6219650d-fd31-477c-9dda-a2cef0d5268c"},"owner":"root"},{"ociVersion":"1.3.0","id":"dc7bfd1dafa9a77ece7526241359133444f7c9beb41bf6ea53bcee94d8e59235","pid":3440,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc7bfd1dafa9a77ece7526241359133444f7c9beb41bf6ea53bcee94d8e59235","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc7bfd1dafa9a77ece7526241359133444f7c9beb41bf6ea53bcee94d8e59235/rootfs","created":"2025-12-29T06:56:39.184270786Z","annotations":{"io.kubernetes.cri.c
ontainer-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.6-0","io.kubernetes.cri.sandbox-id":"43ff17dde92484262fb110f0f80263cf590a4802c3c469ed8e9c54cd32454255","io.kubernetes.cri.sandbox-name":"etcd-functional-563786","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"3bd99c894112d2d736983c12c82e62c4"},"owner":"root"},{"ociVersion":"1.3.0","id":"e2ce86e6a7ef4c9f2233f28f1d638cc870bd3ca2322c249a9499dce7c70eae60","pid":3719,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2ce86e6a7ef4c9f2233f28f1d638cc870bd3ca2322c249a9499dce7c70eae60","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2ce86e6a7ef4c9f2233f28f1d638cc870bd3ca2322c249a9499dce7c70eae60/rootfs","created":"2025-12-29T06:56:50.163171497Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-ap
iserver:v1.35.0","io.kubernetes.cri.sandbox-id":"10fbb4f71ebd45ab780354627aef995a4bbbdb1f43bcb9f4920c50aa257dfe23","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-563786","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"8eac2ae3621cd23e1820ec119dd0a660"},"owner":"root"}]
I1229 06:57:18.749221 18214 cri.go:133] list returned 14 containers
I1229 06:57:18.749228 18214 cri.go:136] container: {ID:10fbb4f71ebd45ab780354627aef995a4bbbdb1f43bcb9f4920c50aa257dfe23 Status:running}
I1229 06:57:18.749239 18214 cri.go:138] skipping 10fbb4f71ebd45ab780354627aef995a4bbbdb1f43bcb9f4920c50aa257dfe23 - not in ps
I1229 06:57:18.749242 18214 cri.go:136] container: {ID:1a5b4e995ef06ea34b68b9ec6216df7bbc888b0af6b90bff636f2389cb283b00 Status:running}
I1229 06:57:18.749245 18214 cri.go:138] skipping 1a5b4e995ef06ea34b68b9ec6216df7bbc888b0af6b90bff636f2389cb283b00 - not in ps
I1229 06:57:18.749247 18214 cri.go:136] container: {ID:1d08901b3819b89e4a1d25b543740d603365466ffd74653fa5be74d198ebcb94 Status:running}
I1229 06:57:18.749250 18214 cri.go:138] skipping 1d08901b3819b89e4a1d25b543740d603365466ffd74653fa5be74d198ebcb94 - not in ps
I1229 06:57:18.749251 18214 cri.go:136] container: {ID:3e35205395ee22ddc89be83a1535850e26e1157ae8e1042162c6305c1c7c8549 Status:running}
I1229 06:57:18.749256 18214 cri.go:142] skipping {3e35205395ee22ddc89be83a1535850e26e1157ae8e1042162c6305c1c7c8549 running}: state = "running", want "paused"
I1229 06:57:18.749262 18214 cri.go:136] container: {ID:40b10d9048e9ac31e9700a8a80054e2bcb69946e913e151aace1fb05fd74cbe4 Status:running}
I1229 06:57:18.749265 18214 cri.go:142] skipping {40b10d9048e9ac31e9700a8a80054e2bcb69946e913e151aace1fb05fd74cbe4 running}: state = "running", want "paused"
I1229 06:57:18.749268 18214 cri.go:136] container: {ID:43ff17dde92484262fb110f0f80263cf590a4802c3c469ed8e9c54cd32454255 Status:running}
I1229 06:57:18.749271 18214 cri.go:138] skipping 43ff17dde92484262fb110f0f80263cf590a4802c3c469ed8e9c54cd32454255 - not in ps
I1229 06:57:18.749274 18214 cri.go:136] container: {ID:4bbfba7f7071d70f16e0e25fa128f29516b20c0f9d2f22ed72764e352abfcee5 Status:running}
I1229 06:57:18.749276 18214 cri.go:138] skipping 4bbfba7f7071d70f16e0e25fa128f29516b20c0f9d2f22ed72764e352abfcee5 - not in ps
I1229 06:57:18.749277 18214 cri.go:136] container: {ID:61ddb0d9c54e84133c945449c9377bee5e07ca9873a34ad8edb72c3401c91dac Status:running}
I1229 06:57:18.749281 18214 cri.go:142] skipping {61ddb0d9c54e84133c945449c9377bee5e07ca9873a34ad8edb72c3401c91dac running}: state = "running", want "paused"
I1229 06:57:18.749284 18214 cri.go:136] container: {ID:787a7be6766e84788625cd2bcecae5ac55db8e1e3b27d126063494ac29dfc733 Status:running}
I1229 06:57:18.749286 18214 cri.go:138] skipping 787a7be6766e84788625cd2bcecae5ac55db8e1e3b27d126063494ac29dfc733 - not in ps
I1229 06:57:18.749289 18214 cri.go:136] container: {ID:aa950e01fc809e1312748fddd69b2744c965c6ba8a4153be392ea7cb59611c24 Status:running}
I1229 06:57:18.749292 18214 cri.go:138] skipping aa950e01fc809e1312748fddd69b2744c965c6ba8a4153be392ea7cb59611c24 - not in ps
I1229 06:57:18.749294 18214 cri.go:136] container: {ID:bcad6071b4f348eb21726ed97b9f8d1aca6e98395b13491837f1bbdbf4abeb23 Status:running}
I1229 06:57:18.749298 18214 cri.go:142] skipping {bcad6071b4f348eb21726ed97b9f8d1aca6e98395b13491837f1bbdbf4abeb23 running}: state = "running", want "paused"
I1229 06:57:18.749301 18214 cri.go:136] container: {ID:dab808763457f9111fd3c2dc04e428e5a9b222cab0172e8c82c685c135a8cc06 Status:running}
I1229 06:57:18.749304 18214 cri.go:142] skipping {dab808763457f9111fd3c2dc04e428e5a9b222cab0172e8c82c685c135a8cc06 running}: state = "running", want "paused"
I1229 06:57:18.749307 18214 cri.go:136] container: {ID:dc7bfd1dafa9a77ece7526241359133444f7c9beb41bf6ea53bcee94d8e59235 Status:running}
I1229 06:57:18.749311 18214 cri.go:142] skipping {dc7bfd1dafa9a77ece7526241359133444f7c9beb41bf6ea53bcee94d8e59235 running}: state = "running", want "paused"
I1229 06:57:18.749314 18214 cri.go:136] container: {ID:e2ce86e6a7ef4c9f2233f28f1d638cc870bd3ca2322c249a9499dce7c70eae60 Status:running}
I1229 06:57:18.749317 18214 cri.go:142] skipping {e2ce86e6a7ef4c9f2233f28f1d638cc870bd3ca2322c249a9499dce7c70eae60 running}: state = "running", want "paused"
I1229 06:57:18.749351 18214 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1229 06:57:18.761692 18214 kubeadm.go:417] found existing configuration files, will attempt cluster restart
I1229 06:57:18.761700 18214 kubeadm.go:598] restartPrimaryControlPlane start ...
I1229 06:57:18.761745 18214 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1229 06:57:18.772907 18214 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1229 06:57:18.773537 18214 kubeconfig.go:125] found "functional-563786" server: "https://192.168.39.101:8441"
I1229 06:57:18.775003 18214 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1229 06:57:18.785200 18214 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml
+++ /var/tmp/minikube/kubeadm.yaml.new
@@ -24,7 +24,7 @@
certSANs: ["127.0.0.1", "localhost", "192.168.39.101"]
extraArgs:
- name: "enable-admission-plugins"
- value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
+ value: "NamespaceAutoProvision"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
-- /stdout --
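
The drift detection is nothing more than the diff -u shown above between the kubeadm config currently on disk and the freshly rendered one; any difference (here the apiserver enable-admission-plugins value, which comes from the ExtraOptions entry visible in the StartCluster dump) makes minikube stop the kube-system containers and replay the kubeadm phases. The check itself can be repeated directly on the node:

# Exit status 0 means no drift; 1 means the control plane will be reconfigured.
sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
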
I1229 06:57:18.785206 18214 kubeadm.go:1161] stopping kube-system containers ...
I1229 06:57:18.785215 18214 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I1229 06:57:18.785247 18214 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1229 06:57:18.820231 18214 cri.go:96] found id: "40b10d9048e9ac31e9700a8a80054e2bcb69946e913e151aace1fb05fd74cbe4"
I1229 06:57:18.820244 18214 cri.go:96] found id: "e2ce86e6a7ef4c9f2233f28f1d638cc870bd3ca2322c249a9499dce7c70eae60"
I1229 06:57:18.820249 18214 cri.go:96] found id: "dc7bfd1dafa9a77ece7526241359133444f7c9beb41bf6ea53bcee94d8e59235"
I1229 06:57:18.820253 18214 cri.go:96] found id: "35bb208a89dc3c1998119107bf3a70c3de26f92873fc010d0f8617bdfce6ef1f"
I1229 06:57:18.820256 18214 cri.go:96] found id: "3e35205395ee22ddc89be83a1535850e26e1157ae8e1042162c6305c1c7c8549"
I1229 06:57:18.820259 18214 cri.go:96] found id: "dab808763457f9111fd3c2dc04e428e5a9b222cab0172e8c82c685c135a8cc06"
I1229 06:57:18.820262 18214 cri.go:96] found id: "61ddb0d9c54e84133c945449c9377bee5e07ca9873a34ad8edb72c3401c91dac"
I1229 06:57:18.820265 18214 cri.go:96] found id: "bcad6071b4f348eb21726ed97b9f8d1aca6e98395b13491837f1bbdbf4abeb23"
I1229 06:57:18.820268 18214 cri.go:96] found id: "48d9f4b10f96792536396c16281d21fbd79db5b5ecd411648cd1471dccac3a68"
I1229 06:57:18.820276 18214 cri.go:96] found id: "006414001ceef4c4e508c64f65298451e14ba77612373a997d3e0060d35c9997"
I1229 06:57:18.820279 18214 cri.go:96] found id: "f931ebbd895ec230cc0e8c60382962e9095613156a0f884e631e36d948565d0d"
I1229 06:57:18.820280 18214 cri.go:96] found id: "cd90aef3dcd013dd954da2ee0bdd03a82cf559c67a30b754617c86475d4db308"
I1229 06:57:18.820282 18214 cri.go:96] found id: "e541924894fc85da111ad761573ea8e7a0d45211a1403b37fdd0665546db6530"
I1229 06:57:18.820284 18214 cri.go:96] found id: "6337d68c18ec84d3bd5de0f012b8d346c605036b94c78a6fdc5fd35cd26bd742"
I1229 06:57:18.820285 18214 cri.go:96] found id: ""
I1229 06:57:18.820289 18214 cri.go:274] Stopping containers: [40b10d9048e9ac31e9700a8a80054e2bcb69946e913e151aace1fb05fd74cbe4 e2ce86e6a7ef4c9f2233f28f1d638cc870bd3ca2322c249a9499dce7c70eae60 dc7bfd1dafa9a77ece7526241359133444f7c9beb41bf6ea53bcee94d8e59235 35bb208a89dc3c1998119107bf3a70c3de26f92873fc010d0f8617bdfce6ef1f 3e35205395ee22ddc89be83a1535850e26e1157ae8e1042162c6305c1c7c8549 dab808763457f9111fd3c2dc04e428e5a9b222cab0172e8c82c685c135a8cc06 61ddb0d9c54e84133c945449c9377bee5e07ca9873a34ad8edb72c3401c91dac bcad6071b4f348eb21726ed97b9f8d1aca6e98395b13491837f1bbdbf4abeb23 48d9f4b10f96792536396c16281d21fbd79db5b5ecd411648cd1471dccac3a68 006414001ceef4c4e508c64f65298451e14ba77612373a997d3e0060d35c9997 f931ebbd895ec230cc0e8c60382962e9095613156a0f884e631e36d948565d0d cd90aef3dcd013dd954da2ee0bdd03a82cf559c67a30b754617c86475d4db308 e541924894fc85da111ad761573ea8e7a0d45211a1403b37fdd0665546db6530 6337d68c18ec84d3bd5de0f012b8d346c605036b94c78a6fdc5fd35cd26bd742]
I1229 06:57:18.820332 18214 ssh_runner.go:195] Run: which crictl
I1229 06:57:18.824729 18214 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 40b10d9048e9ac31e9700a8a80054e2bcb69946e913e151aace1fb05fd74cbe4 e2ce86e6a7ef4c9f2233f28f1d638cc870bd3ca2322c249a9499dce7c70eae60 dc7bfd1dafa9a77ece7526241359133444f7c9beb41bf6ea53bcee94d8e59235 35bb208a89dc3c1998119107bf3a70c3de26f92873fc010d0f8617bdfce6ef1f 3e35205395ee22ddc89be83a1535850e26e1157ae8e1042162c6305c1c7c8549 dab808763457f9111fd3c2dc04e428e5a9b222cab0172e8c82c685c135a8cc06 61ddb0d9c54e84133c945449c9377bee5e07ca9873a34ad8edb72c3401c91dac bcad6071b4f348eb21726ed97b9f8d1aca6e98395b13491837f1bbdbf4abeb23 48d9f4b10f96792536396c16281d21fbd79db5b5ecd411648cd1471dccac3a68 006414001ceef4c4e508c64f65298451e14ba77612373a997d3e0060d35c9997 f931ebbd895ec230cc0e8c60382962e9095613156a0f884e631e36d948565d0d cd90aef3dcd013dd954da2ee0bdd03a82cf559c67a30b754617c86475d4db308 e541924894fc85da111ad761573ea8e7a0d45211a1403b37fdd0665546db6530 6337d68c18ec84d3bd5de0f012b8d346c605036b94c78a6fdc5fd35cd26bd742
I1229 06:57:34.329876 18214 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 40b10d9048e9ac31e9700a8a80054e2bcb69946e913e151aace1fb05fd74cbe4 e2ce86e6a7ef4c9f2233f28f1d638cc870bd3ca2322c249a9499dce7c70eae60 dc7bfd1dafa9a77ece7526241359133444f7c9beb41bf6ea53bcee94d8e59235 35bb208a89dc3c1998119107bf3a70c3de26f92873fc010d0f8617bdfce6ef1f 3e35205395ee22ddc89be83a1535850e26e1157ae8e1042162c6305c1c7c8549 dab808763457f9111fd3c2dc04e428e5a9b222cab0172e8c82c685c135a8cc06 61ddb0d9c54e84133c945449c9377bee5e07ca9873a34ad8edb72c3401c91dac bcad6071b4f348eb21726ed97b9f8d1aca6e98395b13491837f1bbdbf4abeb23 48d9f4b10f96792536396c16281d21fbd79db5b5ecd411648cd1471dccac3a68 006414001ceef4c4e508c64f65298451e14ba77612373a997d3e0060d35c9997 f931ebbd895ec230cc0e8c60382962e9095613156a0f884e631e36d948565d0d cd90aef3dcd013dd954da2ee0bdd03a82cf559c67a30b754617c86475d4db308 e541924894fc85da111ad761573ea8e7a0d45211a1403b37fdd0665546db6530 6337d68c18ec84d3bd5de0f012b8d346c605036b94c78a6fdc5fd35cd26bd742: (15.505104696s)
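
All fourteen kube-system containers are stopped in a single crictl call with a 10 s grace period per container; the 15.5 s total suggests at least one of them used most of that grace period before exiting. A compact way to do the same thing, deriving the ID list with the label filter used earlier instead of pasting the IDs:

# Stop every kube-system container, allowing each 10 s to terminate cleanly.
sudo crictl stop --timeout=10 \
  $(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
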
I1229 06:57:34.329939 18214 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I1229 06:57:34.362534 18214 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1229 06:57:34.374678 18214 kubeadm.go:158] found existing configuration files:
-rw------- 1 root root 5631 Dec 29 06:55 /etc/kubernetes/admin.conf
-rw------- 1 root root 5638 Dec 29 06:56 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 5674 Dec 29 06:56 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5586 Dec 29 06:56 /etc/kubernetes/scheduler.conf
I1229 06:57:34.374726 18214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1229 06:57:34.385672 18214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1229 06:57:34.396003 18214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
stdout:
stderr:
I1229 06:57:34.396036 18214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1229 06:57:34.408197 18214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1229 06:57:34.418172 18214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I1229 06:57:34.418221 18214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1229 06:57:34.429032 18214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1229 06:57:34.439119 18214 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I1229 06:57:34.439149 18214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
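
Each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint; admin.conf matches, while kubelet.conf, controller-manager.conf and scheduler.conf do not (grep exits 1), so they are deleted and left for kubeadm to regenerate. A non-destructive sketch of the same check, printing what would be removed instead of removing it (-q and the echo are additions here):

# Report which kubeconfigs do not reference the expected control-plane endpoint.
for conf in admin kubelet controller-manager scheduler; do
  sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/${conf}.conf" \
    || echo "would remove: /etc/kubernetes/${conf}.conf"
done
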
I1229 06:57:34.450064 18214 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1229 06:57:34.461135 18214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I1229 06:57:34.511993 18214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I1229 06:57:34.815545 18214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I1229 06:57:35.080668 18214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I1229 06:57:35.146725 18214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
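
Rather than a full kubeadm init, the restart replays individual phases against the regenerated /var/tmp/minikube/kubeadm.yaml: certificates, kubeconfigs, kubelet start, static control-plane manifests, then local etcd. Stripped of the PATH override that points at the pinned v1.35.0 binaries, the sequence from the log is:

# kubeadm phase sequence used for the control-plane restart.
sudo kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
sudo kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
sudo kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
sudo kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
sudo kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml
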
I1229 06:57:35.210803 18214 api_server.go:52] waiting for apiserver process to appear ...
I1229 06:57:35.210858 18214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1229 06:57:35.711799 18214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1229 06:57:36.211080 18214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1229 06:57:36.243462 18214 api_server.go:72] duration metric: took 1.032669261s to wait for apiserver process to appear ...
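
The process wait is a simple pgrep poll, roughly every 500 ms judging by the timestamps, until a kube-apiserver whose command line references the minikube binaries path shows up. An equivalent loop, written as a sketch rather than a copy of minikube's internal retry logic:

# Poll until the restarted kube-apiserver process exists.
until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
  sleep 0.5
done
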
I1229 06:57:36.243479 18214 api_server.go:88] waiting for apiserver healthz status ...
I1229 06:57:36.243499 18214 api_server.go:299] Checking apiserver healthz at https://192.168.39.101:8441/healthz ...
I1229 06:57:37.429133 18214 api_server.go:325] https://192.168.39.101:8441/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W1229 06:57:37.429152 18214 api_server.go:103] status: https://192.168.39.101:8441/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I1229 06:57:37.429168 18214 api_server.go:299] Checking apiserver healthz at https://192.168.39.101:8441/healthz ...
I1229 06:57:37.471281 18214 api_server.go:325] https://192.168.39.101:8441/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W1229 06:57:37.471298 18214 api_server.go:103] status: https://192.168.39.101:8441/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I1229 06:57:37.743697 18214 api_server.go:299] Checking apiserver healthz at https://192.168.39.101:8441/healthz ...
I1229 06:57:37.748431 18214 api_server.go:325] https://192.168.39.101:8441/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1229 06:57:37.748444 18214 api_server.go:103] status: https://192.168.39.101:8441/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1229 06:57:38.244062 18214 api_server.go:299] Checking apiserver healthz at https://192.168.39.101:8441/healthz ...
I1229 06:57:38.249813 18214 api_server.go:325] https://192.168.39.101:8441/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1229 06:57:38.249830 18214 api_server.go:103] status: https://192.168.39.101:8441/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1229 06:57:38.744543 18214 api_server.go:299] Checking apiserver healthz at https://192.168.39.101:8441/healthz ...
I1229 06:57:38.754006 18214 api_server.go:325] https://192.168.39.101:8441/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1229 06:57:38.754023 18214 api_server.go:103] status: https://192.168.39.101:8441/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1229 06:57:39.243672 18214 api_server.go:299] Checking apiserver healthz at https://192.168.39.101:8441/healthz ...
I1229 06:57:39.252469 18214 api_server.go:325] https://192.168.39.101:8441/healthz returned 200:
ok
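
The healthz probes above tell the usual restart story: the first two attempts come back 403 for system:anonymous, most likely because the RBAC bootstrap roles that expose /healthz to unauthenticated clients do not exist yet; the following ones come back 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still reported as failed; about three seconds in, the endpoint returns 200. The same verbose view can be pulled through an authenticated context once the server answers, assuming the functional-563786 kubectl context the test sets up:

# Verbose health output, one [+]/[-] line per check, as in the bodies above.
kubectl --context functional-563786 get --raw '/healthz?verbose'
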
I1229 06:57:39.272184 18214 api_server.go:141] control plane version: v1.35.0
I1229 06:57:39.272201 18214 api_server.go:131] duration metric: took 3.028717439s to wait for apiserver health ...
I1229 06:57:39.272209 18214 cni.go:84] Creating CNI manager for ""
I1229 06:57:39.272214 18214 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I1229 06:57:39.273795 18214 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1229 06:57:39.274881 18214 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1229 06:57:39.297638 18214 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
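
The bridge CNI setup is just a 496-byte conflist copied to /etc/cni/net.d/1-k8s.conflist; its contents are not shown in the log. Purely as an illustration of the file format (not the payload minikube actually writes), a generic bridge-plus-portmap conflist looks like the example below; the subnet and the /tmp path are placeholders:

# Illustrative only: write an example conflist to /tmp, not over the real file.
cat > /tmp/1-k8s.conflist.example <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
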
I1229 06:57:39.349946 18214 system_pods.go:43] waiting for kube-system pods to appear ...
I1229 06:57:39.363562 18214 system_pods.go:59] 7 kube-system pods found
I1229 06:57:39.363590 18214 system_pods.go:61] "coredns-7d764666f9-xhjq7" [6219650d-fd31-477c-9dda-a2cef0d5268c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1229 06:57:39.363595 18214 system_pods.go:61] "etcd-functional-563786" [0d0e35e8-e307-48af-9347-9989481710c2] Running
I1229 06:57:39.363603 18214 system_pods.go:61] "kube-apiserver-functional-563786" [5d450d48-8c6b-4538-a796-b453289729f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1229 06:57:39.363606 18214 system_pods.go:61] "kube-controller-manager-functional-563786" [c6897f81-6b5f-4857-b756-f41e6285d56e] Running
I1229 06:57:39.363609 18214 system_pods.go:61] "kube-proxy-p249l" [1e1b3654-47a0-4d76-b9e1-406a1865af8d] Running
I1229 06:57:39.363613 18214 system_pods.go:61] "kube-scheduler-functional-563786" [b1e5de45-df8f-4b62-9c7a-0bc10fd574f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1229 06:57:39.363616 18214 system_pods.go:61] "storage-provisioner" [ecdc8113-cbc5-4e5d-99ca-8beaca3cf1ff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1229 06:57:39.363622 18214 system_pods.go:74] duration metric: took 13.66665ms to wait for pod list to return data ...
I1229 06:57:39.363627 18214 node_conditions.go:102] verifying NodePressure condition ...
I1229 06:57:39.379697 18214 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1229 06:57:39.379720 18214 node_conditions.go:123] node cpu capacity is 2
I1229 06:57:39.379734 18214 node_conditions.go:105] duration metric: took 16.103901ms to run NodePressure ...
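
Verifying the NodePressure condition also records the capacity the node reports, 2 CPUs and 17734596Ki of ephemeral storage here. The same figures can be read straight from the Node object, again assuming the functional-563786 context:

# Print the node's reported capacity map (cpu, memory, pods, ephemeral-storage).
kubectl --context functional-563786 get node functional-563786 -o jsonpath='{.status.capacity}'
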
I1229 06:57:39.379778 18214 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I1229 06:57:39.671582 18214 kubeadm.go:729] waiting for restarted kubelet to initialise ...
I1229 06:57:39.676523 18214 kubeadm.go:744] kubelet initialised
I1229 06:57:39.676532 18214 kubeadm.go:745] duration metric: took 4.934638ms waiting for restarted kubelet to initialise ...
I1229 06:57:39.676551 18214 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1229 06:57:39.693471 18214 ops.go:34] apiserver oom_adj: -16
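
An oom_adj of -16 keeps the restarted kube-apiserver near the bottom of the OOM killer's preference list; it is what the legacy /proc/<pid>/oom_adj interface reports for the strongly negative oom_score_adj the kubelet assigns to node-critical pods. Reading both values directly on the node (the -n flag, an addition here, picks the newest matching process):

# Legacy and modern OOM adjustment values for the newest kube-apiserver process.
cat /proc/"$(pgrep -n kube-apiserver)"/oom_adj
cat /proc/"$(pgrep -n kube-apiserver)"/oom_score_adj
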
I1229 06:57:39.693478 18214 kubeadm.go:602] duration metric: took 20.931774217s to restartPrimaryControlPlane
I1229 06:57:39.693484 18214 kubeadm.go:403] duration metric: took 21.005560923s to StartCluster
I1229 06:57:39.693497 18214 settings.go:142] acquiring lock: {Name:mk039e256278d8334b5807f96b013f082c700455 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 06:57:39.693568 18214 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/22353-9107/kubeconfig
I1229 06:57:39.694062 18214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-9107/kubeconfig: {Name:mk9c93c77f9f5280a6a3462248cdab396ec65bd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1229 06:57:39.694289 18214 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.101 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1229 06:57:39.694392 18214 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1229 06:57:39.694512 18214 addons.go:70] Setting storage-provisioner=true in profile "functional-563786"
I1229 06:57:39.694531 18214 addons.go:239] Setting addon storage-provisioner=true in "functional-563786"
W1229 06:57:39.694536 18214 addons.go:248] addon storage-provisioner should already be in state true
I1229 06:57:39.694536 18214 config.go:182] Loaded profile config "functional-563786": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1229 06:57:39.694536 18214 addons.go:70] Setting default-storageclass=true in profile "functional-563786"
I1229 06:57:39.694557 18214 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-563786"
I1229 06:57:39.694565 18214 host.go:66] Checking if "functional-563786" exists ...
I1229 06:57:39.695850 18214 out.go:179] * Verifying Kubernetes components...
I1229 06:57:39.697286 18214 addons.go:239] Setting addon default-storageclass=true in "functional-563786"
W1229 06:57:39.697296 18214 addons.go:248] addon default-storageclass should already be in state true
I1229 06:57:39.697315 18214 host.go:66] Checking if "functional-563786" exists ...
I1229 06:57:39.698109 18214 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1229 06:57:39.698172 18214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1229 06:57:39.698850 18214 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1229 06:57:39.698857 18214 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1229 06:57:39.699219 18214 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1229 06:57:39.699225 18214 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1229 06:57:39.701867 18214 main.go:144] libmachine: domain functional-563786 has defined MAC address 52:54:00:7f:43:99 in network mk-functional-563786
I1229 06:57:39.702106 18214 main.go:144] libmachine: domain functional-563786 has defined MAC address 52:54:00:7f:43:99 in network mk-functional-563786
I1229 06:57:39.702163 18214 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7f:43:99", ip: ""} in network mk-functional-563786: {Iface:virbr1 ExpiryTime:2025-12-29 07:55:20 +0000 UTC Type:0 Mac:52:54:00:7f:43:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:functional-563786 Clientid:01:52:54:00:7f:43:99}
I1229 06:57:39.702187 18214 main.go:144] libmachine: domain functional-563786 has defined IP address 192.168.39.101 and MAC address 52:54:00:7f:43:99 in network mk-functional-563786
I1229 06:57:39.702296 18214 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9107/.minikube/machines/functional-563786/id_rsa Username:docker}
I1229 06:57:39.702550 18214 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7f:43:99", ip: ""} in network mk-functional-563786: {Iface:virbr1 ExpiryTime:2025-12-29 07:55:20 +0000 UTC Type:0 Mac:52:54:00:7f:43:99 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:functional-563786 Clientid:01:52:54:00:7f:43:99}
I1229 06:57:39.702565 18214 main.go:144] libmachine: domain functional-563786 has defined IP address 192.168.39.101 and MAC address 52:54:00:7f:43:99 in network mk-functional-563786
I1229 06:57:39.702699 18214 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22353-9107/.minikube/machines/functional-563786/id_rsa Username:docker}
I1229 06:57:39.900420 18214 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1229 06:57:39.922518 18214 node_ready.go:35] waiting up to 6m0s for node "functional-563786" to be "Ready" ...
I1229 06:57:39.925283 18214 node_ready.go:49] node "functional-563786" is "Ready"
I1229 06:57:39.925293 18214 node_ready.go:38] duration metric: took 2.756209ms for node "functional-563786" to be "Ready" ...
I1229 06:57:39.925302 18214 api_server.go:52] waiting for apiserver process to appear ...
I1229 06:57:39.925340 18214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1229 06:57:39.950236 18214 api_server.go:72] duration metric: took 255.92534ms to wait for apiserver process to appear ...
I1229 06:57:39.950248 18214 api_server.go:88] waiting for apiserver healthz status ...
I1229 06:57:39.950261 18214 api_server.go:299] Checking apiserver healthz at https://192.168.39.101:8441/healthz ...
I1229 06:57:39.957128 18214 api_server.go:325] https://192.168.39.101:8441/healthz returned 200:
ok
I1229 06:57:39.958538 18214 api_server.go:141] control plane version: v1.35.0
I1229 06:57:39.958548 18214 api_server.go:131] duration metric: took 8.296595ms to wait for apiserver health ...
I1229 06:57:39.958554 18214 system_pods.go:43] waiting for kube-system pods to appear ...
I1229 06:57:39.961066 18214 system_pods.go:59] 7 kube-system pods found
I1229 06:57:39.961079 18214 system_pods.go:61] "coredns-7d764666f9-xhjq7" [6219650d-fd31-477c-9dda-a2cef0d5268c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1229 06:57:39.961083 18214 system_pods.go:61] "etcd-functional-563786" [0d0e35e8-e307-48af-9347-9989481710c2] Running
I1229 06:57:39.961089 18214 system_pods.go:61] "kube-apiserver-functional-563786" [5d5f8ff9-7af1-4f2f-a659-8dd003d4b9f0] Pending
I1229 06:57:39.961093 18214 system_pods.go:61] "kube-controller-manager-functional-563786" [c6897f81-6b5f-4857-b756-f41e6285d56e] Running
I1229 06:57:39.961096 18214 system_pods.go:61] "kube-proxy-p249l" [1e1b3654-47a0-4d76-b9e1-406a1865af8d] Running
I1229 06:57:39.961100 18214 system_pods.go:61] "kube-scheduler-functional-563786" [b1e5de45-df8f-4b62-9c7a-0bc10fd574f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1229 06:57:39.961103 18214 system_pods.go:61] "storage-provisioner" [ecdc8113-cbc5-4e5d-99ca-8beaca3cf1ff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1229 06:57:39.961106 18214 system_pods.go:74] duration metric: took 2.549234ms to wait for pod list to return data ...
I1229 06:57:39.961111 18214 default_sa.go:34] waiting for default service account to be created ...
I1229 06:57:39.963716 18214 default_sa.go:45] found service account: "default"
I1229 06:57:39.963723 18214 default_sa.go:55] duration metric: took 2.608625ms for default service account to be created ...
I1229 06:57:39.963728 18214 system_pods.go:116] waiting for k8s-apps to be running ...
I1229 06:57:39.966342 18214 system_pods.go:86] 7 kube-system pods found
I1229 06:57:39.966354 18214 system_pods.go:89] "coredns-7d764666f9-xhjq7" [6219650d-fd31-477c-9dda-a2cef0d5268c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1229 06:57:39.966357 18214 system_pods.go:89] "etcd-functional-563786" [0d0e35e8-e307-48af-9347-9989481710c2] Running
I1229 06:57:39.966362 18214 system_pods.go:89] "kube-apiserver-functional-563786" [5d5f8ff9-7af1-4f2f-a659-8dd003d4b9f0] Pending
I1229 06:57:39.966364 18214 system_pods.go:89] "kube-controller-manager-functional-563786" [c6897f81-6b5f-4857-b756-f41e6285d56e] Running
I1229 06:57:39.966368 18214 system_pods.go:89] "kube-proxy-p249l" [1e1b3654-47a0-4d76-b9e1-406a1865af8d] Running
I1229 06:57:39.966372 18214 system_pods.go:89] "kube-scheduler-functional-563786" [b1e5de45-df8f-4b62-9c7a-0bc10fd574f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1229 06:57:39.966376 18214 system_pods.go:89] "storage-provisioner" [ecdc8113-cbc5-4e5d-99ca-8beaca3cf1ff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1229 06:57:39.966401 18214 retry.go:84] will retry after 200ms: missing components: kube-apiserver
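The "will retry after 200ms: missing components" line above comes from a poll-until-ready pattern: re-run the component check after a short delay until nothing is missing or an overall timeout expires. A small self-contained sketch of that pattern is below; the retryUntil helper and its timings are hypothetical, not minikube's retry package.

```go
// Sketch of a retry-with-delay loop: keep calling a check function until it
// succeeds or the overall deadline passes. The 200ms delay mirrors the log line.
package main

import (
	"errors"
	"fmt"
	"time"
)

func retryUntil(timeout, delay time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		time.Sleep(delay)
	}
}

func main() {
	attempts := 0
	err := retryUntil(5*time.Second, 200*time.Millisecond, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("missing components: kube-apiserver")
		}
		return nil
	})
	fmt.Println("result:", err, "after", attempts, "attempts")
}
```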
I1229 06:57:40.089112 18214 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1229 06:57:40.094799 18214 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1229 06:57:40.199916 18214 system_pods.go:86] 7 kube-system pods found
I1229 06:57:40.199942 18214 system_pods.go:89] "coredns-7d764666f9-xhjq7" [6219650d-fd31-477c-9dda-a2cef0d5268c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1229 06:57:40.199950 18214 system_pods.go:89] "etcd-functional-563786" [0d0e35e8-e307-48af-9347-9989481710c2] Running
I1229 06:57:40.199959 18214 system_pods.go:89] "kube-apiserver-functional-563786" [5d5f8ff9-7af1-4f2f-a659-8dd003d4b9f0] Pending
I1229 06:57:40.199964 18214 system_pods.go:89] "kube-controller-manager-functional-563786" [c6897f81-6b5f-4857-b756-f41e6285d56e] Running
I1229 06:57:40.199968 18214 system_pods.go:89] "kube-proxy-p249l" [1e1b3654-47a0-4d76-b9e1-406a1865af8d] Running
I1229 06:57:40.199975 18214 system_pods.go:89] "kube-scheduler-functional-563786" [b1e5de45-df8f-4b62-9c7a-0bc10fd574f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1229 06:57:40.199983 18214 system_pods.go:89] "storage-provisioner" [ecdc8113-cbc5-4e5d-99ca-8beaca3cf1ff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1229 06:57:40.569483 18214 system_pods.go:86] 7 kube-system pods found
I1229 06:57:40.569498 18214 system_pods.go:89] "coredns-7d764666f9-xhjq7" [6219650d-fd31-477c-9dda-a2cef0d5268c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1229 06:57:40.569503 18214 system_pods.go:89] "etcd-functional-563786" [0d0e35e8-e307-48af-9347-9989481710c2] Running
I1229 06:57:40.569509 18214 system_pods.go:89] "kube-apiserver-functional-563786" [5d5f8ff9-7af1-4f2f-a659-8dd003d4b9f0] Pending
I1229 06:57:40.569512 18214 system_pods.go:89] "kube-controller-manager-functional-563786" [c6897f81-6b5f-4857-b756-f41e6285d56e] Running
I1229 06:57:40.569514 18214 system_pods.go:89] "kube-proxy-p249l" [1e1b3654-47a0-4d76-b9e1-406a1865af8d] Running
I1229 06:57:40.569518 18214 system_pods.go:89] "kube-scheduler-functional-563786" [b1e5de45-df8f-4b62-9c7a-0bc10fd574f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1229 06:57:40.569522 18214 system_pods.go:89] "storage-provisioner" [ecdc8113-cbc5-4e5d-99ca-8beaca3cf1ff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1229 06:57:40.805966 18214 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
I1229 06:57:40.806895 18214 addons.go:530] duration metric: took 1.112512635s for enable addons: enabled=[default-storageclass storage-provisioner]
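The two addon manifests are applied with the kubectl invocations shown in the ssh_runner lines above. A sketch of that step follows; note that minikube executes the command inside the VM over SSH, whereas the sketch runs it locally, which is a simplification. The paths are taken verbatim from the log.

```go
// Sketch of the addon apply step: invoke kubectl with the explicit kubeconfig
// and manifest paths shown in the ssh_runner lines above. Running it locally
// (rather than over SSH into the VM) is an assumption made for brevity.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0/kubectl",
		"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
```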
I1229 06:57:40.984191 18214 system_pods.go:86] 7 kube-system pods found
I1229 06:57:40.984215 18214 system_pods.go:89] "coredns-7d764666f9-xhjq7" [6219650d-fd31-477c-9dda-a2cef0d5268c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1229 06:57:40.984222 18214 system_pods.go:89] "etcd-functional-563786" [0d0e35e8-e307-48af-9347-9989481710c2] Running
I1229 06:57:40.984227 18214 system_pods.go:89] "kube-apiserver-functional-563786" [5d5f8ff9-7af1-4f2f-a659-8dd003d4b9f0] Pending
I1229 06:57:40.984231 18214 system_pods.go:89] "kube-controller-manager-functional-563786" [c6897f81-6b5f-4857-b756-f41e6285d56e] Running
I1229 06:57:40.984234 18214 system_pods.go:89] "kube-proxy-p249l" [1e1b3654-47a0-4d76-b9e1-406a1865af8d] Running
I1229 06:57:40.984241 18214 system_pods.go:89] "kube-scheduler-functional-563786" [b1e5de45-df8f-4b62-9c7a-0bc10fd574f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1229 06:57:40.984263 18214 system_pods.go:89] "storage-provisioner" [ecdc8113-cbc5-4e5d-99ca-8beaca3cf1ff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1229 06:57:41.391035 18214 system_pods.go:86] 7 kube-system pods found
I1229 06:57:41.391063 18214 system_pods.go:89] "coredns-7d764666f9-xhjq7" [6219650d-fd31-477c-9dda-a2cef0d5268c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1229 06:57:41.391072 18214 system_pods.go:89] "etcd-functional-563786" [0d0e35e8-e307-48af-9347-9989481710c2] Running
I1229 06:57:41.391077 18214 system_pods.go:89] "kube-apiserver-functional-563786" [5d5f8ff9-7af1-4f2f-a659-8dd003d4b9f0] Pending
I1229 06:57:41.391081 18214 system_pods.go:89] "kube-controller-manager-functional-563786" [c6897f81-6b5f-4857-b756-f41e6285d56e] Running
I1229 06:57:41.391085 18214 system_pods.go:89] "kube-proxy-p249l" [1e1b3654-47a0-4d76-b9e1-406a1865af8d] Running
I1229 06:57:41.391092 18214 system_pods.go:89] "kube-scheduler-functional-563786" [b1e5de45-df8f-4b62-9c7a-0bc10fd574f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1229 06:57:41.391100 18214 system_pods.go:89] "storage-provisioner" [ecdc8113-cbc5-4e5d-99ca-8beaca3cf1ff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1229 06:57:41.861420 18214 system_pods.go:86] 7 kube-system pods found
I1229 06:57:41.861436 18214 system_pods.go:89] "coredns-7d764666f9-xhjq7" [6219650d-fd31-477c-9dda-a2cef0d5268c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1229 06:57:41.861441 18214 system_pods.go:89] "etcd-functional-563786" [0d0e35e8-e307-48af-9347-9989481710c2] Running
I1229 06:57:41.861445 18214 system_pods.go:89] "kube-apiserver-functional-563786" [5d5f8ff9-7af1-4f2f-a659-8dd003d4b9f0] Pending
I1229 06:57:41.861448 18214 system_pods.go:89] "kube-controller-manager-functional-563786" [c6897f81-6b5f-4857-b756-f41e6285d56e] Running
I1229 06:57:41.861450 18214 system_pods.go:89] "kube-proxy-p249l" [1e1b3654-47a0-4d76-b9e1-406a1865af8d] Running
I1229 06:57:41.861455 18214 system_pods.go:89] "kube-scheduler-functional-563786" [b1e5de45-df8f-4b62-9c7a-0bc10fd574f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1229 06:57:41.861459 18214 system_pods.go:89] "storage-provisioner" [ecdc8113-cbc5-4e5d-99ca-8beaca3cf1ff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1229 06:57:42.681770 18214 system_pods.go:86] 7 kube-system pods found
I1229 06:57:42.681789 18214 system_pods.go:89] "coredns-7d764666f9-xhjq7" [6219650d-fd31-477c-9dda-a2cef0d5268c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1229 06:57:42.681793 18214 system_pods.go:89] "etcd-functional-563786" [0d0e35e8-e307-48af-9347-9989481710c2] Running
I1229 06:57:42.681796 18214 system_pods.go:89] "kube-apiserver-functional-563786" [5d5f8ff9-7af1-4f2f-a659-8dd003d4b9f0] Pending
I1229 06:57:42.681798 18214 system_pods.go:89] "kube-controller-manager-functional-563786" [c6897f81-6b5f-4857-b756-f41e6285d56e] Running
I1229 06:57:42.681801 18214 system_pods.go:89] "kube-proxy-p249l" [1e1b3654-47a0-4d76-b9e1-406a1865af8d] Running
I1229 06:57:42.681804 18214 system_pods.go:89] "kube-scheduler-functional-563786" [b1e5de45-df8f-4b62-9c7a-0bc10fd574f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1229 06:57:42.681808 18214 system_pods.go:89] "storage-provisioner" [ecdc8113-cbc5-4e5d-99ca-8beaca3cf1ff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1229 06:57:43.854471 18214 system_pods.go:86] 7 kube-system pods found
I1229 06:57:43.854488 18214 system_pods.go:89] "coredns-7d764666f9-xhjq7" [6219650d-fd31-477c-9dda-a2cef0d5268c] Running
I1229 06:57:43.854492 18214 system_pods.go:89] "etcd-functional-563786" [0d0e35e8-e307-48af-9347-9989481710c2] Running
I1229 06:57:43.854497 18214 system_pods.go:89] "kube-apiserver-functional-563786" [5d5f8ff9-7af1-4f2f-a659-8dd003d4b9f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1229 06:57:43.854503 18214 system_pods.go:89] "kube-controller-manager-functional-563786" [c6897f81-6b5f-4857-b756-f41e6285d56e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1229 06:57:43.854506 18214 system_pods.go:89] "kube-proxy-p249l" [1e1b3654-47a0-4d76-b9e1-406a1865af8d] Running
I1229 06:57:43.854510 18214 system_pods.go:89] "kube-scheduler-functional-563786" [b1e5de45-df8f-4b62-9c7a-0bc10fd574f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1229 06:57:43.854514 18214 system_pods.go:89] "storage-provisioner" [ecdc8113-cbc5-4e5d-99ca-8beaca3cf1ff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1229 06:57:43.854520 18214 system_pods.go:126] duration metric: took 3.890788563s to wait for k8s-apps to be running ...
I1229 06:57:43.854526 18214 system_svc.go:44] waiting for kubelet service to be running ....
I1229 06:57:43.854565 18214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1229 06:57:43.870581 18214 system_svc.go:56] duration metric: took 16.043671ms WaitForService to wait for kubelet
I1229 06:57:43.870595 18214 kubeadm.go:587] duration metric: took 4.176288413s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1229 06:57:43.870609 18214 node_conditions.go:102] verifying NodePressure condition ...
I1229 06:57:43.873451 18214 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1229 06:57:43.873460 18214 node_conditions.go:123] node cpu capacity is 2
I1229 06:57:43.873467 18214 node_conditions.go:105] duration metric: took 2.85531ms to run NodePressure ...
I1229 06:57:43.873476 18214 start.go:242] waiting for startup goroutines ...
I1229 06:57:43.873481 18214 start.go:247] waiting for cluster config update ...
I1229 06:57:43.873491 18214 start.go:256] writing updated cluster config ...
I1229 06:57:43.873746 18214 ssh_runner.go:195] Run: rm -f paused
I1229 06:57:43.878513 18214 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1229 06:57:43.881750 18214 pod_ready.go:83] waiting for pod "coredns-7d764666f9-xhjq7" in "kube-system" namespace to be "Ready" or be gone ...
I1229 06:57:43.886676 18214 pod_ready.go:94] pod "coredns-7d764666f9-xhjq7" is "Ready"
I1229 06:57:43.886685 18214 pod_ready.go:86] duration metric: took 4.926515ms for pod "coredns-7d764666f9-xhjq7" in "kube-system" namespace to be "Ready" or be gone ...
I1229 06:57:43.889252 18214 pod_ready.go:83] waiting for pod "etcd-functional-563786" in "kube-system" namespace to be "Ready" or be gone ...
I1229 06:57:43.893236 18214 pod_ready.go:94] pod "etcd-functional-563786" is "Ready"
I1229 06:57:43.893244 18214 pod_ready.go:86] duration metric: took 3.985126ms for pod "etcd-functional-563786" in "kube-system" namespace to be "Ready" or be gone ...
I1229 06:57:43.895042 18214 pod_ready.go:83] waiting for pod "kube-apiserver-functional-563786" in "kube-system" namespace to be "Ready" or be gone ...
W1229 06:57:45.900442 18214 pod_ready.go:104] pod "kube-apiserver-functional-563786" is not "Ready", error: <nil>
I1229 06:57:47.402195 18214 pod_ready.go:94] pod "kube-apiserver-functional-563786" is "Ready"
I1229 06:57:47.402208 18214 pod_ready.go:86] duration metric: took 3.507158908s for pod "kube-apiserver-functional-563786" in "kube-system" namespace to be "Ready" or be gone ...
I1229 06:57:47.404363 18214 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-563786" in "kube-system" namespace to be "Ready" or be gone ...
W1229 06:57:49.409703 18214 pod_ready.go:104] pod "kube-controller-manager-functional-563786" is not "Ready", error: <nil>
W1229 06:57:51.412052 18214 pod_ready.go:104] pod "kube-controller-manager-functional-563786" is not "Ready", error: <nil>
I1229 06:57:53.909925 18214 pod_ready.go:94] pod "kube-controller-manager-functional-563786" is "Ready"
I1229 06:57:53.909944 18214 pod_ready.go:86] duration metric: took 6.505570155s for pod "kube-controller-manager-functional-563786" in "kube-system" namespace to be "Ready" or be gone ...
I1229 06:57:53.912470 18214 pod_ready.go:83] waiting for pod "kube-proxy-p249l" in "kube-system" namespace to be "Ready" or be gone ...
I1229 06:57:53.917051 18214 pod_ready.go:94] pod "kube-proxy-p249l" is "Ready"
I1229 06:57:53.917062 18214 pod_ready.go:86] duration metric: took 4.57977ms for pod "kube-proxy-p249l" in "kube-system" namespace to be "Ready" or be gone ...
I1229 06:57:53.918969 18214 pod_ready.go:83] waiting for pod "kube-scheduler-functional-563786" in "kube-system" namespace to be "Ready" or be gone ...
I1229 06:57:53.923062 18214 pod_ready.go:94] pod "kube-scheduler-functional-563786" is "Ready"
I1229 06:57:53.923073 18214 pod_ready.go:86] duration metric: took 4.091719ms for pod "kube-scheduler-functional-563786" in "kube-system" namespace to be "Ready" or be gone ...
I1229 06:57:53.923085 18214 pod_ready.go:40] duration metric: took 10.044555857s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1229 06:57:53.963912 18214 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
I1229 06:57:53.965721 18214 out.go:179] * Done! kubectl is now configured to use "functional-563786" cluster and "default" namespace by default
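Worth noting for the failure reported at the top of this log: the earlier system_pods wait only checks the pod phase ("Running"), while the extra pod_ready wait above requires the PodReady condition to be True, which is also what the ComponentHealth assertion inspects. A hedged client-go sketch of that condition-based wait follows, using the label selectors from the log; the kubeconfig path at $HOME/.kube/config is an assumption, since the test drives its own context.

```go
// Sketch of a condition-based readiness wait: for each control-plane label from
// the log, poll kube-system pods until the PodReady condition is True or the
// 4-minute budget (also from the log) runs out. Not the test harness itself.
package main

import (
	"context"
	"fmt"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func podReady(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Label selectors taken from the pod_ready lines in the log above.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	deadline := time.Now().Add(4 * time.Minute)
	for _, sel := range selectors {
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: sel})
			if err == nil && len(pods.Items) > 0 && podReady(pods.Items[0]) {
				fmt.Println(sel, "is Ready")
				break
			}
			if time.Now().After(deadline) {
				fmt.Println(sel, "did not become Ready in time")
				break
			}
			time.Sleep(2 * time.Second)
		}
	}
}
```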
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
dbe401d00eb2e 6e38f40d628db 2 seconds ago Running storage-provisioner 4 1d08901b3819b storage-provisioner kube-system
dc953c7a44d44 6e38f40d628db 16 seconds ago Exited storage-provisioner 3 1d08901b3819b storage-provisioner kube-system
0aba86d287d1e aa5e3ebc0dfed 16 seconds ago Running coredns 2 1a5b4e995ef06 coredns-7d764666f9-xhjq7 kube-system
63420c44a518e 5c6acd67e9cd1 18 seconds ago Running kube-apiserver 0 f5ec55f98f7ff kube-apiserver-functional-563786 kube-system
7db12b847d068 550794e3b12ac 19 seconds ago Running kube-scheduler 2 787a7be6766e8 kube-scheduler-functional-563786 kube-system
302099eedbb6f 0a108f7189562 24 seconds ago Running etcd 2 43ff17dde9248 etcd-functional-563786 kube-system
bb92f9e83c4ee 32652ff1bbe6b 24 seconds ago Running kube-proxy 2 4bbfba7f7071d kube-proxy-p249l kube-system
97d32df1659ef 2c9a4b058bd7e 25 seconds ago Running kube-controller-manager 3 aa950e01fc809 kube-controller-manager-functional-563786 kube-system
40b10d9048e9a 2c9a4b058bd7e About a minute ago Exited kube-controller-manager 2 aa950e01fc809 kube-controller-manager-functional-563786 kube-system
dc7bfd1dafa9a 0a108f7189562 About a minute ago Exited etcd 1 43ff17dde9248 etcd-functional-563786 kube-system
3e35205395ee2 32652ff1bbe6b About a minute ago Exited kube-proxy 1 4bbfba7f7071d kube-proxy-p249l kube-system
dab808763457f aa5e3ebc0dfed About a minute ago Exited coredns 1 1a5b4e995ef06 coredns-7d764666f9-xhjq7 kube-system
61ddb0d9c54e8 550794e3b12ac About a minute ago Exited kube-scheduler 1 787a7be6766e8 kube-scheduler-functional-563786 kube-system
==> containerd <==
Dec 29 06:57:38 functional-563786 containerd[4453]: time="2025-12-29T06:57:38.552577435Z" level=info msg="CreateContainer within sandbox \"1d08901b3819b89e4a1d25b543740d603365466ffd74653fa5be74d198ebcb94\" for name:\"storage-provisioner\" attempt:3 returns container id \"dc953c7a44d447575b823fb9cf08af135544a3d9065c85afc282c58d3d031b69\""
Dec 29 06:57:38 functional-563786 containerd[4453]: time="2025-12-29T06:57:38.554245553Z" level=info msg="StartContainer for \"dc953c7a44d447575b823fb9cf08af135544a3d9065c85afc282c58d3d031b69\""
Dec 29 06:57:38 functional-563786 containerd[4453]: time="2025-12-29T06:57:38.555012235Z" level=info msg="connecting to shim dc953c7a44d447575b823fb9cf08af135544a3d9065c85afc282c58d3d031b69" address="unix:///run/containerd/s/62a0ecb7785014ddd223db3cc5d1bcbfb8507e31d8092cbed1d431d47d811992" protocol=ttrpc version=3
Dec 29 06:57:38 functional-563786 containerd[4453]: time="2025-12-29T06:57:38.666996245Z" level=info msg="StartContainer for \"0aba86d287d1e16457609b17bfcd39b37ed13213679aa86aad551751fb3d4f92\" returns successfully"
Dec 29 06:57:38 functional-563786 containerd[4453]: time="2025-12-29T06:57:38.766099384Z" level=info msg="StartContainer for \"dc953c7a44d447575b823fb9cf08af135544a3d9065c85afc282c58d3d031b69\" returns successfully"
Dec 29 06:57:38 functional-563786 containerd[4453]: time="2025-12-29T06:57:38.792215341Z" level=info msg="received container exit event container_id:\"dc953c7a44d447575b823fb9cf08af135544a3d9065c85afc282c58d3d031b69\" id:\"dc953c7a44d447575b823fb9cf08af135544a3d9065c85afc282c58d3d031b69\" pid:5448 exit_status:1 exited_at:{seconds:1766991458 nanos:791943255}"
Dec 29 06:57:39 functional-563786 containerd[4453]: time="2025-12-29T06:57:39.242319324Z" level=info msg="StopPodSandbox for \"10fbb4f71ebd45ab780354627aef995a4bbbdb1f43bcb9f4920c50aa257dfe23\""
Dec 29 06:57:39 functional-563786 containerd[4453]: time="2025-12-29T06:57:39.242408768Z" level=info msg="Container to stop \"e2ce86e6a7ef4c9f2233f28f1d638cc870bd3ca2322c249a9499dce7c70eae60\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 29 06:57:39 functional-563786 containerd[4453]: time="2025-12-29T06:57:39.266918683Z" level=info msg="received sandbox exit event container_id:\"10fbb4f71ebd45ab780354627aef995a4bbbdb1f43bcb9f4920c50aa257dfe23\" id:\"10fbb4f71ebd45ab780354627aef995a4bbbdb1f43bcb9f4920c50aa257dfe23\" exit_status:137 exited_at:{seconds:1766991459 nanos:264155552}" monitor_name=podsandbox
Dec 29 06:57:39 functional-563786 containerd[4453]: time="2025-12-29T06:57:39.336359128Z" level=info msg="RemoveContainer for \"b0193dc8381c52eb45d762af1c0fc7e3db097def7d35c819d7215d13fdf8af76\""
Dec 29 06:57:39 functional-563786 containerd[4453]: time="2025-12-29T06:57:39.345198493Z" level=info msg="shim disconnected" id=10fbb4f71ebd45ab780354627aef995a4bbbdb1f43bcb9f4920c50aa257dfe23 namespace=k8s.io
Dec 29 06:57:39 functional-563786 containerd[4453]: time="2025-12-29T06:57:39.345722692Z" level=info msg="cleaning up after shim disconnected" id=10fbb4f71ebd45ab780354627aef995a4bbbdb1f43bcb9f4920c50aa257dfe23 namespace=k8s.io
Dec 29 06:57:39 functional-563786 containerd[4453]: time="2025-12-29T06:57:39.348144002Z" level=info msg="cleaning up dead shim" id=10fbb4f71ebd45ab780354627aef995a4bbbdb1f43bcb9f4920c50aa257dfe23 namespace=k8s.io
Dec 29 06:57:39 functional-563786 containerd[4453]: time="2025-12-29T06:57:39.361604013Z" level=info msg="RemoveContainer for \"b0193dc8381c52eb45d762af1c0fc7e3db097def7d35c819d7215d13fdf8af76\" returns successfully"
Dec 29 06:57:39 functional-563786 containerd[4453]: time="2025-12-29T06:57:39.379470657Z" level=info msg="received sandbox container exit event sandbox_id:\"10fbb4f71ebd45ab780354627aef995a4bbbdb1f43bcb9f4920c50aa257dfe23\" exit_status:137 exited_at:{seconds:1766991459 nanos:264155552}" monitor_name=criService
Dec 29 06:57:39 functional-563786 containerd[4453]: time="2025-12-29T06:57:39.380902906Z" level=info msg="TearDown network for sandbox \"10fbb4f71ebd45ab780354627aef995a4bbbdb1f43bcb9f4920c50aa257dfe23\" successfully"
Dec 29 06:57:39 functional-563786 containerd[4453]: time="2025-12-29T06:57:39.380964423Z" level=info msg="StopPodSandbox for \"10fbb4f71ebd45ab780354627aef995a4bbbdb1f43bcb9f4920c50aa257dfe23\" returns successfully"
Dec 29 06:57:40 functional-563786 containerd[4453]: time="2025-12-29T06:57:40.363530504Z" level=info msg="RemoveContainer for \"e2ce86e6a7ef4c9f2233f28f1d638cc870bd3ca2322c249a9499dce7c70eae60\""
Dec 29 06:57:40 functional-563786 containerd[4453]: time="2025-12-29T06:57:40.381713372Z" level=info msg="RemoveContainer for \"e2ce86e6a7ef4c9f2233f28f1d638cc870bd3ca2322c249a9499dce7c70eae60\" returns successfully"
Dec 29 06:57:52 functional-563786 containerd[4453]: time="2025-12-29T06:57:52.242674300Z" level=info msg="CreateContainer within sandbox \"1d08901b3819b89e4a1d25b543740d603365466ffd74653fa5be74d198ebcb94\" for container name:\"storage-provisioner\" attempt:4"
Dec 29 06:57:52 functional-563786 containerd[4453]: time="2025-12-29T06:57:52.258541020Z" level=info msg="Container dbe401d00eb2e1e789c079515ee2ffe4a93cc68036cfb9c92056aabb8a131a7e: CDI devices from CRI Config.CDIDevices: []"
Dec 29 06:57:52 functional-563786 containerd[4453]: time="2025-12-29T06:57:52.269497911Z" level=info msg="CreateContainer within sandbox \"1d08901b3819b89e4a1d25b543740d603365466ffd74653fa5be74d198ebcb94\" for name:\"storage-provisioner\" attempt:4 returns container id \"dbe401d00eb2e1e789c079515ee2ffe4a93cc68036cfb9c92056aabb8a131a7e\""
Dec 29 06:57:52 functional-563786 containerd[4453]: time="2025-12-29T06:57:52.271225645Z" level=info msg="StartContainer for \"dbe401d00eb2e1e789c079515ee2ffe4a93cc68036cfb9c92056aabb8a131a7e\""
Dec 29 06:57:52 functional-563786 containerd[4453]: time="2025-12-29T06:57:52.273745259Z" level=info msg="connecting to shim dbe401d00eb2e1e789c079515ee2ffe4a93cc68036cfb9c92056aabb8a131a7e" address="unix:///run/containerd/s/62a0ecb7785014ddd223db3cc5d1bcbfb8507e31d8092cbed1d431d47d811992" protocol=ttrpc version=3
Dec 29 06:57:52 functional-563786 containerd[4453]: time="2025-12-29T06:57:52.346145503Z" level=info msg="StartContainer for \"dbe401d00eb2e1e789c079515ee2ffe4a93cc68036cfb9c92056aabb8a131a7e\" returns successfully"
==> coredns [0aba86d287d1e16457609b17bfcd39b37ed13213679aa86aad551751fb3d4f92] <==
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
[ERROR] plugin/kubernetes: Failed to watch
[ERROR] plugin/kubernetes: Failed to watch
[ERROR] plugin/kubernetes: Failed to watch
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
.:53
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
CoreDNS-1.13.1
linux/amd64, go1.25.2, 1db4568
[INFO] 127.0.0.1:33884 - 19719 "HINFO IN 1238636623963050715.9122624180791411025. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016896391s
==> coredns [dab808763457f9111fd3c2dc04e428e5a9b222cab0172e8c82c685c135a8cc06] <==
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
CoreDNS-1.13.1
linux/amd64, go1.25.2, 1db4568
[INFO] 127.0.0.1:52680 - 41580 "HINFO IN 328533058487738811.5722422931737120336. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.015623521s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> describe nodes <==
Name: functional-563786
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-563786
kubernetes.io/os=linux
minikube.k8s.io/commit=306c26d738eaa3534a776d7c684ed563998a48b8
minikube.k8s.io/name=functional-563786
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_12_29T06_55_38_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 29 Dec 2025 06:55:34 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-563786
AcquireTime: <unset>
RenewTime: Mon, 29 Dec 2025 06:57:47 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 29 Dec 2025 06:57:37 +0000 Mon, 29 Dec 2025 06:55:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 29 Dec 2025 06:57:37 +0000 Mon, 29 Dec 2025 06:55:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 29 Dec 2025 06:57:37 +0000 Mon, 29 Dec 2025 06:55:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 29 Dec 2025 06:57:37 +0000 Mon, 29 Dec 2025 06:55:38 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.101
Hostname: functional-563786
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
System Info:
Machine ID: efb54d40f503442d8114f762499406f5
System UUID: efb54d40-f503-442d-8114-f762499406f5
Boot ID: c7ea999a-233f-4537-b9b4-ed5c6b149044
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://2.2.1
Kubelet Version: v1.35.0
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-7d764666f9-xhjq7 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 2m12s
kube-system etcd-functional-563786 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 2m18s
kube-system kube-apiserver-functional-563786 250m (12%) 0 (0%) 0 (0%) 0 (0%) 15s
kube-system kube-controller-manager-functional-563786 200m (10%) 0 (0%) 0 (0%) 0 (0%) 2m17s
kube-system kube-proxy-p249l 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m12s
kube-system kube-scheduler-functional-563786 100m (5%) 0 (0%) 0 (0%) 0 (0%) 2m17s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m10s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 170Mi (4%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal RegisteredNode 2m14s node-controller Node functional-563786 event: Registered Node functional-563786 in Controller
Normal RegisteredNode 59s node-controller Node functional-563786 event: Registered Node functional-563786 in Controller
Normal RegisteredNode 14s node-controller Node functional-563786 event: Registered Node functional-563786 in Controller
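As a quick cross-check of the Allocated resources block above: the CPU request total is the sum of the pod table, 100m + 100m + 250m + 200m + 100m = 750m, and 750m of the node's 2000m capacity is 37.5%, which the output rounds down to 37%. Likewise the 170Mi memory request is coredns (70Mi) plus etcd (100Mi), about 4% of the 4001788Ki capacity.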
==> dmesg <==
[Dec29 06:55] Booted with the nomodeset parameter. Only the system framebuffer will be available
[ +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
[ +0.000082] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +0.004186] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
[ +0.182767] crun[404]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
[ +0.988004] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +0.088870] kauditd_printk_skb: 4 callbacks suppressed
[ +0.092455] kauditd_printk_skb: 130 callbacks suppressed
[ +0.142300] kauditd_printk_skb: 224 callbacks suppressed
[ +0.000029] kauditd_printk_skb: 18 callbacks suppressed
[ +6.211230] kauditd_printk_skb: 300 callbacks suppressed
[Dec29 06:56] kauditd_printk_skb: 47 callbacks suppressed
[ +0.908549] kauditd_printk_skb: 84 callbacks suppressed
[ +5.034128] kauditd_printk_skb: 38 callbacks suppressed
[ +7.150484] kauditd_printk_skb: 86 callbacks suppressed
[ +9.422702] kauditd_printk_skb: 24 callbacks suppressed
[Dec29 06:57] kauditd_printk_skb: 70 callbacks suppressed
[ +0.125509] kauditd_printk_skb: 12 callbacks suppressed
[ +10.966466] kauditd_printk_skb: 110 callbacks suppressed
[ +5.062706] kauditd_printk_skb: 88 callbacks suppressed
[ +4.191239] kauditd_printk_skb: 144 callbacks suppressed
[ +8.891045] kauditd_printk_skb: 4 callbacks suppressed
==> etcd [302099eedbb6f0dc4e582744ba0b29ddd304d3575c870c40f52c064c2829a751] <==
{"level":"info","ts":"2025-12-29T06:57:30.224141Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"24cb6133d13a326a","local-member-id":"65e271b8f7cb8d0f","added-peer-id":"65e271b8f7cb8d0f","added-peer-peer-urls":["https://192.168.39.101:2380"],"added-peer-is-learner":false}
{"level":"info","ts":"2025-12-29T06:57:30.224316Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"24cb6133d13a326a","local-member-id":"65e271b8f7cb8d0f","from":"3.6","to":"3.6"}
{"level":"info","ts":"2025-12-29T06:57:30.225838Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
{"level":"info","ts":"2025-12-29T06:57:30.225920Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2025-12-29T06:57:30.225936Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2025-12-29T06:57:30.226198Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.101:2380"}
{"level":"info","ts":"2025-12-29T06:57:30.226231Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.101:2380"}
{"level":"info","ts":"2025-12-29T06:57:30.893943Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"65e271b8f7cb8d0f is starting a new election at term 3"}
{"level":"info","ts":"2025-12-29T06:57:30.893994Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"65e271b8f7cb8d0f became pre-candidate at term 3"}
{"level":"info","ts":"2025-12-29T06:57:30.894047Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"65e271b8f7cb8d0f received MsgPreVoteResp from 65e271b8f7cb8d0f at term 3"}
{"level":"info","ts":"2025-12-29T06:57:30.894060Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"65e271b8f7cb8d0f has received 1 MsgPreVoteResp votes and 0 vote rejections"}
{"level":"info","ts":"2025-12-29T06:57:30.894085Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"65e271b8f7cb8d0f became candidate at term 4"}
{"level":"info","ts":"2025-12-29T06:57:30.898896Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"65e271b8f7cb8d0f received MsgVoteResp from 65e271b8f7cb8d0f at term 4"}
{"level":"info","ts":"2025-12-29T06:57:30.899047Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"65e271b8f7cb8d0f has received 1 MsgVoteResp votes and 0 vote rejections"}
{"level":"info","ts":"2025-12-29T06:57:30.899364Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"65e271b8f7cb8d0f became leader at term 4"}
{"level":"info","ts":"2025-12-29T06:57:30.899382Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 65e271b8f7cb8d0f elected leader 65e271b8f7cb8d0f at term 4"}
{"level":"info","ts":"2025-12-29T06:57:30.901508Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"65e271b8f7cb8d0f","local-member-attributes":"{Name:functional-563786 ClientURLs:[https://192.168.39.101:2379]}","cluster-id":"24cb6133d13a326a","publish-timeout":"7s"}
{"level":"info","ts":"2025-12-29T06:57:30.901510Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-12-29T06:57:30.901546Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-12-29T06:57:30.903176Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-12-29T06:57:30.904966Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-12-29T06:57:30.901683Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2025-12-29T06:57:30.905069Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2025-12-29T06:57:30.908241Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2025-12-29T06:57:30.909175Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.101:2379"}
==> etcd [dc7bfd1dafa9a77ece7526241359133444f7c9beb41bf6ea53bcee94d8e59235] <==
{"level":"info","ts":"2025-12-29T06:56:40.274092Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2025-12-29T06:56:40.274164Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2025-12-29T06:56:40.274310Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-12-29T06:56:40.275539Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-12-29T06:56:40.275926Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-12-29T06:56:40.278337Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2025-12-29T06:56:40.278658Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.101:2379"}
{"level":"info","ts":"2025-12-29T06:57:29.050605Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2025-12-29T06:57:29.051141Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-563786","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.101:2380"],"advertise-client-urls":["https://192.168.39.101:2379"]}
{"level":"error","ts":"2025-12-29T06:57:29.051322Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-12-29T06:57:29.052684Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-12-29T06:57:29.052720Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-12-29T06:57:29.052745Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"65e271b8f7cb8d0f","current-leader-member-id":"65e271b8f7cb8d0f"}
{"level":"info","ts":"2025-12-29T06:57:29.052841Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
{"level":"info","ts":"2025-12-29T06:57:29.052852Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
{"level":"warn","ts":"2025-12-29T06:57:29.052984Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.101:2379: use of closed network connection"}
{"level":"warn","ts":"2025-12-29T06:57:29.053084Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.101:2379: use of closed network connection"}
{"level":"error","ts":"2025-12-29T06:57:29.053288Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.101:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"warn","ts":"2025-12-29T06:57:29.053334Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2025-12-29T06:57:29.053343Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"error","ts":"2025-12-29T06:57:29.053415Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-12-29T06:57:29.056227Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.101:2380"}
{"level":"error","ts":"2025-12-29T06:57:29.056425Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.101:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-12-29T06:57:29.056529Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.101:2380"}
{"level":"info","ts":"2025-12-29T06:57:29.056613Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-563786","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.101:2380"],"advertise-client-urls":["https://192.168.39.101:2379"]}
==> kernel <==
06:57:55 up 2 min, 0 users, load average: 1.14, 0.52, 0.20
Linux functional-563786 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec 29 06:17:23 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [63420c44a518e81f465535e30dee73e4db1dc7c6f8ed8207b6260618ca41abd7] <==
I1229 06:57:37.574665 1 cache.go:39] Caches are synced for LocalAvailability controller
I1229 06:57:37.575580 1 apf_controller.go:382] Running API Priority and Fairness config worker
I1229 06:57:37.575634 1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
I1229 06:57:37.575722 1 shared_informer.go:377] "Caches are synced"
I1229 06:57:37.576092 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1229 06:57:37.576738 1 shared_informer.go:377] "Caches are synced"
I1229 06:57:37.577282 1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
I1229 06:57:37.578060 1 aggregator.go:187] initial CRD sync complete...
I1229 06:57:37.578174 1 autoregister_controller.go:144] Starting autoregister controller
I1229 06:57:37.578200 1 cache.go:32] Waiting for caches to sync for autoregister controller
I1229 06:57:37.578206 1 cache.go:39] Caches are synced for autoregister controller
I1229 06:57:37.579358 1 cache.go:39] Caches are synced for RemoteAvailability controller
I1229 06:57:37.581323 1 handler_discovery.go:451] Starting ResourceDiscoveryManager
I1229 06:57:37.593712 1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
I1229 06:57:37.601578 1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
I1229 06:57:38.266689 1 controller.go:667] quota admission added evaluator for: serviceaccounts
I1229 06:57:38.379703 1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
W1229 06:57:38.890886 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.101]
I1229 06:57:38.892747 1 controller.go:667] quota admission added evaluator for: endpoints
I1229 06:57:38.898172 1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1229 06:57:39.508707 1 controller.go:667] quota admission added evaluator for: deployments.apps
I1229 06:57:39.548282 1 controller.go:667] quota admission added evaluator for: daemonsets.apps
I1229 06:57:39.576212 1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1229 06:57:39.583592 1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1229 06:57:40.782950 1 controller.go:667] quota admission added evaluator for: replicasets.apps
==> kube-controller-manager [40b10d9048e9ac31e9700a8a80054e2bcb69946e913e151aace1fb05fd74cbe4] <==
I1229 06:56:55.138313 1 shared_informer.go:377] "Caches are synced"
I1229 06:56:55.138384 1 shared_informer.go:377] "Caches are synced"
I1229 06:56:55.138426 1 shared_informer.go:377] "Caches are synced"
I1229 06:56:55.138618 1 shared_informer.go:377] "Caches are synced"
I1229 06:56:55.138685 1 shared_informer.go:377] "Caches are synced"
I1229 06:56:55.138715 1 shared_informer.go:377] "Caches are synced"
I1229 06:56:55.138915 1 shared_informer.go:377] "Caches are synced"
I1229 06:56:55.138969 1 shared_informer.go:377] "Caches are synced"
I1229 06:56:55.138999 1 shared_informer.go:377] "Caches are synced"
I1229 06:56:55.140271 1 shared_informer.go:377] "Caches are synced"
I1229 06:56:55.140527 1 shared_informer.go:377] "Caches are synced"
I1229 06:56:55.140927 1 shared_informer.go:377] "Caches are synced"
I1229 06:56:55.141073 1 shared_informer.go:377] "Caches are synced"
I1229 06:56:55.141191 1 shared_informer.go:377] "Caches are synced"
I1229 06:56:55.141343 1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
I1229 06:56:55.141484 1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-563786"
I1229 06:56:55.141585 1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
I1229 06:56:55.143181 1 shared_informer.go:377] "Caches are synced"
I1229 06:56:55.144734 1 shared_informer.go:377] "Caches are synced"
I1229 06:56:55.145625 1 shared_informer.go:377] "Caches are synced"
I1229 06:56:55.148912 1 shared_informer.go:370] "Waiting for caches to sync"
I1229 06:56:55.222733 1 shared_informer.go:377] "Caches are synced"
I1229 06:56:55.222747 1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
I1229 06:56:55.222751 1 garbagecollector.go:169] "Proceeding to collect garbage"
I1229 06:56:55.249748 1 shared_informer.go:377] "Caches are synced"
==> kube-controller-manager [97d32df1659ef8a9433bf16d975b6e03ec24cc8367292dd750ab773a6fc5a90c] <==
I1229 06:57:40.460316 1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-563786"
I1229 06:57:40.460439 1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
I1229 06:57:40.460475 1 shared_informer.go:377] "Caches are synced"
I1229 06:57:40.460557 1 shared_informer.go:377] "Caches are synced"
I1229 06:57:40.460576 1 range_allocator.go:177] "Sending events to api server"
I1229 06:57:40.460661 1 range_allocator.go:181] "Starting range CIDR allocator"
I1229 06:57:40.460667 1 shared_informer.go:370] "Waiting for caches to sync"
I1229 06:57:40.460670 1 shared_informer.go:377] "Caches are synced"
I1229 06:57:40.460832 1 shared_informer.go:377] "Caches are synced"
I1229 06:57:40.460859 1 shared_informer.go:377] "Caches are synced"
I1229 06:57:40.460949 1 shared_informer.go:377] "Caches are synced"
I1229 06:57:40.461021 1 shared_informer.go:377] "Caches are synced"
I1229 06:57:40.462206 1 shared_informer.go:377] "Caches are synced"
I1229 06:57:40.462425 1 shared_informer.go:377] "Caches are synced"
I1229 06:57:40.464250 1 shared_informer.go:377] "Caches are synced"
I1229 06:57:40.464303 1 shared_informer.go:377] "Caches are synced"
I1229 06:57:40.464368 1 shared_informer.go:377] "Caches are synced"
I1229 06:57:40.464411 1 shared_informer.go:377] "Caches are synced"
I1229 06:57:40.464475 1 shared_informer.go:377] "Caches are synced"
I1229 06:57:40.467545 1 shared_informer.go:377] "Caches are synced"
I1229 06:57:40.478142 1 shared_informer.go:377] "Caches are synced"
I1229 06:57:40.541275 1 shared_informer.go:377] "Caches are synced"
I1229 06:57:40.553583 1 shared_informer.go:377] "Caches are synced"
I1229 06:57:40.553598 1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
I1229 06:57:40.553602 1 garbagecollector.go:169] "Proceeding to collect garbage"
==> kube-proxy [3e35205395ee22ddc89be83a1535850e26e1157ae8e1042162c6305c1c7c8549] <==
I1229 06:56:32.836929 1 shared_informer.go:370] "Waiting for caches to sync"
I1229 06:56:41.337988 1 shared_informer.go:377] "Caches are synced"
I1229 06:56:41.338028 1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.101"]
E1229 06:56:41.338085 1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1229 06:56:41.434331 1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1229 06:56:41.434707 1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1229 06:56:41.435019 1 server_linux.go:136] "Using iptables Proxier"
I1229 06:56:41.449246 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1229 06:56:41.450223 1 server.go:529] "Version info" version="v1.35.0"
I1229 06:56:41.450461 1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1229 06:56:41.454171 1 config.go:200] "Starting service config controller"
I1229 06:56:41.456276 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1229 06:56:41.454201 1 config.go:106] "Starting endpoint slice config controller"
I1229 06:56:41.456696 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1229 06:56:41.454274 1 config.go:403] "Starting serviceCIDR config controller"
I1229 06:56:41.456900 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1229 06:56:41.457110 1 config.go:309] "Starting node config controller"
I1229 06:56:41.457281 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1229 06:56:41.457382 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1229 06:56:41.559158 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1229 06:56:41.560624 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1229 06:56:41.657927 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-proxy [bb92f9e83c4eedb0760d3c088e227eab540691d7aca6b6a2fea1c2bf2fa93d74] <==
I1229 06:57:30.217505 1 shared_informer.go:370] "Waiting for caches to sync"
I1229 06:57:38.818088 1 shared_informer.go:377] "Caches are synced"
I1229 06:57:38.818565 1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.101"]
E1229 06:57:38.818843 1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1229 06:57:38.857822 1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1229 06:57:38.857893 1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1229 06:57:38.857916 1 server_linux.go:136] "Using iptables Proxier"
I1229 06:57:38.866974 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1229 06:57:38.867546 1 server.go:529] "Version info" version="v1.35.0"
I1229 06:57:38.867639 1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1229 06:57:38.872666 1 config.go:309] "Starting node config controller"
I1229 06:57:38.872869 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1229 06:57:38.873035 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1229 06:57:38.875861 1 config.go:200] "Starting service config controller"
I1229 06:57:38.875888 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1229 06:57:38.875919 1 config.go:106] "Starting endpoint slice config controller"
I1229 06:57:38.875923 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1229 06:57:38.875932 1 config.go:403] "Starting serviceCIDR config controller"
I1229 06:57:38.875935 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1229 06:57:38.976549 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1229 06:57:38.976623 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1229 06:57:38.977078 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
==> kube-scheduler [61ddb0d9c54e84133c945449c9377bee5e07ca9873a34ad8edb72c3401c91dac] <==
I1229 06:56:41.571424 1 shared_informer.go:370] "Waiting for caches to sync"
I1229 06:56:41.571836 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I1229 06:56:41.580063 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1229 06:56:41.580157 1 shared_informer.go:370] "Waiting for caches to sync"
I1229 06:56:41.580230 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I1229 06:56:41.580271 1 shared_informer.go:370] "Waiting for caches to sync"
I1229 06:56:41.672285 1 shared_informer.go:377] "Caches are synced"
I1229 06:56:41.680664 1 shared_informer.go:377] "Caches are synced"
I1229 06:56:41.680855 1 shared_informer.go:377] "Caches are synced"
E1229 06:56:51.912908 1 reflector.go:204] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
E1229 06:56:51.956699 1 reflector.go:204] "Failed to watch" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
E1229 06:56:51.957260 1 reflector.go:204] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
E1229 06:56:51.957351 1 reflector.go:204] "Failed to watch" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
E1229 06:56:51.957485 1 reflector.go:204] "Failed to watch" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
E1229 06:56:51.957491 1 reflector.go:204] "Failed to watch" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
E1229 06:56:51.957548 1 reflector.go:204] "Failed to watch" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
E1229 06:56:51.959263 1 reflector.go:204] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
I1229 06:57:34.261176 1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
I1229 06:57:34.261441 1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
I1229 06:57:34.261452 1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I1229 06:57:34.261488 1 server.go:263] "[graceful-termination] secure server has stopped listening"
I1229 06:57:34.261571 1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1229 06:57:34.261591 1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
I1229 06:57:34.261653 1 server.go:265] "[graceful-termination] secure server is exiting"
E1229 06:57:34.261708 1 run.go:72] "command failed" err="finished without leader elect"
==> kube-scheduler [7db12b847d068ff2b282948985b4b93c9a4c392543012bea2060b89a24d374c3] <==
I1229 06:57:36.096310 1 serving.go:386] Generated self-signed cert in-memory
W1229 06:57:37.426380 1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1229 06:57:37.426428 1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1229 06:57:37.426443 1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
W1229 06:57:37.426449 1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1229 06:57:37.512626 1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
I1229 06:57:37.514708 1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1229 06:57:37.517563 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1229 06:57:37.517602 1 shared_informer.go:370] "Waiting for caches to sync"
I1229 06:57:37.517836 1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
I1229 06:57:37.519971 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I1229 06:57:37.618160 1 shared_informer.go:377] "Caches are synced"
==> kubelet <==
Dec 29 06:57:39 functional-563786 kubelet[5253]: E1229 06:57:39.325577 5253 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ecdc8113-cbc5-4e5d-99ca-8beaca3cf1ff)\"" pod="kube-system/storage-provisioner" podUID="ecdc8113-cbc5-4e5d-99ca-8beaca3cf1ff"
Dec 29 06:57:39 functional-563786 kubelet[5253]: I1229 06:57:39.351181 5253 kubelet.go:3323] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-563786" podUID="5d450d48-8c6b-4538-a796-b453289729f6"
Dec 29 06:57:39 functional-563786 kubelet[5253]: E1229 06:57:39.351751 5253 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-xhjq7" containerName="coredns"
Dec 29 06:57:39 functional-563786 kubelet[5253]: E1229 06:57:39.352492 5253 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-563786" containerName="etcd"
Dec 29 06:57:39 functional-563786 kubelet[5253]: E1229 06:57:39.352578 5253 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-563786" containerName="kube-scheduler"
Dec 29 06:57:39 functional-563786 kubelet[5253]: I1229 06:57:39.394877 5253 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-apiserver-functional-563786"
Dec 29 06:57:39 functional-563786 kubelet[5253]: I1229 06:57:39.395348 5253 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-functional-563786"
Dec 29 06:57:39 functional-563786 kubelet[5253]: E1229 06:57:39.419038 5253 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-563786" containerName="kube-apiserver"
Dec 29 06:57:40 functional-563786 kubelet[5253]: I1229 06:57:40.359180 5253 scope.go:122] "RemoveContainer" containerID="e2ce86e6a7ef4c9f2233f28f1d638cc870bd3ca2322c249a9499dce7c70eae60"
Dec 29 06:57:40 functional-563786 kubelet[5253]: E1229 06:57:40.372275 5253 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-563786" containerName="kube-apiserver"
Dec 29 06:57:40 functional-563786 kubelet[5253]: E1229 06:57:40.936983 5253 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-563786" containerName="kube-scheduler"
Dec 29 06:57:41 functional-563786 kubelet[5253]: I1229 06:57:41.243893 5253 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8eac2ae3621cd23e1820ec119dd0a660" path="/var/lib/kubelet/pods/8eac2ae3621cd23e1820ec119dd0a660/volumes"
Dec 29 06:57:43 functional-563786 kubelet[5253]: I1229 06:57:43.329215 5253 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Dec 29 06:57:43 functional-563786 kubelet[5253]: E1229 06:57:43.329521 5253 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-xhjq7" containerName="coredns"
Dec 29 06:57:43 functional-563786 kubelet[5253]: I1229 06:57:43.343473 5253 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-563786" podStartSLOduration=4.343463565 podStartE2EDuration="4.343463565s" podCreationTimestamp="2025-12-29 06:57:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-29 06:57:43.343170505 +0000 UTC m=+8.255738116" watchObservedRunningTime="2025-12-29 06:57:43.343463565 +0000 UTC m=+8.256031177"
Dec 29 06:57:43 functional-563786 kubelet[5253]: E1229 06:57:43.379417 5253 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-xhjq7" containerName="coredns"
Dec 29 06:57:43 functional-563786 kubelet[5253]: E1229 06:57:43.542184 5253 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-563786" containerName="kube-controller-manager"
Dec 29 06:57:44 functional-563786 kubelet[5253]: E1229 06:57:44.434013 5253 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-563786" containerName="etcd"
Dec 29 06:57:47 functional-563786 kubelet[5253]: E1229 06:57:47.036131 5253 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-563786" containerName="kube-apiserver"
Dec 29 06:57:47 functional-563786 kubelet[5253]: E1229 06:57:47.393128 5253 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-563786" containerName="kube-apiserver"
Dec 29 06:57:50 functional-563786 kubelet[5253]: E1229 06:57:50.942599 5253 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-563786" containerName="kube-scheduler"
Dec 29 06:57:51 functional-563786 kubelet[5253]: E1229 06:57:51.403645 5253 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-563786" containerName="kube-scheduler"
Dec 29 06:57:52 functional-563786 kubelet[5253]: I1229 06:57:52.237320 5253 scope.go:122] "RemoveContainer" containerID="dc953c7a44d447575b823fb9cf08af135544a3d9065c85afc282c58d3d031b69"
Dec 29 06:57:53 functional-563786 kubelet[5253]: E1229 06:57:53.547451 5253 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-563786" containerName="kube-controller-manager"
Dec 29 06:57:54 functional-563786 kubelet[5253]: E1229 06:57:54.435814 5253 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-563786" containerName="etcd"
==> storage-provisioner [dbe401d00eb2e1e789c079515ee2ffe4a93cc68036cfb9c92056aabb8a131a7e] <==
I1229 06:57:52.357175 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1229 06:57:52.367528 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1229 06:57:52.367603 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
W1229 06:57:52.371026 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
==> storage-provisioner [dc953c7a44d447575b823fb9cf08af135544a3d9065c85afc282c58d3d031b69] <==
I1229 06:57:38.788206 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F1229 06:57:38.789675 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
-- /stdout --
helpers_test.go:263: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-563786 -n functional-563786
helpers_test.go:270: (dbg) Run: kubectl --context functional-563786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestFunctional/serial/ComponentHealth FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/ComponentHealth (1.88s)