=== RUN TestOffline
aab_offline_test.go:55: (dbg) Run: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=3072 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=3072 --wait=true --driver=none --bootstrapper=kubeadm: exit status 80 (4m32.927822347s)
-- stdout --
* minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
- MINIKUBE_LOCATION=22054
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22054-143418/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-143418/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Using the none driver based on user configuration
* Starting "minikube" primary control-plane node in "minikube" cluster
* Running on localhost (CPUs=8, Memory=32093MB, Disk=297540MB) ...
* OS release is Ubuntu 22.04.5 LTS
* Found network options:
- HTTP_PROXY=172.16.1.1:1
* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
- env HTTP_PROXY=172.16.1.1:1
- kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
* Configuring bridge CNI (Container Networking Interface) ...
* Configuring local host environment ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: default-storageclass, storage-provisioner
-- /stdout --
** stderr **
I1207 22:25:28.274205 147665 out.go:360] Setting OutFile to fd 1 ...
I1207 22:25:28.274298 147665 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:25:28.274306 147665 out.go:374] Setting ErrFile to fd 2...
I1207 22:25:28.274310 147665 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:25:28.274506 147665 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-143418/.minikube/bin
I1207 22:25:28.274966 147665 out.go:368] Setting JSON to false
I1207 22:25:28.275748 147665 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4069,"bootTime":1765142259,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1207 22:25:28.275800 147665 start.go:143] virtualization: kvm guest
I1207 22:25:28.277886 147665 out.go:179] * minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
W1207 22:25:28.279229 147665 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22054-143418/.minikube/cache/preloaded-tarball: no such file or directory
I1207 22:25:28.279302 147665 out.go:179] - MINIKUBE_LOCATION=22054
I1207 22:25:28.279299 147665 notify.go:221] Checking for updates...
I1207 22:25:28.280566 147665 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1207 22:25:28.281702 147665 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22054-143418/kubeconfig
I1207 22:25:28.282798 147665 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-143418/.minikube
I1207 22:25:28.284007 147665 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1207 22:25:28.285181 147665 driver.go:422] Setting default libvirt URI to qemu:///system
I1207 22:25:28.298503 147665 out.go:179] * Using the none driver based on user configuration
I1207 22:25:28.299765 147665 start.go:309] selected driver: none
I1207 22:25:28.299779 147665 start.go:927] validating driver "none" against <nil>
I1207 22:25:28.299800 147665 start.go:938] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1207 22:25:28.299841 147665 start.go:1756] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
W1207 22:25:28.300151 147665 out.go:285] ! The 'none' driver does not respect the --memory flag
! The 'none' driver does not respect the --memory flag
I1207 22:25:28.300656 147665 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1207 22:25:28.300969 147665 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1207 22:25:28.300999 147665 cni.go:84] Creating CNI manager for ""
I1207 22:25:28.301044 147665 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1207 22:25:28.301053 147665 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1207 22:25:28.301121 147665 start.go:353] cluster config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1207 22:25:28.302348 147665 out.go:179] * Starting "minikube" primary control-plane node in "minikube" cluster
I1207 22:25:28.303630 147665 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/config.json ...
I1207 22:25:28.303670 147665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/config.json: {Name:mk5e29c640f299af9c87cda17787fc521449c7fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1207 22:25:28.303796 147665 start.go:360] acquireMachinesLock for minikube: {Name:mk46b5f74fc4bf176e53e5157f7a1e6e21aaae8e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1207 22:25:28.303852 147665 start.go:364] duration metric: took 43.207µs to acquireMachinesLock for "minikube"
I1207 22:25:28.303866 147665 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I1207 22:25:28.303947 147665 start.go:125] createHost starting for "" (driver="none")
I1207 22:25:28.305232 147665 out.go:179] * Running on localhost (CPUs=8, Memory=32093MB, Disk=297540MB) ...
I1207 22:25:28.306294 147665 exec_runner.go:51] Run: systemctl --version
I1207 22:25:28.308476 147665 start.go:159] libmachine.API.Create for "minikube" (driver="none")
I1207 22:25:28.308512 147665 client.go:173] LocalClient.Create starting
I1207 22:25:28.308608 147665 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22054-143418/.minikube/certs/ca.pem
I1207 22:25:28.524502 147665 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22054-143418/.minikube/certs/cert.pem
I1207 22:25:28.630604 147665 client.go:176] duration metric: took 322.077749ms to LocalClient.Create
I1207 22:25:28.630646 147665 start.go:167] duration metric: took 322.17056ms to libmachine.API.Create "minikube"
I1207 22:25:28.630653 147665 start.go:293] postStartSetup for "minikube" (driver="none")
I1207 22:25:28.630716 147665 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1207 22:25:28.630753 147665 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1207 22:25:28.642115 147665 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1207 22:25:28.642147 147665 main.go:143] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1207 22:25:28.642155 147665 main.go:143] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1207 22:25:28.644535 147665 out.go:179] * OS release is Ubuntu 22.04.5 LTS
I1207 22:25:28.645828 147665 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-143418/.minikube/addons for local assets ...
I1207 22:25:28.645924 147665 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-143418/.minikube/files for local assets ...
I1207 22:25:28.645956 147665 start.go:296] duration metric: took 15.297408ms for postStartSetup
I1207 22:25:28.646532 147665 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/config.json ...
I1207 22:25:28.646683 147665 start.go:128] duration metric: took 342.726088ms to createHost
I1207 22:25:28.646697 147665 start.go:83] releasing machines lock for "minikube", held for 342.836526ms
I1207 22:25:28.648393 147665 out.go:179] * Found network options:
I1207 22:25:28.649590 147665 out.go:179] - HTTP_PROXY=172.16.1.1:1
W1207 22:25:28.650673 147665 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (10.154.0.4).
! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (10.154.0.4).
I1207 22:25:28.651894 147665 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
I1207 22:25:28.653352 147665 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1207 22:25:28.653439 147665 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
W1207 22:25:28.655536 147665 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1207 22:25:28.655611 147665 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1207 22:25:28.667109 147665 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I1207 22:25:28.667135 147665 start.go:496] detecting cgroup driver to use...
I1207 22:25:28.667167 147665 detect.go:190] detected "systemd" cgroup driver on host os
I1207 22:25:28.667276 147665 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1207 22:25:28.690971 147665 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1207 22:25:28.701794 147665 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1207 22:25:28.712559 147665 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
I1207 22:25:28.712617 147665 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1207 22:25:28.725324 147665 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1207 22:25:28.738043 147665 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1207 22:25:28.748393 147665 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1207 22:25:28.759717 147665 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1207 22:25:28.769826 147665 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1207 22:25:28.780893 147665 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1207 22:25:28.791238 147665 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1207 22:25:28.804755 147665 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1207 22:25:28.814251 147665 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1207 22:25:28.823444 147665 exec_runner.go:51] Run: sudo systemctl daemon-reload
I1207 22:25:29.053758 147665 exec_runner.go:51] Run: sudo systemctl restart containerd
I1207 22:25:29.166318 147665 start.go:496] detecting cgroup driver to use...
I1207 22:25:29.166369 147665 detect.go:190] detected "systemd" cgroup driver on host os
I1207 22:25:29.166506 147665 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1207 22:25:29.193670 147665 exec_runner.go:51] Run: which cri-dockerd
I1207 22:25:29.195150 147665 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1207 22:25:29.209937 147665 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
I1207 22:25:29.209975 147665 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
I1207 22:25:29.210030 147665 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
I1207 22:25:29.223473 147665 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
I1207 22:25:29.223722 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1509715507 /etc/systemd/system/cri-docker.service.d/10-cni.conf
I1207 22:25:29.235658 147665 exec_runner.go:51] Run: sudo systemctl unmask docker.service
I1207 22:25:29.476346 147665 exec_runner.go:51] Run: sudo systemctl enable docker.socket
I1207 22:25:29.691224 147665 docker.go:575] configuring docker to use "systemd" as cgroup driver...
I1207 22:25:29.691380 147665 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
I1207 22:25:29.691396 147665 exec_runner.go:203] rm: /etc/docker/daemon.json
I1207 22:25:29.691445 147665 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
I1207 22:25:29.701954 147665 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (129 bytes)
I1207 22:25:29.702115 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2079246243 /etc/docker/daemon.json
I1207 22:25:29.711725 147665 exec_runner.go:51] Run: sudo systemctl reset-failed docker
I1207 22:25:29.723138 147665 exec_runner.go:51] Run: sudo systemctl daemon-reload
I1207 22:25:29.940761 147665 exec_runner.go:51] Run: sudo systemctl restart docker
I1207 22:25:32.123967 147665 exec_runner.go:84] Completed: sudo systemctl restart docker: (2.183167593s)
I1207 22:25:32.124048 147665 exec_runner.go:51] Run: sudo systemctl is-active --quiet service docker
I1207 22:25:32.136559 147665 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
I1207 22:25:32.151606 147665 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
I1207 22:25:32.165997 147665 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I1207 22:25:32.178431 147665 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
I1207 22:25:32.394430 147665 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
I1207 22:25:32.609373 147665 exec_runner.go:51] Run: sudo systemctl daemon-reload
I1207 22:25:32.821988 147665 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
I1207 22:25:32.847692 147665 exec_runner.go:51] Run: sudo systemctl reset-failed cri-docker.service
I1207 22:25:32.859654 147665 exec_runner.go:51] Run: sudo systemctl daemon-reload
I1207 22:25:33.067850 147665 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
I1207 22:25:33.159570 147665 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I1207 22:25:33.173683 147665 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1207 22:25:33.173752 147665 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
I1207 22:25:33.175095 147665 start.go:564] Will wait 60s for crictl version
I1207 22:25:33.175148 147665 exec_runner.go:51] Run: which crictl
I1207 22:25:33.176152 147665 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
I1207 22:25:33.204354 147665 start.go:580] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 29.1.2
RuntimeApiVersion: v1
I1207 22:25:33.204436 147665 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I1207 22:25:33.228449 147665 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I1207 22:25:33.253660 147665 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.2 ...
I1207 22:25:33.254832 147665 out.go:179] - env HTTP_PROXY=172.16.1.1:1
I1207 22:25:33.255940 147665 exec_runner.go:51] Run: grep 127.0.0.1 host.minikube.internal$ /etc/hosts
I1207 22:25:33.258604 147665 out.go:179] - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
I1207 22:25:33.259595 147665 kubeadm.go:884] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1207 22:25:33.259734 147665 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
I1207 22:25:33.259744 147665 kubeadm.go:935] updating node { 10.154.0.4 8443 v1.34.2 docker true true} ...
I1207 22:25:33.259830 147665 kubeadm.go:947] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-9 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.154.0.4 --resolv-conf=/run/systemd/resolve/resolv.conf
[Install]
config:
{KubernetesVersion:v1.34.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
I1207 22:25:33.259874 147665 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
I1207 22:25:33.314885 147665 cni.go:84] Creating CNI manager for ""
I1207 22:25:33.314938 147665 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1207 22:25:33.314958 147665 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1207 22:25:33.314980 147665 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.154.0.4 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-9 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.154.0.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.154.0.4 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1207 22:25:33.315117 147665 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.154.0.4
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "ubuntu-20-agent-9"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "10.154.0.4"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "10.154.0.4"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1207 22:25:33.315264 147665 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
I1207 22:25:33.326941 147665 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.2: exit status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.34.2': No such file or directory
Initiating transfer...
I1207 22:25:33.327012 147665 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.2
I1207 22:25:33.336865 147665 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
I1207 22:25:33.336869 147665 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubelet.sha256
I1207 22:25:33.337010 147665 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I1207 22:25:33.336869 147665 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
I1207 22:25:33.336947 147665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-143418/.minikube/cache/linux/amd64/v1.34.2/kubeadm --> /var/lib/minikube/binaries/v1.34.2/kubeadm (74027192 bytes)
I1207 22:25:33.337147 147665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-143418/.minikube/cache/linux/amd64/v1.34.2/kubectl --> /var/lib/minikube/binaries/v1.34.2/kubectl (60559544 bytes)
I1207 22:25:33.354328 147665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-143418/.minikube/cache/linux/amd64/v1.34.2/kubelet --> /var/lib/minikube/binaries/v1.34.2/kubelet (59199780 bytes)
I1207 22:25:33.391647 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1535187293 /var/lib/minikube/binaries/v1.34.2/kubeadm
I1207 22:25:33.391955 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1599990728 /var/lib/minikube/binaries/v1.34.2/kubectl
I1207 22:25:33.408667 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2149841023 /var/lib/minikube/binaries/v1.34.2/kubelet
I1207 22:25:33.461217 147665 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1207 22:25:33.471822 147665 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
I1207 22:25:33.471843 147665 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I1207 22:25:33.471885 147665 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I1207 22:25:33.482262 147665 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
I1207 22:25:33.482418 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3414592711 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I1207 22:25:33.492131 147665 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
I1207 22:25:33.492154 147665 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
I1207 22:25:33.492193 147665 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
I1207 22:25:33.501956 147665 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1207 22:25:33.502093 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2085672220 /lib/systemd/system/kubelet.service
I1207 22:25:33.511586 147665 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
I1207 22:25:33.511713 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1694213752 /var/tmp/minikube/kubeadm.yaml.new
I1207 22:25:33.521028 147665 exec_runner.go:51] Run: grep 10.154.0.4 control-plane.minikube.internal$ /etc/hosts
I1207 22:25:33.522472 147665 exec_runner.go:51] Run: sudo systemctl daemon-reload
I1207 22:25:33.746685 147665 exec_runner.go:51] Run: sudo systemctl start kubelet
I1207 22:25:33.772520 147665 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube for IP: 10.154.0.4
I1207 22:25:33.772546 147665 certs.go:195] generating shared ca certs ...
I1207 22:25:33.772569 147665 certs.go:227] acquiring lock for ca certs: {Name:mk756b67f774ebe569237928b30a85bf8fe75494 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1207 22:25:33.772705 147665 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-143418/.minikube/ca.key
I1207 22:25:33.828710 147665 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-143418/.minikube/ca.crt ...
I1207 22:25:33.828741 147665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-143418/.minikube/ca.crt: {Name:mkf707e982cd1975cdfed9bd0f88a51bd4bb2311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1207 22:25:33.828932 147665 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-143418/.minikube/ca.key ...
I1207 22:25:33.828945 147665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-143418/.minikube/ca.key: {Name:mkf1cfbdbfeb609bcad88fa256b2b720e4e484cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1207 22:25:33.829023 147665 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-143418/.minikube/proxy-client-ca.key
I1207 22:25:34.037102 147665 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-143418/.minikube/proxy-client-ca.crt ...
I1207 22:25:34.037144 147665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-143418/.minikube/proxy-client-ca.crt: {Name:mk7b713d47de91a7370d2a3743da5cf0cb626f72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1207 22:25:34.037362 147665 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-143418/.minikube/proxy-client-ca.key ...
I1207 22:25:34.037379 147665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-143418/.minikube/proxy-client-ca.key: {Name:mk0a4ac79bec3e1fc8cd773ed7ef4c7a6a4ba544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1207 22:25:34.037486 147665 certs.go:257] generating profile certs ...
I1207 22:25:34.037569 147665 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/client.key
I1207 22:25:34.037590 147665 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/client.crt with IP's: []
I1207 22:25:34.110750 147665 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/client.crt ...
I1207 22:25:34.110787 147665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/client.crt: {Name:mk59b88b3b2c4d0b9807a7cb184b72d53eefdf84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1207 22:25:34.111001 147665 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/client.key ...
I1207 22:25:34.111021 147665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/client.key: {Name:mk18739e8c5dd7f2c8826f3c1110af35fd754287 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1207 22:25:34.111140 147665 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/apiserver.key.1b9420d6
I1207 22:25:34.111167 147665 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/apiserver.crt.1b9420d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.154.0.4]
I1207 22:25:34.312674 147665 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/apiserver.crt.1b9420d6 ...
I1207 22:25:34.312709 147665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/apiserver.crt.1b9420d6: {Name:mk4f6c1b545e8ae337a7ede0b19579897cc42222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1207 22:25:34.312926 147665 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/apiserver.key.1b9420d6 ...
I1207 22:25:34.312946 147665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/apiserver.key.1b9420d6: {Name:mkd8f9b36319cb05e8e64578162229dc1eb249bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1207 22:25:34.313065 147665 certs.go:382] copying /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/apiserver.crt.1b9420d6 -> /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/apiserver.crt
I1207 22:25:34.313172 147665 certs.go:386] copying /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/apiserver.key.1b9420d6 -> /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/apiserver.key
I1207 22:25:34.313251 147665 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/proxy-client.key
I1207 22:25:34.313277 147665 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I1207 22:25:34.345482 147665 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/proxy-client.crt ...
I1207 22:25:34.345508 147665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/proxy-client.crt: {Name:mk0d9927fe9e0c49ca4dc5fb1acde53600af5bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1207 22:25:34.345696 147665 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/proxy-client.key ...
I1207 22:25:34.345713 147665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/proxy-client.key: {Name:mk0403b293b12bc4e987308ab843b011900ffaff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1207 22:25:34.345922 147665 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-143418/.minikube/certs/ca-key.pem (1675 bytes)
I1207 22:25:34.345988 147665 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-143418/.minikube/certs/ca.pem (1082 bytes)
I1207 22:25:34.346031 147665 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-143418/.minikube/certs/cert.pem (1123 bytes)
I1207 22:25:34.346074 147665 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-143418/.minikube/certs/key.pem (1675 bytes)
I1207 22:25:34.346773 147665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-143418/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1207 22:25:34.346981 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2368941701 /var/lib/minikube/certs/ca.crt
I1207 22:25:34.358419 147665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-143418/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1207 22:25:34.358562 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1349169189 /var/lib/minikube/certs/ca.key
I1207 22:25:34.369161 147665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-143418/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1207 22:25:34.369303 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube323379477 /var/lib/minikube/certs/proxy-client-ca.crt
I1207 22:25:34.379877 147665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-143418/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1207 22:25:34.380021 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube934755363 /var/lib/minikube/certs/proxy-client-ca.key
I1207 22:25:34.390042 147665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
I1207 22:25:34.390214 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3702837495 /var/lib/minikube/certs/apiserver.crt
I1207 22:25:34.400239 147665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1207 22:25:34.400406 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1240425133 /var/lib/minikube/certs/apiserver.key
I1207 22:25:34.410990 147665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1207 22:25:34.411151 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2613394794 /var/lib/minikube/certs/proxy-client.crt
I1207 22:25:34.421608 147665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1207 22:25:34.421749 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1094857425 /var/lib/minikube/certs/proxy-client.key
I1207 22:25:34.431942 147665 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
I1207 22:25:34.431962 147665 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
I1207 22:25:34.432007 147665 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
I1207 22:25:34.441329 147665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-143418/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1207 22:25:34.441505 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4154313566 /usr/share/ca-certificates/minikubeCA.pem
I1207 22:25:34.451360 147665 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1207 22:25:34.451496 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube186732915 /var/lib/minikube/kubeconfig
I1207 22:25:34.461695 147665 exec_runner.go:51] Run: openssl version
I1207 22:25:34.464627 147665 exec_runner.go:51] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1207 22:25:34.474313 147665 exec_runner.go:51] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1207 22:25:34.483927 147665 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1207 22:25:34.485829 147665 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Dec 7 22:25 /usr/share/ca-certificates/minikubeCA.pem
I1207 22:25:34.485870 147665 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1207 22:25:34.490624 147665 exec_runner.go:51] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1207 22:25:34.499940 147665 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1207 22:25:34.501363 147665 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1207 22:25:34.501433 147665 kubeadm.go:401] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1207 22:25:34.501542 147665 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1207 22:25:34.519219 147665 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1207 22:25:34.530018 147665 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1207 22:25:34.539892 147665 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I1207 22:25:34.562810 147665 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1207 22:25:34.573726 147665 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1207 22:25:34.573748 147665 kubeadm.go:158] found existing configuration files:
I1207 22:25:34.573793 147665 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1207 22:25:34.583843 147665 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1207 22:25:34.583928 147665 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
I1207 22:25:34.594068 147665 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1207 22:25:34.605246 147665 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1207 22:25:34.605322 147665 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1207 22:25:34.616778 147665 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1207 22:25:34.628443 147665 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1207 22:25:34.628500 147665 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1207 22:25:34.637912 147665 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1207 22:25:34.647162 147665 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1207 22:25:34.647231 147665 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1207 22:25:34.656360 147665 exec_runner.go:97] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1207 22:25:34.694064 147665 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
I1207 22:25:34.694566 147665 kubeadm.go:319] [preflight] Running pre-flight checks
I1207 22:25:34.777808 147665 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1207 22:25:34.777997 147665 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1207 22:25:34.778019 147665 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1207 22:25:34.778024 147665 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1207 22:25:43.664598 147665 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1207 22:25:43.668022 147665 out.go:252] - Generating certificates and keys ...
I1207 22:25:43.668078 147665 kubeadm.go:319] [certs] Using existing ca certificate authority
I1207 22:25:43.668090 147665 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1207 22:25:44.077096 147665 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1207 22:25:44.384482 147665 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1207 22:25:44.890987 147665 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1207 22:25:45.188777 147665 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1207 22:25:45.430525 147665 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1207 22:25:45.430585 147665 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-9] and IPs [10.154.0.4 127.0.0.1 ::1]
I1207 22:25:45.748596 147665 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1207 22:25:45.748645 147665 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-9] and IPs [10.154.0.4 127.0.0.1 ::1]
I1207 22:25:46.106344 147665 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1207 22:25:46.338752 147665 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1207 22:25:46.393314 147665 kubeadm.go:319] [certs] Generating "sa" key and public key
I1207 22:25:46.393464 147665 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1207 22:25:46.448625 147665 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1207 22:25:46.540275 147665 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1207 22:25:46.624864 147665 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1207 22:25:46.727521 147665 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1207 22:25:46.894940 147665 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1207 22:25:46.895564 147665 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1207 22:25:46.897640 147665 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1207 22:25:46.899990 147665 out.go:252] - Booting up control plane ...
I1207 22:25:46.900017 147665 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1207 22:25:46.900032 147665 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1207 22:25:46.900485 147665 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1207 22:25:46.914344 147665 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1207 22:25:46.914373 147665 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1207 22:25:46.919027 147665 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1207 22:25:46.919335 147665 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1207 22:25:46.919367 147665 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1207 22:25:47.178057 147665 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1207 22:25:47.178087 147665 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1207 22:25:47.678392 147665 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.054489ms
I1207 22:25:47.682894 147665 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1207 22:25:47.682938 147665 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://10.154.0.4:8443/livez
I1207 22:25:47.682954 147665 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1207 22:25:47.682961 147665 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1207 22:25:51.111763 147665 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.428722009s
I1207 22:25:51.126658 147665 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.443646836s
I1207 22:25:52.684475 147665 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001439698s
I1207 22:25:52.700728 147665 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1207 22:25:52.710399 147665 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1207 22:25:52.719383 147665 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1207 22:25:52.719409 147665 kubeadm.go:319] [mark-control-plane] Marking the node ubuntu-20-agent-9 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1207 22:25:52.727186 147665 kubeadm.go:319] [bootstrap-token] Using token: e12c2p.5g90k451rleqyeb6
I1207 22:25:52.728404 147665 out.go:252] - Configuring RBAC rules ...
I1207 22:25:52.728435 147665 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1207 22:25:52.731303 147665 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1207 22:25:52.736035 147665 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1207 22:25:52.738184 147665 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1207 22:25:52.741476 147665 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1207 22:25:52.743701 147665 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1207 22:25:53.090726 147665 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1207 22:25:53.509242 147665 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1207 22:25:54.090613 147665 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1207 22:25:54.091468 147665 kubeadm.go:319]
I1207 22:25:54.091482 147665 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1207 22:25:54.091486 147665 kubeadm.go:319]
I1207 22:25:54.091492 147665 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1207 22:25:54.091495 147665 kubeadm.go:319]
I1207 22:25:54.091498 147665 kubeadm.go:319] mkdir -p $HOME/.kube
I1207 22:25:54.091502 147665 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1207 22:25:54.091506 147665 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1207 22:25:54.091509 147665 kubeadm.go:319]
I1207 22:25:54.091512 147665 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1207 22:25:54.091515 147665 kubeadm.go:319]
I1207 22:25:54.091519 147665 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1207 22:25:54.091521 147665 kubeadm.go:319]
I1207 22:25:54.091532 147665 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1207 22:25:54.091536 147665 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1207 22:25:54.091540 147665 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1207 22:25:54.091544 147665 kubeadm.go:319]
I1207 22:25:54.091548 147665 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1207 22:25:54.091553 147665 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1207 22:25:54.091557 147665 kubeadm.go:319]
I1207 22:25:54.091561 147665 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token e12c2p.5g90k451rleqyeb6 \
I1207 22:25:54.091566 147665 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:087e447345576a980037f7d5af4da5db4a36d22d65539224452920fd9c24b719 \
I1207 22:25:54.091570 147665 kubeadm.go:319] --control-plane
I1207 22:25:54.091574 147665 kubeadm.go:319]
I1207 22:25:54.091581 147665 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1207 22:25:54.091584 147665 kubeadm.go:319]
I1207 22:25:54.091589 147665 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token e12c2p.5g90k451rleqyeb6 \
I1207 22:25:54.091594 147665 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:087e447345576a980037f7d5af4da5db4a36d22d65539224452920fd9c24b719
I1207 22:25:54.094795 147665 cni.go:84] Creating CNI manager for ""
I1207 22:25:54.094820 147665 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1207 22:25:54.096570 147665 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1207 22:25:54.097852 147665 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
I1207 22:25:54.109977 147665 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (494 bytes)
I1207 22:25:54.110150 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2558285054 /etc/cni/net.d/1-k8s.conflist
I1207 22:25:54.124731 147665 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1207 22:25:54.124831 147665 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1207 22:25:54.124868 147665 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-9 minikube.k8s.io/updated_at=2025_12_07T22_25_54_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
I1207 22:25:54.134872 147665 ops.go:34] apiserver oom_adj: -16
I1207 22:25:54.196114 147665 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1207 22:25:54.696313 147665 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1207 22:25:55.197017 147665 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1207 22:25:55.696362 147665 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1207 22:25:56.197035 147665 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1207 22:25:56.697094 147665 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1207 22:25:57.196176 147665 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1207 22:25:57.696461 147665 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1207 22:25:58.196781 147665 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1207 22:25:58.696242 147665 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1207 22:25:58.763673 147665 kubeadm.go:1114] duration metric: took 4.63892707s to wait for elevateKubeSystemPrivileges
I1207 22:25:58.763715 147665 kubeadm.go:403] duration metric: took 24.262316876s to StartCluster
I1207 22:25:58.763737 147665 settings.go:142] acquiring lock: {Name:mkbfc722e6671966de34364f67dfb7d69b6080e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1207 22:25:58.763840 147665 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/22054-143418/kubeconfig
I1207 22:25:58.764193 147665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-143418/kubeconfig: {Name:mkf50b93a3b2c6268054a4d9269b7d6c1599cc6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1207 22:25:58.764378 147665 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1207 22:25:58.764423 147665 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1207 22:25:58.764550 147665 addons.go:70] Setting storage-provisioner=true in profile "minikube"
I1207 22:25:58.764581 147665 addons.go:239] Setting addon storage-provisioner=true in "minikube"
I1207 22:25:58.764614 147665 host.go:66] Checking if "minikube" exists ...
I1207 22:25:58.764617 147665 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1207 22:25:58.764550 147665 addons.go:70] Setting default-storageclass=true in profile "minikube"
I1207 22:25:58.764680 147665 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I1207 22:25:58.765484 147665 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I1207 22:25:58.765509 147665 api_server.go:166] Checking apiserver status ...
I1207 22:25:58.765539 147665 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1207 22:25:58.765881 147665 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I1207 22:25:58.765924 147665 api_server.go:166] Checking apiserver status ...
I1207 22:25:58.765962 147665 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1207 22:25:58.766111 147665 out.go:179] * Configuring local host environment ...
W1207 22:25:58.767264 147665 out.go:285] *
*
W1207 22:25:58.767284 147665 out.go:285] ! The 'none' driver is designed for experts who need to integrate with an existing VM
! The 'none' driver is designed for experts who need to integrate with an existing VM
W1207 22:25:58.767291 147665 out.go:285] * Most users should use the newer 'docker' driver instead, which does not require root!
* Most users should use the newer 'docker' driver instead, which does not require root!
W1207 22:25:58.767299 147665 out.go:285] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
* For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
W1207 22:25:58.767309 147665 out.go:285] *
*
W1207 22:25:58.767350 147665 out.go:285] ! kubectl and minikube configuration will be stored in /home/jenkins
! kubectl and minikube configuration will be stored in /home/jenkins
W1207 22:25:58.767358 147665 out.go:285] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
W1207 22:25:58.767365 147665 out.go:285] *
*
W1207 22:25:58.767387 147665 out.go:285] - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
- sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
W1207 22:25:58.767394 147665 out.go:285] - sudo chown -R $USER $HOME/.kube $HOME/.minikube
- sudo chown -R $USER $HOME/.kube $HOME/.minikube
W1207 22:25:58.767401 147665 out.go:285] *
*
W1207 22:25:58.767407 147665 out.go:285] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
* This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
I1207 22:25:58.767435 147665 start.go:236] Will wait 6m0s for node &{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I1207 22:25:58.768865 147665 out.go:179] * Verifying Kubernetes components...
I1207 22:25:58.770287 147665 exec_runner.go:51] Run: sudo systemctl daemon-reload
I1207 22:25:58.786287 147665 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/149457/cgroup
I1207 22:25:58.786343 147665 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/149457/cgroup
W1207 22:25:58.800849 147665 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/149457/cgroup: exit status 1
stdout:
stderr:
I1207 22:25:58.800936 147665 exec_runner.go:51] Run: ls
W1207 22:25:58.800987 147665 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/149457/cgroup: exit status 1
stdout:
stderr:
I1207 22:25:58.801040 147665 exec_runner.go:51] Run: ls
I1207 22:25:58.802734 147665 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I1207 22:25:58.802788 147665 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I1207 22:25:58.811282 147665 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I1207 22:25:58.811515 147665 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I1207 22:25:58.812153 147665 kapi.go:59] client config for minikube: &rest.Config{Host:"https://10.154.0.4:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/client.key", CAFile:"/home/jenkins/minikube-integration/22054-143418/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1207 22:25:58.812757 147665 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1207 22:25:58.812780 147665 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1207 22:25:58.812788 147665 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1207 22:25:58.812793 147665 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1207 22:25:58.812800 147665 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1207 22:25:58.812939 147665 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1207 22:25:58.813314 147665 addons.go:239] Setting addon default-storageclass=true in "minikube"
I1207 22:25:58.813355 147665 host.go:66] Checking if "minikube" exists ...
I1207 22:25:58.814060 147665 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I1207 22:25:58.814081 147665 api_server.go:166] Checking apiserver status ...
I1207 22:25:58.814120 147665 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1207 22:25:58.814297 147665 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1207 22:25:58.814337 147665 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1207 22:25:58.814490 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube127227585 /etc/kubernetes/addons/storage-provisioner.yaml
I1207 22:25:58.828214 147665 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1207 22:25:58.838182 147665 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/149457/cgroup
W1207 22:25:58.856702 147665 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/149457/cgroup: exit status 1
stdout:
stderr:
I1207 22:25:58.856864 147665 exec_runner.go:51] Run: ls
I1207 22:25:58.859626 147665 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 127.0.0.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1207 22:25:58.867350 147665 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I1207 22:25:58.874895 147665 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I1207 22:25:58.874961 147665 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1207 22:25:58.874984 147665 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1207 22:25:58.875149 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3146531359 /etc/kubernetes/addons/storageclass.yaml
I1207 22:25:58.896595 147665 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1207 22:25:59.077178 147665 start.go:977] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
I1207 22:25:59.077952 147665 kapi.go:59] client config for minikube: &rest.Config{Host:"https://10.154.0.4:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/client.key", CAFile:"/home/jenkins/minikube-integration/22054-143418/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1207 22:25:59.113547 147665 exec_runner.go:51] Run: sudo systemctl start kubelet
I1207 22:25:59.134356 147665 kapi.go:59] client config for minikube: &rest.Config{Host:"https://10.154.0.4:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/client.key", CAFile:"/home/jenkins/minikube-integration/22054-143418/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1207 22:25:59.134660 147665 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-9" to be "Ready" ...
I1207 22:25:59.137031 147665 node_ready.go:49] node "ubuntu-20-agent-9" is "Ready"
I1207 22:25:59.137055 147665 node_ready.go:38] duration metric: took 2.373807ms for node "ubuntu-20-agent-9" to be "Ready" ...
I1207 22:25:59.137070 147665 api_server.go:52] waiting for apiserver process to appear ...
I1207 22:25:59.137113 147665 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1207 22:25:59.154311 147665 api_server.go:72] duration metric: took 386.825267ms to wait for apiserver process to appear ...
I1207 22:25:59.154345 147665 api_server.go:88] waiting for apiserver healthz status ...
I1207 22:25:59.154368 147665 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I1207 22:25:59.158931 147665 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I1207 22:25:59.159832 147665 api_server.go:141] control plane version: v1.34.2
I1207 22:25:59.159860 147665 api_server.go:131] duration metric: took 5.507193ms to wait for apiserver health ...
I1207 22:25:59.159870 147665 system_pods.go:43] waiting for kube-system pods to appear ...
I1207 22:25:59.162913 147665 system_pods.go:59] 5 kube-system pods found
I1207 22:25:59.162961 147665 system_pods.go:61] "etcd-ubuntu-20-agent-9" [984fb506-2b95-4fd2-a3f6-90bab91673d6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1207 22:25:59.162977 147665 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-9" [a51c8055-c56e-4328-b3ef-008eb04dc72e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1207 22:25:59.162986 147665 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-9" [bbb55451-ce2b-45f3-8da0-9400f5b4922d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1207 22:25:59.162998 147665 system_pods.go:61] "kube-proxy-9wczh" [6fa0d934-a836-42d2-b765-5ec6a1604cb1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1207 22:25:59.163006 147665 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-9" [ec68de60-e8ac-4eef-98e6-13e0c3e44169] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1207 22:25:59.163013 147665 system_pods.go:74] duration metric: took 3.06288ms to wait for pod list to return data ...
I1207 22:25:59.163023 147665 default_sa.go:34] waiting for default service account to be created ...
I1207 22:25:59.165017 147665 default_sa.go:45] found service account: "default"
I1207 22:25:59.165049 147665 default_sa.go:55] duration metric: took 2.019427ms for default service account to be created ...
I1207 22:25:59.165062 147665 system_pods.go:116] waiting for k8s-apps to be running ...
I1207 22:25:59.192128 147665 system_pods.go:86] 6 kube-system pods found
I1207 22:25:59.192165 147665 system_pods.go:89] "etcd-ubuntu-20-agent-9" [984fb506-2b95-4fd2-a3f6-90bab91673d6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1207 22:25:59.192178 147665 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-9" [a51c8055-c56e-4328-b3ef-008eb04dc72e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1207 22:25:59.192188 147665 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-9" [bbb55451-ce2b-45f3-8da0-9400f5b4922d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1207 22:25:59.192198 147665 system_pods.go:89] "kube-proxy-9wczh" [6fa0d934-a836-42d2-b765-5ec6a1604cb1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1207 22:25:59.192208 147665 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-9" [ec68de60-e8ac-4eef-98e6-13e0c3e44169] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1207 22:25:59.192216 147665 system_pods.go:89] "storage-provisioner" [1fca3b52-fea6-4632-a2dd-edabd44d6fa7] Pending
I1207 22:25:59.192266 147665 retry.go:31] will retry after 269.585449ms: missing components: kube-dns, kube-proxy
I1207 22:25:59.200487 147665 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
I1207 22:25:59.202243 147665 addons.go:530] duration metric: took 437.831066ms for enable addons: enabled=[default-storageclass storage-provisioner]
I1207 22:25:59.466622 147665 system_pods.go:86] 8 kube-system pods found
I1207 22:25:59.466663 147665 system_pods.go:89] "coredns-66bc5c9577-2tjgd" [51678b09-97ed-4f1b-86a2-6bf589b0df9c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1207 22:25:59.466676 147665 system_pods.go:89] "coredns-66bc5c9577-kt4qp" [18a87cdb-7c0e-44bc-a68d-97c9c261e65c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1207 22:25:59.466694 147665 system_pods.go:89] "etcd-ubuntu-20-agent-9" [984fb506-2b95-4fd2-a3f6-90bab91673d6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1207 22:25:59.466703 147665 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-9" [a51c8055-c56e-4328-b3ef-008eb04dc72e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1207 22:25:59.466711 147665 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-9" [bbb55451-ce2b-45f3-8da0-9400f5b4922d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1207 22:25:59.466720 147665 system_pods.go:89] "kube-proxy-9wczh" [6fa0d934-a836-42d2-b765-5ec6a1604cb1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1207 22:25:59.466729 147665 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-9" [ec68de60-e8ac-4eef-98e6-13e0c3e44169] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1207 22:25:59.466736 147665 system_pods.go:89] "storage-provisioner" [1fca3b52-fea6-4632-a2dd-edabd44d6fa7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1207 22:25:59.466756 147665 retry.go:31] will retry after 244.246793ms: missing components: kube-dns, kube-proxy
I1207 22:25:59.581065 147665 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I1207 22:25:59.714712 147665 system_pods.go:86] 8 kube-system pods found
I1207 22:25:59.714749 147665 system_pods.go:89] "coredns-66bc5c9577-2tjgd" [51678b09-97ed-4f1b-86a2-6bf589b0df9c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1207 22:25:59.714759 147665 system_pods.go:89] "coredns-66bc5c9577-kt4qp" [18a87cdb-7c0e-44bc-a68d-97c9c261e65c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1207 22:25:59.714769 147665 system_pods.go:89] "etcd-ubuntu-20-agent-9" [984fb506-2b95-4fd2-a3f6-90bab91673d6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1207 22:25:59.714780 147665 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-9" [a51c8055-c56e-4328-b3ef-008eb04dc72e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1207 22:25:59.714788 147665 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-9" [bbb55451-ce2b-45f3-8da0-9400f5b4922d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1207 22:25:59.714801 147665 system_pods.go:89] "kube-proxy-9wczh" [6fa0d934-a836-42d2-b765-5ec6a1604cb1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1207 22:25:59.714812 147665 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-9" [ec68de60-e8ac-4eef-98e6-13e0c3e44169] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1207 22:25:59.714823 147665 system_pods.go:89] "storage-provisioner" [1fca3b52-fea6-4632-a2dd-edabd44d6fa7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1207 22:25:59.714846 147665 retry.go:31] will retry after 388.623519ms: missing components: kube-dns, kube-proxy
I1207 22:26:00.108562 147665 system_pods.go:86] 8 kube-system pods found
I1207 22:26:00.108603 147665 system_pods.go:89] "coredns-66bc5c9577-2tjgd" [51678b09-97ed-4f1b-86a2-6bf589b0df9c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1207 22:26:00.108615 147665 system_pods.go:89] "coredns-66bc5c9577-kt4qp" [18a87cdb-7c0e-44bc-a68d-97c9c261e65c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1207 22:26:00.108623 147665 system_pods.go:89] "etcd-ubuntu-20-agent-9" [984fb506-2b95-4fd2-a3f6-90bab91673d6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1207 22:26:00.108637 147665 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-9" [a51c8055-c56e-4328-b3ef-008eb04dc72e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1207 22:26:00.108645 147665 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-9" [bbb55451-ce2b-45f3-8da0-9400f5b4922d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1207 22:26:00.108659 147665 system_pods.go:89] "kube-proxy-9wczh" [6fa0d934-a836-42d2-b765-5ec6a1604cb1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1207 22:26:00.108667 147665 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-9" [ec68de60-e8ac-4eef-98e6-13e0c3e44169] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1207 22:26:00.108688 147665 system_pods.go:89] "storage-provisioner" [1fca3b52-fea6-4632-a2dd-edabd44d6fa7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1207 22:26:00.108709 147665 retry.go:31] will retry after 497.4269ms: missing components: kube-dns, kube-proxy
I1207 22:26:00.610116 147665 system_pods.go:86] 8 kube-system pods found
I1207 22:26:00.610151 147665 system_pods.go:89] "coredns-66bc5c9577-2tjgd" [51678b09-97ed-4f1b-86a2-6bf589b0df9c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1207 22:26:00.610162 147665 system_pods.go:89] "coredns-66bc5c9577-kt4qp" [18a87cdb-7c0e-44bc-a68d-97c9c261e65c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1207 22:26:00.610170 147665 system_pods.go:89] "etcd-ubuntu-20-agent-9" [984fb506-2b95-4fd2-a3f6-90bab91673d6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1207 22:26:00.610180 147665 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-9" [a51c8055-c56e-4328-b3ef-008eb04dc72e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1207 22:26:00.610189 147665 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-9" [bbb55451-ce2b-45f3-8da0-9400f5b4922d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1207 22:26:00.610198 147665 system_pods.go:89] "kube-proxy-9wczh" [6fa0d934-a836-42d2-b765-5ec6a1604cb1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1207 22:26:00.610206 147665 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-9" [ec68de60-e8ac-4eef-98e6-13e0c3e44169] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1207 22:26:00.610219 147665 system_pods.go:89] "storage-provisioner" [1fca3b52-fea6-4632-a2dd-edabd44d6fa7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1207 22:26:00.610243 147665 retry.go:31] will retry after 500.774576ms: missing components: kube-dns, kube-proxy
I1207 22:26:01.115352 147665 system_pods.go:86] 7 kube-system pods found
I1207 22:26:01.115414 147665 system_pods.go:89] "coredns-66bc5c9577-kt4qp" [18a87cdb-7c0e-44bc-a68d-97c9c261e65c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1207 22:26:01.115423 147665 system_pods.go:89] "etcd-ubuntu-20-agent-9" [984fb506-2b95-4fd2-a3f6-90bab91673d6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1207 22:26:01.115431 147665 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-9" [a51c8055-c56e-4328-b3ef-008eb04dc72e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1207 22:26:01.115436 147665 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-9" [bbb55451-ce2b-45f3-8da0-9400f5b4922d] Running
I1207 22:26:01.115440 147665 system_pods.go:89] "kube-proxy-9wczh" [6fa0d934-a836-42d2-b765-5ec6a1604cb1] Running
I1207 22:26:01.115445 147665 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-9" [ec68de60-e8ac-4eef-98e6-13e0c3e44169] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1207 22:26:01.115454 147665 system_pods.go:89] "storage-provisioner" [1fca3b52-fea6-4632-a2dd-edabd44d6fa7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1207 22:26:01.115461 147665 system_pods.go:126] duration metric: took 1.950393197s to wait for k8s-apps to be running ...
I1207 22:26:01.115473 147665 system_svc.go:44] waiting for kubelet service to be running ....
I1207 22:26:01.115526 147665 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I1207 22:26:01.132035 147665 system_svc.go:56] duration metric: took 16.547928ms WaitForService to wait for kubelet
I1207 22:26:01.132070 147665 kubeadm.go:587] duration metric: took 2.364598088s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1207 22:26:01.132099 147665 node_conditions.go:102] verifying NodePressure condition ...
I1207 22:26:01.135428 147665 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I1207 22:26:01.135467 147665 node_conditions.go:123] node cpu capacity is 8
I1207 22:26:01.135488 147665 node_conditions.go:105] duration metric: took 3.381382ms to run NodePressure ...
I1207 22:26:01.135503 147665 start.go:242] waiting for startup goroutines ...
I1207 22:26:01.135513 147665 start.go:247] waiting for cluster config update ...
I1207 22:26:01.135528 147665 start.go:256] writing updated cluster config ...
I1207 22:26:01.135826 147665 exec_runner.go:51] Run: rm -f paused
I1207 22:26:01.137161 147665 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1207 22:26:01.137654 147665 kapi.go:59] client config for minikube: &rest.Config{Host:"https://10.154.0.4:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/client.key", CAFile:"/home/jenkins/minikube-integration/22054-143418/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1207 22:26:01.140418 147665 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kt4qp" in "kube-system" namespace to be "Ready" or be gone ...
W1207 22:26:03.145479 147665 pod_ready.go:104] pod "coredns-66bc5c9577-kt4qp" is not "Ready", error: <nil>
W1207 22:26:05.146328 147665 pod_ready.go:104] pod "coredns-66bc5c9577-kt4qp" is not "Ready", error: <nil>
[... 101 similar pod_ready.go:104 warnings for pod "coredns-66bc5c9577-kt4qp" elided, repeating every ~2.5s from 22:26:07 through 22:29:57 ...]
W1207 22:29:59.646412 147665 pod_ready.go:104] pod "coredns-66bc5c9577-kt4qp" is not "Ready", error: <nil>
I1207 22:30:01.137864 147665 pod_ready.go:86] duration metric: took 3m59.997404779s for pod "coredns-66bc5c9577-kt4qp" in "kube-system" namespace to be "Ready" or be gone ...
W1207 22:30:01.137914 147665 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-dns" label are "Ready", will retry: waitPodCondition: context deadline exceeded
I1207 22:30:01.137933 147665 pod_ready.go:40] duration metric: took 4m0.00073426s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1207 22:30:01.140396 147665 out.go:203]
W1207 22:30:01.141740 147665 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
I1207 22:30:01.142814 147665 out.go:203]
** /stderr **
aab_offline_test.go:58: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=3072 --wait=true --driver=none --bootstrapper=kubeadm failed: exit status 80
panic.go:615: *** TestOffline FAILED at 2025-12-07 22:30:01.170220626 +0000 UTC m=+280.818256123
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestOffline]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:252: <<< TestOffline FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestOffline]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:260: TestOffline logs:
-- stdout --
==> Audit <==
┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ start │ -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm │ minikube │ jenkins │ v1.37.0 │ 07 Dec 25 22:25 UTC │ │
│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 07 Dec 25 22:25 UTC │ 07 Dec 25 22:25 UTC │
│ delete │ -p minikube │ minikube │ jenkins │ v1.37.0 │ 07 Dec 25 22:25 UTC │ 07 Dec 25 22:25 UTC │
│ start │ -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=none --bootstrapper=kubeadm │ minikube │ jenkins │ v1.37.0 │ 07 Dec 25 22:25 UTC │ │
│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 07 Dec 25 22:25 UTC │ 07 Dec 25 22:25 UTC │
│ delete │ -p minikube │ minikube │ jenkins │ v1.37.0 │ 07 Dec 25 22:25 UTC │ 07 Dec 25 22:25 UTC │
│ start │ -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm │ minikube │ jenkins │ v1.37.0 │ 07 Dec 25 22:25 UTC │ │
│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 07 Dec 25 22:25 UTC │ 07 Dec 25 22:25 UTC │
│ delete │ -p minikube │ minikube │ jenkins │ v1.37.0 │ 07 Dec 25 22:25 UTC │ 07 Dec 25 22:25 UTC │
│ delete │ -p minikube │ minikube │ jenkins │ v1.37.0 │ 07 Dec 25 22:25 UTC │ 07 Dec 25 22:25 UTC │
│ delete │ -p minikube │ minikube │ jenkins │ v1.37.0 │ 07 Dec 25 22:25 UTC │ 07 Dec 25 22:25 UTC │
│ delete │ -p minikube │ minikube │ jenkins │ v1.37.0 │ 07 Dec 25 22:25 UTC │ 07 Dec 25 22:25 UTC │
│ start │ --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:42743 --driver=none --bootstrapper=kubeadm │ minikube │ jenkins │ v1.37.0 │ 07 Dec 25 22:25 UTC │ │
│ delete │ -p minikube │ minikube │ jenkins │ v1.37.0 │ 07 Dec 25 22:25 UTC │ 07 Dec 25 22:25 UTC │
│ start │ -p minikube --alsologtostderr -v=1 --memory=3072 --wait=true --driver=none --bootstrapper=kubeadm │ minikube │ jenkins │ v1.37.0 │ 07 Dec 25 22:25 UTC │ │
└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/07 22:25:28
Running on machine: ubuntu-20-agent-9
Binary: Built with gc go1.25.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1207 22:25:28.274205 147665 out.go:360] Setting OutFile to fd 1 ...
I1207 22:25:28.274298 147665 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:25:28.274306 147665 out.go:374] Setting ErrFile to fd 2...
I1207 22:25:28.274310 147665 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 22:25:28.274506 147665 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-143418/.minikube/bin
I1207 22:25:28.274966 147665 out.go:368] Setting JSON to false
I1207 22:25:28.275748 147665 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4069,"bootTime":1765142259,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1207 22:25:28.275800 147665 start.go:143] virtualization: kvm guest
I1207 22:25:28.277886 147665 out.go:179] * minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
W1207 22:25:28.279229 147665 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22054-143418/.minikube/cache/preloaded-tarball: no such file or directory
I1207 22:25:28.279302 147665 out.go:179] - MINIKUBE_LOCATION=22054
I1207 22:25:28.279299 147665 notify.go:221] Checking for updates...
I1207 22:25:28.280566 147665 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1207 22:25:28.281702 147665 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22054-143418/kubeconfig
I1207 22:25:28.282798 147665 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-143418/.minikube
I1207 22:25:28.284007 147665 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1207 22:25:28.285181 147665 driver.go:422] Setting default libvirt URI to qemu:///system
I1207 22:25:28.298503 147665 out.go:179] * Using the none driver based on user configuration
I1207 22:25:28.299765 147665 start.go:309] selected driver: none
I1207 22:25:28.299779 147665 start.go:927] validating driver "none" against <nil>
I1207 22:25:28.299800 147665 start.go:938] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1207 22:25:28.299841 147665 start.go:1756] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
W1207 22:25:28.300151 147665 out.go:285] ! The 'none' driver does not respect the --memory flag
I1207 22:25:28.300656 147665 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1207 22:25:28.300969 147665 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1207 22:25:28.300999 147665 cni.go:84] Creating CNI manager for ""
I1207 22:25:28.301044 147665 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1207 22:25:28.301053 147665 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1207 22:25:28.301121 147665 start.go:353] cluster config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1207 22:25:28.302348 147665 out.go:179] * Starting "minikube" primary control-plane node in "minikube" cluster
I1207 22:25:28.303630 147665 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/config.json ...
I1207 22:25:28.303670 147665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/config.json: {Name:mk5e29c640f299af9c87cda17787fc521449c7fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1207 22:25:28.303796 147665 start.go:360] acquireMachinesLock for minikube: {Name:mk46b5f74fc4bf176e53e5157f7a1e6e21aaae8e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1207 22:25:28.303852 147665 start.go:364] duration metric: took 43.207µs to acquireMachinesLock for "minikube"
I1207 22:25:28.303866 147665 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I1207 22:25:28.303947 147665 start.go:125] createHost starting for "" (driver="none")
I1207 22:25:28.305232 147665 out.go:179] * Running on localhost (CPUs=8, Memory=32093MB, Disk=297540MB) ...
I1207 22:25:28.306294 147665 exec_runner.go:51] Run: systemctl --version
I1207 22:25:28.308476 147665 start.go:159] libmachine.API.Create for "minikube" (driver="none")
I1207 22:25:28.308512 147665 client.go:173] LocalClient.Create starting
I1207 22:25:28.308608 147665 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22054-143418/.minikube/certs/ca.pem
I1207 22:25:28.524502 147665 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22054-143418/.minikube/certs/cert.pem
I1207 22:25:28.630604 147665 client.go:176] duration metric: took 322.077749ms to LocalClient.Create
I1207 22:25:28.630646 147665 start.go:167] duration metric: took 322.17056ms to libmachine.API.Create "minikube"
I1207 22:25:28.630653 147665 start.go:293] postStartSetup for "minikube" (driver="none")
I1207 22:25:28.630716 147665 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1207 22:25:28.630753 147665 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1207 22:25:28.642115 147665 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1207 22:25:28.642147 147665 main.go:143] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1207 22:25:28.642155 147665 main.go:143] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1207 22:25:28.644535 147665 out.go:179] * OS release is Ubuntu 22.04.5 LTS
I1207 22:25:28.645828 147665 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-143418/.minikube/addons for local assets ...
I1207 22:25:28.645924 147665 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-143418/.minikube/files for local assets ...
I1207 22:25:28.645956 147665 start.go:296] duration metric: took 15.297408ms for postStartSetup
I1207 22:25:28.646532 147665 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/config.json ...
I1207 22:25:28.646683 147665 start.go:128] duration metric: took 342.726088ms to createHost
I1207 22:25:28.646697 147665 start.go:83] releasing machines lock for "minikube", held for 342.836526ms
I1207 22:25:28.648393 147665 out.go:179] * Found network options:
I1207 22:25:28.649590 147665 out.go:179] - HTTP_PROXY=172.16.1.1:1
W1207 22:25:28.650673 147665 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (10.154.0.4).
I1207 22:25:28.651894 147665 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
I1207 22:25:28.653352 147665 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1207 22:25:28.653439 147665 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
W1207 22:25:28.655536 147665 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1207 22:25:28.655611 147665 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1207 22:25:28.667109 147665 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I1207 22:25:28.667135 147665 start.go:496] detecting cgroup driver to use...
I1207 22:25:28.667167 147665 detect.go:190] detected "systemd" cgroup driver on host os
I1207 22:25:28.667276 147665 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1207 22:25:28.690971 147665 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1207 22:25:28.701794 147665 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1207 22:25:28.712559 147665 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
I1207 22:25:28.712617 147665 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1207 22:25:28.725324 147665 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1207 22:25:28.738043 147665 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1207 22:25:28.748393 147665 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1207 22:25:28.759717 147665 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1207 22:25:28.769826 147665 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1207 22:25:28.780893 147665 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1207 22:25:28.791238 147665 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1207 22:25:28.804755 147665 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1207 22:25:28.814251 147665 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1207 22:25:28.823444 147665 exec_runner.go:51] Run: sudo systemctl daemon-reload
I1207 22:25:29.053758 147665 exec_runner.go:51] Run: sudo systemctl restart containerd
I1207 22:25:29.166318 147665 start.go:496] detecting cgroup driver to use...
I1207 22:25:29.166369 147665 detect.go:190] detected "systemd" cgroup driver on host os
I1207 22:25:29.166506 147665 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1207 22:25:29.193670 147665 exec_runner.go:51] Run: which cri-dockerd
I1207 22:25:29.195150 147665 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1207 22:25:29.209937 147665 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
I1207 22:25:29.209975 147665 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
I1207 22:25:29.210030 147665 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
I1207 22:25:29.223473 147665 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
I1207 22:25:29.223722 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1509715507 /etc/systemd/system/cri-docker.service.d/10-cni.conf
I1207 22:25:29.235658 147665 exec_runner.go:51] Run: sudo systemctl unmask docker.service
I1207 22:25:29.476346 147665 exec_runner.go:51] Run: sudo systemctl enable docker.socket
I1207 22:25:29.691224 147665 docker.go:575] configuring docker to use "systemd" as cgroup driver...
I1207 22:25:29.691380 147665 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
I1207 22:25:29.691396 147665 exec_runner.go:203] rm: /etc/docker/daemon.json
I1207 22:25:29.691445 147665 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
I1207 22:25:29.701954 147665 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (129 bytes)
I1207 22:25:29.702115 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2079246243 /etc/docker/daemon.json
I1207 22:25:29.711725 147665 exec_runner.go:51] Run: sudo systemctl reset-failed docker
I1207 22:25:29.723138 147665 exec_runner.go:51] Run: sudo systemctl daemon-reload
I1207 22:25:29.940761 147665 exec_runner.go:51] Run: sudo systemctl restart docker
I1207 22:25:32.123967 147665 exec_runner.go:84] Completed: sudo systemctl restart docker: (2.183167593s)
I1207 22:25:32.124048 147665 exec_runner.go:51] Run: sudo systemctl is-active --quiet service docker
I1207 22:25:32.136559 147665 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
I1207 22:25:32.151606 147665 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
I1207 22:25:32.165997 147665 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I1207 22:25:32.178431 147665 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
I1207 22:25:32.394430 147665 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
I1207 22:25:32.609373 147665 exec_runner.go:51] Run: sudo systemctl daemon-reload
I1207 22:25:32.821988 147665 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
I1207 22:25:32.847692 147665 exec_runner.go:51] Run: sudo systemctl reset-failed cri-docker.service
I1207 22:25:32.859654 147665 exec_runner.go:51] Run: sudo systemctl daemon-reload
I1207 22:25:33.067850 147665 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
I1207 22:25:33.159570 147665 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I1207 22:25:33.173683 147665 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1207 22:25:33.173752 147665 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
I1207 22:25:33.175095 147665 start.go:564] Will wait 60s for crictl version
I1207 22:25:33.175148 147665 exec_runner.go:51] Run: which crictl
I1207 22:25:33.176152 147665 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
I1207 22:25:33.204354 147665 start.go:580] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 29.1.2
RuntimeApiVersion: v1
I1207 22:25:33.204436 147665 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I1207 22:25:33.228449 147665 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I1207 22:25:33.253660 147665 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.2 ...
I1207 22:25:33.254832 147665 out.go:179] - env HTTP_PROXY=172.16.1.1:1
I1207 22:25:33.255940 147665 exec_runner.go:51] Run: grep 127.0.0.1 host.minikube.internal$ /etc/hosts
I1207 22:25:33.258604 147665 out.go:179] - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
I1207 22:25:33.259595 147665 kubeadm.go:884] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1207 22:25:33.259734 147665 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
I1207 22:25:33.259744 147665 kubeadm.go:935] updating node { 10.154.0.4 8443 v1.34.2 docker true true} ...
I1207 22:25:33.259830 147665 kubeadm.go:947] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-9 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.154.0.4 --resolv-conf=/run/systemd/resolve/resolv.conf
[Install]
config:
{KubernetesVersion:v1.34.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
I1207 22:25:33.259874 147665 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
I1207 22:25:33.314885 147665 cni.go:84] Creating CNI manager for ""
I1207 22:25:33.314938 147665 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1207 22:25:33.314958 147665 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1207 22:25:33.314980 147665 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.154.0.4 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-9 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.154.0.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.154.0.4 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1207 22:25:33.315117 147665 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.154.0.4
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "ubuntu-20-agent-9"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "10.154.0.4"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "10.154.0.4"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1207 22:25:33.315264 147665 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
I1207 22:25:33.326941 147665 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.2: exit status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.34.2': No such file or directory
Initiating transfer...
I1207 22:25:33.327012 147665 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.2
I1207 22:25:33.336865 147665 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubeadm.sha256
I1207 22:25:33.336869 147665 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubelet.sha256
I1207 22:25:33.337010 147665 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I1207 22:25:33.336869 147665 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
I1207 22:25:33.336947 147665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-143418/.minikube/cache/linux/amd64/v1.34.2/kubeadm --> /var/lib/minikube/binaries/v1.34.2/kubeadm (74027192 bytes)
I1207 22:25:33.337147 147665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-143418/.minikube/cache/linux/amd64/v1.34.2/kubectl --> /var/lib/minikube/binaries/v1.34.2/kubectl (60559544 bytes)
I1207 22:25:33.354328 147665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-143418/.minikube/cache/linux/amd64/v1.34.2/kubelet --> /var/lib/minikube/binaries/v1.34.2/kubelet (59199780 bytes)
I1207 22:25:33.391647 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1535187293 /var/lib/minikube/binaries/v1.34.2/kubeadm
I1207 22:25:33.391955 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1599990728 /var/lib/minikube/binaries/v1.34.2/kubectl
I1207 22:25:33.408667 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2149841023 /var/lib/minikube/binaries/v1.34.2/kubelet
I1207 22:25:33.461217 147665 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1207 22:25:33.471822 147665 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
I1207 22:25:33.471843 147665 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I1207 22:25:33.471885 147665 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I1207 22:25:33.482262 147665 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
I1207 22:25:33.482418 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3414592711 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I1207 22:25:33.492131 147665 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
I1207 22:25:33.492154 147665 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
I1207 22:25:33.492193 147665 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
I1207 22:25:33.501956 147665 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1207 22:25:33.502093 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2085672220 /lib/systemd/system/kubelet.service
I1207 22:25:33.511586 147665 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
I1207 22:25:33.511713 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1694213752 /var/tmp/minikube/kubeadm.yaml.new
I1207 22:25:33.521028 147665 exec_runner.go:51] Run: grep 10.154.0.4 control-plane.minikube.internal$ /etc/hosts
I1207 22:25:33.522472 147665 exec_runner.go:51] Run: sudo systemctl daemon-reload
I1207 22:25:33.746685 147665 exec_runner.go:51] Run: sudo systemctl start kubelet
I1207 22:25:33.772520 147665 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube for IP: 10.154.0.4
I1207 22:25:33.772546 147665 certs.go:195] generating shared ca certs ...
I1207 22:25:33.772569 147665 certs.go:227] acquiring lock for ca certs: {Name:mk756b67f774ebe569237928b30a85bf8fe75494 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1207 22:25:33.772705 147665 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-143418/.minikube/ca.key
I1207 22:25:33.828710 147665 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-143418/.minikube/ca.crt ...
I1207 22:25:33.828741 147665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-143418/.minikube/ca.crt: {Name:mkf707e982cd1975cdfed9bd0f88a51bd4bb2311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1207 22:25:33.828932 147665 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-143418/.minikube/ca.key ...
I1207 22:25:33.828945 147665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-143418/.minikube/ca.key: {Name:mkf1cfbdbfeb609bcad88fa256b2b720e4e484cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1207 22:25:33.829023 147665 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-143418/.minikube/proxy-client-ca.key
I1207 22:25:34.037102 147665 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-143418/.minikube/proxy-client-ca.crt ...
I1207 22:25:34.037144 147665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-143418/.minikube/proxy-client-ca.crt: {Name:mk7b713d47de91a7370d2a3743da5cf0cb626f72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1207 22:25:34.037362 147665 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-143418/.minikube/proxy-client-ca.key ...
I1207 22:25:34.037379 147665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-143418/.minikube/proxy-client-ca.key: {Name:mk0a4ac79bec3e1fc8cd773ed7ef4c7a6a4ba544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1207 22:25:34.037486 147665 certs.go:257] generating profile certs ...
I1207 22:25:34.037569 147665 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/client.key
I1207 22:25:34.037590 147665 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/client.crt with IP's: []
I1207 22:25:34.110750 147665 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/client.crt ...
I1207 22:25:34.110787 147665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/client.crt: {Name:mk59b88b3b2c4d0b9807a7cb184b72d53eefdf84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1207 22:25:34.111001 147665 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/client.key ...
I1207 22:25:34.111021 147665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/client.key: {Name:mk18739e8c5dd7f2c8826f3c1110af35fd754287 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1207 22:25:34.111140 147665 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/apiserver.key.1b9420d6
I1207 22:25:34.111167 147665 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/apiserver.crt.1b9420d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.154.0.4]
I1207 22:25:34.312674 147665 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/apiserver.crt.1b9420d6 ...
I1207 22:25:34.312709 147665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/apiserver.crt.1b9420d6: {Name:mk4f6c1b545e8ae337a7ede0b19579897cc42222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1207 22:25:34.312926 147665 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/apiserver.key.1b9420d6 ...
I1207 22:25:34.312946 147665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/apiserver.key.1b9420d6: {Name:mkd8f9b36319cb05e8e64578162229dc1eb249bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1207 22:25:34.313065 147665 certs.go:382] copying /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/apiserver.crt.1b9420d6 -> /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/apiserver.crt
I1207 22:25:34.313172 147665 certs.go:386] copying /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/apiserver.key.1b9420d6 -> /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/apiserver.key
I1207 22:25:34.313251 147665 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/proxy-client.key
I1207 22:25:34.313277 147665 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I1207 22:25:34.345482 147665 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/proxy-client.crt ...
I1207 22:25:34.345508 147665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/proxy-client.crt: {Name:mk0d9927fe9e0c49ca4dc5fb1acde53600af5bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1207 22:25:34.345696 147665 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/proxy-client.key ...
I1207 22:25:34.345713 147665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/proxy-client.key: {Name:mk0403b293b12bc4e987308ab843b011900ffaff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1207 22:25:34.345922 147665 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-143418/.minikube/certs/ca-key.pem (1675 bytes)
I1207 22:25:34.345988 147665 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-143418/.minikube/certs/ca.pem (1082 bytes)
I1207 22:25:34.346031 147665 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-143418/.minikube/certs/cert.pem (1123 bytes)
I1207 22:25:34.346074 147665 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-143418/.minikube/certs/key.pem (1675 bytes)
I1207 22:25:34.346773 147665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-143418/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1207 22:25:34.346981 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2368941701 /var/lib/minikube/certs/ca.crt
I1207 22:25:34.358419 147665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-143418/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1207 22:25:34.358562 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1349169189 /var/lib/minikube/certs/ca.key
I1207 22:25:34.369161 147665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-143418/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1207 22:25:34.369303 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube323379477 /var/lib/minikube/certs/proxy-client-ca.crt
I1207 22:25:34.379877 147665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-143418/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1207 22:25:34.380021 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube934755363 /var/lib/minikube/certs/proxy-client-ca.key
I1207 22:25:34.390042 147665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
I1207 22:25:34.390214 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3702837495 /var/lib/minikube/certs/apiserver.crt
I1207 22:25:34.400239 147665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1207 22:25:34.400406 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1240425133 /var/lib/minikube/certs/apiserver.key
I1207 22:25:34.410990 147665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1207 22:25:34.411151 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2613394794 /var/lib/minikube/certs/proxy-client.crt
I1207 22:25:34.421608 147665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1207 22:25:34.421749 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1094857425 /var/lib/minikube/certs/proxy-client.key
I1207 22:25:34.431942 147665 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
I1207 22:25:34.431962 147665 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
I1207 22:25:34.432007 147665 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
I1207 22:25:34.441329 147665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-143418/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1207 22:25:34.441505 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4154313566 /usr/share/ca-certificates/minikubeCA.pem
I1207 22:25:34.451360 147665 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1207 22:25:34.451496 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube186732915 /var/lib/minikube/kubeconfig
I1207 22:25:34.461695 147665 exec_runner.go:51] Run: openssl version
I1207 22:25:34.464627 147665 exec_runner.go:51] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1207 22:25:34.474313 147665 exec_runner.go:51] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1207 22:25:34.483927 147665 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1207 22:25:34.485829 147665 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Dec 7 22:25 /usr/share/ca-certificates/minikubeCA.pem
I1207 22:25:34.485870 147665 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1207 22:25:34.490624 147665 exec_runner.go:51] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1207 22:25:34.499940 147665 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1207 22:25:34.501363 147665 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1207 22:25:34.501433 147665 kubeadm.go:401] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1207 22:25:34.501542 147665 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1207 22:25:34.519219 147665 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1207 22:25:34.530018 147665 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1207 22:25:34.539892 147665 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I1207 22:25:34.562810 147665 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1207 22:25:34.573726 147665 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1207 22:25:34.573748 147665 kubeadm.go:158] found existing configuration files:
I1207 22:25:34.573793 147665 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1207 22:25:34.583843 147665 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1207 22:25:34.583928 147665 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
I1207 22:25:34.594068 147665 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1207 22:25:34.605246 147665 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1207 22:25:34.605322 147665 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1207 22:25:34.616778 147665 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1207 22:25:34.628443 147665 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1207 22:25:34.628500 147665 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1207 22:25:34.637912 147665 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1207 22:25:34.647162 147665 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1207 22:25:34.647231 147665 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1207 22:25:34.656360 147665 exec_runner.go:97] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1207 22:25:34.694064 147665 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
I1207 22:25:34.694566 147665 kubeadm.go:319] [preflight] Running pre-flight checks
I1207 22:25:34.777808 147665 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1207 22:25:34.777997 147665 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1207 22:25:34.778019 147665 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1207 22:25:34.778024 147665 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1207 22:25:43.664598 147665 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1207 22:25:43.668022 147665 out.go:252] - Generating certificates and keys ...
I1207 22:25:43.668078 147665 kubeadm.go:319] [certs] Using existing ca certificate authority
I1207 22:25:43.668090 147665 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1207 22:25:44.077096 147665 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1207 22:25:44.384482 147665 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1207 22:25:44.890987 147665 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1207 22:25:45.188777 147665 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1207 22:25:45.430525 147665 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1207 22:25:45.430585 147665 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-9] and IPs [10.154.0.4 127.0.0.1 ::1]
I1207 22:25:45.748596 147665 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1207 22:25:45.748645 147665 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-9] and IPs [10.154.0.4 127.0.0.1 ::1]
I1207 22:25:46.106344 147665 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1207 22:25:46.338752 147665 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1207 22:25:46.393314 147665 kubeadm.go:319] [certs] Generating "sa" key and public key
I1207 22:25:46.393464 147665 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1207 22:25:46.448625 147665 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1207 22:25:46.540275 147665 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1207 22:25:46.624864 147665 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1207 22:25:46.727521 147665 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1207 22:25:46.894940 147665 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1207 22:25:46.895564 147665 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1207 22:25:46.897640 147665 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1207 22:25:46.899990 147665 out.go:252] - Booting up control plane ...
I1207 22:25:46.900017 147665 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1207 22:25:46.900032 147665 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1207 22:25:46.900485 147665 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1207 22:25:46.914344 147665 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1207 22:25:46.914373 147665 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1207 22:25:46.919027 147665 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1207 22:25:46.919335 147665 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1207 22:25:46.919367 147665 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1207 22:25:47.178057 147665 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1207 22:25:47.178087 147665 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1207 22:25:47.678392 147665 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.054489ms
I1207 22:25:47.682894 147665 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1207 22:25:47.682938 147665 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://10.154.0.4:8443/livez
I1207 22:25:47.682954 147665 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1207 22:25:47.682961 147665 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1207 22:25:51.111763 147665 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.428722009s
I1207 22:25:51.126658 147665 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.443646836s
I1207 22:25:52.684475 147665 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001439698s
I1207 22:25:52.700728 147665 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1207 22:25:52.710399 147665 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1207 22:25:52.719383 147665 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1207 22:25:52.719409 147665 kubeadm.go:319] [mark-control-plane] Marking the node ubuntu-20-agent-9 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1207 22:25:52.727186 147665 kubeadm.go:319] [bootstrap-token] Using token: e12c2p.5g90k451rleqyeb6
I1207 22:25:52.728404 147665 out.go:252] - Configuring RBAC rules ...
I1207 22:25:52.728435 147665 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1207 22:25:52.731303 147665 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1207 22:25:52.736035 147665 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1207 22:25:52.738184 147665 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1207 22:25:52.741476 147665 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1207 22:25:52.743701 147665 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1207 22:25:53.090726 147665 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1207 22:25:53.509242 147665 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1207 22:25:54.090613 147665 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1207 22:25:54.091468 147665 kubeadm.go:319]
I1207 22:25:54.091482 147665 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1207 22:25:54.091486 147665 kubeadm.go:319]
I1207 22:25:54.091492 147665 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1207 22:25:54.091495 147665 kubeadm.go:319]
I1207 22:25:54.091498 147665 kubeadm.go:319] mkdir -p $HOME/.kube
I1207 22:25:54.091502 147665 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1207 22:25:54.091506 147665 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1207 22:25:54.091509 147665 kubeadm.go:319]
I1207 22:25:54.091512 147665 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1207 22:25:54.091515 147665 kubeadm.go:319]
I1207 22:25:54.091519 147665 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1207 22:25:54.091521 147665 kubeadm.go:319]
I1207 22:25:54.091532 147665 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1207 22:25:54.091536 147665 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1207 22:25:54.091540 147665 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1207 22:25:54.091544 147665 kubeadm.go:319]
I1207 22:25:54.091548 147665 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1207 22:25:54.091553 147665 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1207 22:25:54.091557 147665 kubeadm.go:319]
I1207 22:25:54.091561 147665 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token e12c2p.5g90k451rleqyeb6 \
I1207 22:25:54.091566 147665 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:087e447345576a980037f7d5af4da5db4a36d22d65539224452920fd9c24b719 \
I1207 22:25:54.091570 147665 kubeadm.go:319] --control-plane
I1207 22:25:54.091574 147665 kubeadm.go:319]
I1207 22:25:54.091581 147665 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1207 22:25:54.091584 147665 kubeadm.go:319]
I1207 22:25:54.091589 147665 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token e12c2p.5g90k451rleqyeb6 \
I1207 22:25:54.091594 147665 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:087e447345576a980037f7d5af4da5db4a36d22d65539224452920fd9c24b719
I1207 22:25:54.094795 147665 cni.go:84] Creating CNI manager for ""
I1207 22:25:54.094820 147665 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1207 22:25:54.096570 147665 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1207 22:25:54.097852 147665 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
I1207 22:25:54.109977 147665 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (494 bytes)
I1207 22:25:54.110150 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2558285054 /etc/cni/net.d/1-k8s.conflist
I1207 22:25:54.124731 147665 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1207 22:25:54.124831 147665 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1207 22:25:54.124868 147665 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-9 minikube.k8s.io/updated_at=2025_12_07T22_25_54_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
I1207 22:25:54.134872 147665 ops.go:34] apiserver oom_adj: -16
I1207 22:25:54.196114 147665 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1207 22:25:54.696313 147665 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1207 22:25:55.197017 147665 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1207 22:25:55.696362 147665 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1207 22:25:56.197035 147665 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1207 22:25:56.697094 147665 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1207 22:25:57.196176 147665 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1207 22:25:57.696461 147665 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1207 22:25:58.196781 147665 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1207 22:25:58.696242 147665 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1207 22:25:58.763673 147665 kubeadm.go:1114] duration metric: took 4.63892707s to wait for elevateKubeSystemPrivileges
I1207 22:25:58.763715 147665 kubeadm.go:403] duration metric: took 24.262316876s to StartCluster
I1207 22:25:58.763737 147665 settings.go:142] acquiring lock: {Name:mkbfc722e6671966de34364f67dfb7d69b6080e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1207 22:25:58.763840 147665 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/22054-143418/kubeconfig
I1207 22:25:58.764193 147665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-143418/kubeconfig: {Name:mkf50b93a3b2c6268054a4d9269b7d6c1599cc6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1207 22:25:58.764378 147665 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1207 22:25:58.764423 147665 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1207 22:25:58.764550 147665 addons.go:70] Setting storage-provisioner=true in profile "minikube"
I1207 22:25:58.764581 147665 addons.go:239] Setting addon storage-provisioner=true in "minikube"
I1207 22:25:58.764614 147665 host.go:66] Checking if "minikube" exists ...
I1207 22:25:58.764617 147665 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1207 22:25:58.764550 147665 addons.go:70] Setting default-storageclass=true in profile "minikube"
I1207 22:25:58.764680 147665 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I1207 22:25:58.765484 147665 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I1207 22:25:58.765509 147665 api_server.go:166] Checking apiserver status ...
I1207 22:25:58.765539 147665 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1207 22:25:58.765881 147665 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I1207 22:25:58.765924 147665 api_server.go:166] Checking apiserver status ...
I1207 22:25:58.765962 147665 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1207 22:25:58.766111 147665 out.go:179] * Configuring local host environment ...
W1207 22:25:58.767264 147665 out.go:285] *
W1207 22:25:58.767284 147665 out.go:285] ! The 'none' driver is designed for experts who need to integrate with an existing VM
W1207 22:25:58.767291 147665 out.go:285] * Most users should use the newer 'docker' driver instead, which does not require root!
W1207 22:25:58.767299 147665 out.go:285] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
W1207 22:25:58.767309 147665 out.go:285] *
W1207 22:25:58.767350 147665 out.go:285] ! kubectl and minikube configuration will be stored in /home/jenkins
W1207 22:25:58.767358 147665 out.go:285] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
W1207 22:25:58.767365 147665 out.go:285] *
W1207 22:25:58.767387 147665 out.go:285] - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
W1207 22:25:58.767394 147665 out.go:285] - sudo chown -R $USER $HOME/.kube $HOME/.minikube
W1207 22:25:58.767401 147665 out.go:285] *
W1207 22:25:58.767407 147665 out.go:285] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
I1207 22:25:58.767435 147665 start.go:236] Will wait 6m0s for node &{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I1207 22:25:58.768865 147665 out.go:179] * Verifying Kubernetes components...
I1207 22:25:58.770287 147665 exec_runner.go:51] Run: sudo systemctl daemon-reload
I1207 22:25:58.786287 147665 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/149457/cgroup
I1207 22:25:58.786343 147665 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/149457/cgroup
W1207 22:25:58.800849 147665 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/149457/cgroup: exit status 1
stdout:
stderr:
I1207 22:25:58.800936 147665 exec_runner.go:51] Run: ls
W1207 22:25:58.800987 147665 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/149457/cgroup: exit status 1
stdout:
stderr:
I1207 22:25:58.801040 147665 exec_runner.go:51] Run: ls
I1207 22:25:58.802734 147665 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I1207 22:25:58.802788 147665 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I1207 22:25:58.811282 147665 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I1207 22:25:58.811515 147665 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I1207 22:25:58.812153 147665 kapi.go:59] client config for minikube: &rest.Config{Host:"https://10.154.0.4:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/client.key", CAFile:"/home/jenkins/minikube-integration/22054-143418/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1207 22:25:58.812757 147665 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1207 22:25:58.812780 147665 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1207 22:25:58.812788 147665 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1207 22:25:58.812793 147665 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1207 22:25:58.812800 147665 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1207 22:25:58.812939 147665 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1207 22:25:58.813314 147665 addons.go:239] Setting addon default-storageclass=true in "minikube"
I1207 22:25:58.813355 147665 host.go:66] Checking if "minikube" exists ...
I1207 22:25:58.814060 147665 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I1207 22:25:58.814081 147665 api_server.go:166] Checking apiserver status ...
I1207 22:25:58.814120 147665 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1207 22:25:58.814297 147665 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1207 22:25:58.814337 147665 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1207 22:25:58.814490 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube127227585 /etc/kubernetes/addons/storage-provisioner.yaml
I1207 22:25:58.828214 147665 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1207 22:25:58.838182 147665 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/149457/cgroup
W1207 22:25:58.856702 147665 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/149457/cgroup: exit status 1
stdout:
stderr:
I1207 22:25:58.856864 147665 exec_runner.go:51] Run: ls
I1207 22:25:58.859626 147665 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 127.0.0.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1207 22:25:58.867350 147665 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I1207 22:25:58.874895 147665 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I1207 22:25:58.874961 147665 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1207 22:25:58.874984 147665 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1207 22:25:58.875149 147665 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3146531359 /etc/kubernetes/addons/storageclass.yaml
I1207 22:25:58.896595 147665 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1207 22:25:59.077178 147665 start.go:977] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
I1207 22:25:59.077952 147665 kapi.go:59] client config for minikube: &rest.Config{Host:"https://10.154.0.4:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/client.key", CAFile:"/home/jenkins/minikube-integration/22054-143418/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1207 22:25:59.113547 147665 exec_runner.go:51] Run: sudo systemctl start kubelet
I1207 22:25:59.134356 147665 kapi.go:59] client config for minikube: &rest.Config{Host:"https://10.154.0.4:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/client.key", CAFile:"/home/jenkins/minikube-integration/22054-143418/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1207 22:25:59.134660 147665 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-9" to be "Ready" ...
I1207 22:25:59.137031 147665 node_ready.go:49] node "ubuntu-20-agent-9" is "Ready"
I1207 22:25:59.137055 147665 node_ready.go:38] duration metric: took 2.373807ms for node "ubuntu-20-agent-9" to be "Ready" ...
I1207 22:25:59.137070 147665 api_server.go:52] waiting for apiserver process to appear ...
I1207 22:25:59.137113 147665 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1207 22:25:59.154311 147665 api_server.go:72] duration metric: took 386.825267ms to wait for apiserver process to appear ...
I1207 22:25:59.154345 147665 api_server.go:88] waiting for apiserver healthz status ...
I1207 22:25:59.154368 147665 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I1207 22:25:59.158931 147665 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I1207 22:25:59.159832 147665 api_server.go:141] control plane version: v1.34.2
I1207 22:25:59.159860 147665 api_server.go:131] duration metric: took 5.507193ms to wait for apiserver health ...
I1207 22:25:59.159870 147665 system_pods.go:43] waiting for kube-system pods to appear ...
I1207 22:25:59.162913 147665 system_pods.go:59] 5 kube-system pods found
I1207 22:25:59.162961 147665 system_pods.go:61] "etcd-ubuntu-20-agent-9" [984fb506-2b95-4fd2-a3f6-90bab91673d6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1207 22:25:59.162977 147665 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-9" [a51c8055-c56e-4328-b3ef-008eb04dc72e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1207 22:25:59.162986 147665 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-9" [bbb55451-ce2b-45f3-8da0-9400f5b4922d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1207 22:25:59.162998 147665 system_pods.go:61] "kube-proxy-9wczh" [6fa0d934-a836-42d2-b765-5ec6a1604cb1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1207 22:25:59.163006 147665 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-9" [ec68de60-e8ac-4eef-98e6-13e0c3e44169] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1207 22:25:59.163013 147665 system_pods.go:74] duration metric: took 3.06288ms to wait for pod list to return data ...
I1207 22:25:59.163023 147665 default_sa.go:34] waiting for default service account to be created ...
I1207 22:25:59.165017 147665 default_sa.go:45] found service account: "default"
I1207 22:25:59.165049 147665 default_sa.go:55] duration metric: took 2.019427ms for default service account to be created ...
I1207 22:25:59.165062 147665 system_pods.go:116] waiting for k8s-apps to be running ...
I1207 22:25:59.192128 147665 system_pods.go:86] 6 kube-system pods found
I1207 22:25:59.192165 147665 system_pods.go:89] "etcd-ubuntu-20-agent-9" [984fb506-2b95-4fd2-a3f6-90bab91673d6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1207 22:25:59.192178 147665 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-9" [a51c8055-c56e-4328-b3ef-008eb04dc72e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1207 22:25:59.192188 147665 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-9" [bbb55451-ce2b-45f3-8da0-9400f5b4922d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1207 22:25:59.192198 147665 system_pods.go:89] "kube-proxy-9wczh" [6fa0d934-a836-42d2-b765-5ec6a1604cb1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1207 22:25:59.192208 147665 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-9" [ec68de60-e8ac-4eef-98e6-13e0c3e44169] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1207 22:25:59.192216 147665 system_pods.go:89] "storage-provisioner" [1fca3b52-fea6-4632-a2dd-edabd44d6fa7] Pending
I1207 22:25:59.192266 147665 retry.go:31] will retry after 269.585449ms: missing components: kube-dns, kube-proxy
I1207 22:25:59.200487 147665 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
I1207 22:25:59.202243 147665 addons.go:530] duration metric: took 437.831066ms for enable addons: enabled=[default-storageclass storage-provisioner]
I1207 22:25:59.466622 147665 system_pods.go:86] 8 kube-system pods found
I1207 22:25:59.466663 147665 system_pods.go:89] "coredns-66bc5c9577-2tjgd" [51678b09-97ed-4f1b-86a2-6bf589b0df9c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1207 22:25:59.466676 147665 system_pods.go:89] "coredns-66bc5c9577-kt4qp" [18a87cdb-7c0e-44bc-a68d-97c9c261e65c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1207 22:25:59.466694 147665 system_pods.go:89] "etcd-ubuntu-20-agent-9" [984fb506-2b95-4fd2-a3f6-90bab91673d6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1207 22:25:59.466703 147665 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-9" [a51c8055-c56e-4328-b3ef-008eb04dc72e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1207 22:25:59.466711 147665 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-9" [bbb55451-ce2b-45f3-8da0-9400f5b4922d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1207 22:25:59.466720 147665 system_pods.go:89] "kube-proxy-9wczh" [6fa0d934-a836-42d2-b765-5ec6a1604cb1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1207 22:25:59.466729 147665 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-9" [ec68de60-e8ac-4eef-98e6-13e0c3e44169] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1207 22:25:59.466736 147665 system_pods.go:89] "storage-provisioner" [1fca3b52-fea6-4632-a2dd-edabd44d6fa7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1207 22:25:59.466756 147665 retry.go:31] will retry after 244.246793ms: missing components: kube-dns, kube-proxy
I1207 22:25:59.581065 147665 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I1207 22:25:59.714712 147665 system_pods.go:86] 8 kube-system pods found
I1207 22:25:59.714749 147665 system_pods.go:89] "coredns-66bc5c9577-2tjgd" [51678b09-97ed-4f1b-86a2-6bf589b0df9c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1207 22:25:59.714759 147665 system_pods.go:89] "coredns-66bc5c9577-kt4qp" [18a87cdb-7c0e-44bc-a68d-97c9c261e65c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1207 22:25:59.714769 147665 system_pods.go:89] "etcd-ubuntu-20-agent-9" [984fb506-2b95-4fd2-a3f6-90bab91673d6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1207 22:25:59.714780 147665 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-9" [a51c8055-c56e-4328-b3ef-008eb04dc72e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1207 22:25:59.714788 147665 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-9" [bbb55451-ce2b-45f3-8da0-9400f5b4922d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1207 22:25:59.714801 147665 system_pods.go:89] "kube-proxy-9wczh" [6fa0d934-a836-42d2-b765-5ec6a1604cb1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1207 22:25:59.714812 147665 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-9" [ec68de60-e8ac-4eef-98e6-13e0c3e44169] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1207 22:25:59.714823 147665 system_pods.go:89] "storage-provisioner" [1fca3b52-fea6-4632-a2dd-edabd44d6fa7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1207 22:25:59.714846 147665 retry.go:31] will retry after 388.623519ms: missing components: kube-dns, kube-proxy
I1207 22:26:00.108562 147665 system_pods.go:86] 8 kube-system pods found
I1207 22:26:00.108603 147665 system_pods.go:89] "coredns-66bc5c9577-2tjgd" [51678b09-97ed-4f1b-86a2-6bf589b0df9c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1207 22:26:00.108615 147665 system_pods.go:89] "coredns-66bc5c9577-kt4qp" [18a87cdb-7c0e-44bc-a68d-97c9c261e65c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1207 22:26:00.108623 147665 system_pods.go:89] "etcd-ubuntu-20-agent-9" [984fb506-2b95-4fd2-a3f6-90bab91673d6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1207 22:26:00.108637 147665 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-9" [a51c8055-c56e-4328-b3ef-008eb04dc72e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1207 22:26:00.108645 147665 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-9" [bbb55451-ce2b-45f3-8da0-9400f5b4922d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1207 22:26:00.108659 147665 system_pods.go:89] "kube-proxy-9wczh" [6fa0d934-a836-42d2-b765-5ec6a1604cb1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1207 22:26:00.108667 147665 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-9" [ec68de60-e8ac-4eef-98e6-13e0c3e44169] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1207 22:26:00.108688 147665 system_pods.go:89] "storage-provisioner" [1fca3b52-fea6-4632-a2dd-edabd44d6fa7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1207 22:26:00.108709 147665 retry.go:31] will retry after 497.4269ms: missing components: kube-dns, kube-proxy
I1207 22:26:00.610116 147665 system_pods.go:86] 8 kube-system pods found
I1207 22:26:00.610151 147665 system_pods.go:89] "coredns-66bc5c9577-2tjgd" [51678b09-97ed-4f1b-86a2-6bf589b0df9c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1207 22:26:00.610162 147665 system_pods.go:89] "coredns-66bc5c9577-kt4qp" [18a87cdb-7c0e-44bc-a68d-97c9c261e65c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1207 22:26:00.610170 147665 system_pods.go:89] "etcd-ubuntu-20-agent-9" [984fb506-2b95-4fd2-a3f6-90bab91673d6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1207 22:26:00.610180 147665 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-9" [a51c8055-c56e-4328-b3ef-008eb04dc72e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1207 22:26:00.610189 147665 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-9" [bbb55451-ce2b-45f3-8da0-9400f5b4922d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1207 22:26:00.610198 147665 system_pods.go:89] "kube-proxy-9wczh" [6fa0d934-a836-42d2-b765-5ec6a1604cb1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1207 22:26:00.610206 147665 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-9" [ec68de60-e8ac-4eef-98e6-13e0c3e44169] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1207 22:26:00.610219 147665 system_pods.go:89] "storage-provisioner" [1fca3b52-fea6-4632-a2dd-edabd44d6fa7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1207 22:26:00.610243 147665 retry.go:31] will retry after 500.774576ms: missing components: kube-dns, kube-proxy
I1207 22:26:01.115352 147665 system_pods.go:86] 7 kube-system pods found
I1207 22:26:01.115414 147665 system_pods.go:89] "coredns-66bc5c9577-kt4qp" [18a87cdb-7c0e-44bc-a68d-97c9c261e65c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1207 22:26:01.115423 147665 system_pods.go:89] "etcd-ubuntu-20-agent-9" [984fb506-2b95-4fd2-a3f6-90bab91673d6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1207 22:26:01.115431 147665 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-9" [a51c8055-c56e-4328-b3ef-008eb04dc72e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1207 22:26:01.115436 147665 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-9" [bbb55451-ce2b-45f3-8da0-9400f5b4922d] Running
I1207 22:26:01.115440 147665 system_pods.go:89] "kube-proxy-9wczh" [6fa0d934-a836-42d2-b765-5ec6a1604cb1] Running
I1207 22:26:01.115445 147665 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-9" [ec68de60-e8ac-4eef-98e6-13e0c3e44169] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1207 22:26:01.115454 147665 system_pods.go:89] "storage-provisioner" [1fca3b52-fea6-4632-a2dd-edabd44d6fa7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1207 22:26:01.115461 147665 system_pods.go:126] duration metric: took 1.950393197s to wait for k8s-apps to be running ...
I1207 22:26:01.115473 147665 system_svc.go:44] waiting for kubelet service to be running ....
I1207 22:26:01.115526 147665 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I1207 22:26:01.132035 147665 system_svc.go:56] duration metric: took 16.547928ms WaitForService to wait for kubelet
I1207 22:26:01.132070 147665 kubeadm.go:587] duration metric: took 2.364598088s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1207 22:26:01.132099 147665 node_conditions.go:102] verifying NodePressure condition ...
I1207 22:26:01.135428 147665 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I1207 22:26:01.135467 147665 node_conditions.go:123] node cpu capacity is 8
I1207 22:26:01.135488 147665 node_conditions.go:105] duration metric: took 3.381382ms to run NodePressure ...
I1207 22:26:01.135503 147665 start.go:242] waiting for startup goroutines ...
I1207 22:26:01.135513 147665 start.go:247] waiting for cluster config update ...
I1207 22:26:01.135528 147665 start.go:256] writing updated cluster config ...
I1207 22:26:01.135826 147665 exec_runner.go:51] Run: rm -f paused
I1207 22:26:01.137161 147665 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1207 22:26:01.137654 147665 kapi.go:59] client config for minikube: &rest.Config{Host:"https://10.154.0.4:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-143418/.minikube/profiles/minikube/client.key", CAFile:"/home/jenkins/minikube-integration/22054-143418/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, U
serAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1207 22:26:01.140418 147665 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kt4qp" in "kube-system" namespace to be "Ready" or be gone ...
W1207 22:26:03.145479 147665 pod_ready.go:104] pod "coredns-66bc5c9577-kt4qp" is not "Ready", error: <nil>
W1207 22:29:59.646412 147665 pod_ready.go:104] pod "coredns-66bc5c9577-kt4qp" is not "Ready", error: <nil>
I1207 22:30:01.137864 147665 pod_ready.go:86] duration metric: took 3m59.997404779s for pod "coredns-66bc5c9577-kt4qp" in "kube-system" namespace to be "Ready" or be gone ...
W1207 22:30:01.137914 147665 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-dns" label are "Ready", will retry: waitPodCondition: context deadline exceeded
I1207 22:30:01.137933 147665 pod_ready.go:40] duration metric: took 4m0.00073426s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1207 22:30:01.140396 147665 out.go:203]
W1207 22:30:01.141740 147665 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
I1207 22:30:01.142814 147665 out.go:203]
==> Docker <==
Dec 07 22:25:36 ubuntu-20-agent-9 cri-dockerd[148232]: time="2025-12-07T22:25:36Z" level=info msg="Stop pulling image registry.k8s.io/kube-apiserver:v1.34.2: Status: Downloaded newer image for registry.k8s.io/kube-apiserver:v1.34.2"
Dec 07 22:25:38 ubuntu-20-agent-9 cri-dockerd[148232]: time="2025-12-07T22:25:38Z" level=info msg="Stop pulling image registry.k8s.io/kube-controller-manager:v1.34.2: Status: Downloaded newer image for registry.k8s.io/kube-controller-manager:v1.34.2"
Dec 07 22:25:39 ubuntu-20-agent-9 cri-dockerd[148232]: time="2025-12-07T22:25:39Z" level=info msg="Stop pulling image registry.k8s.io/kube-scheduler:v1.34.2: Status: Downloaded newer image for registry.k8s.io/kube-scheduler:v1.34.2"
Dec 07 22:25:40 ubuntu-20-agent-9 cri-dockerd[148232]: time="2025-12-07T22:25:40Z" level=info msg="Stop pulling image registry.k8s.io/kube-proxy:v1.34.2: Status: Downloaded newer image for registry.k8s.io/kube-proxy:v1.34.2"
Dec 07 22:25:41 ubuntu-20-agent-9 cri-dockerd[148232]: time="2025-12-07T22:25:41Z" level=info msg="Stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: Status: Downloaded newer image for registry.k8s.io/coredns/coredns:v1.12.1"
Dec 07 22:25:42 ubuntu-20-agent-9 cri-dockerd[148232]: time="2025-12-07T22:25:42Z" level=info msg="Stop pulling image registry.k8s.io/pause:3.10.1: Status: Downloaded newer image for registry.k8s.io/pause:3.10.1"
Dec 07 22:25:43 ubuntu-20-agent-9 cri-dockerd[148232]: time="2025-12-07T22:25:43Z" level=info msg="Stop pulling image registry.k8s.io/etcd:3.6.5-0: Status: Downloaded newer image for registry.k8s.io/etcd:3.6.5-0"
Dec 07 22:25:49 ubuntu-20-agent-9 dockerd[147882]: time="2025-12-07T22:25:49.520680264Z" level=info msg="sbJoin: gwep4 ''->'', gwep6 ''->''" eid=3f0b209182aa ep=k8s_POD_kube-apiserver-ubuntu-20-agent-9_kube-system_1e7a0c6cbe7dcf72abb842f35540758a_0 net=host nid=facef7ad65e0
Dec 07 22:25:49 ubuntu-20-agent-9 dockerd[147882]: time="2025-12-07T22:25:49.522555370Z" level=info msg="sbJoin: gwep4 ''->'', gwep6 ''->''" eid=200dc33735fe ep=k8s_POD_etcd-ubuntu-20-agent-9_kube-system_2396e1d47c0c947f6a40f77e244026a1_0 net=host nid=facef7ad65e0
Dec 07 22:25:49 ubuntu-20-agent-9 dockerd[147882]: time="2025-12-07T22:25:49.527187977Z" level=info msg="sbJoin: gwep4 ''->'', gwep6 ''->''" eid=7507cef0824b ep=k8s_POD_kube-controller-manager-ubuntu-20-agent-9_kube-system_bbd2353b46295316706003102449676a_0 net=host nid=facef7ad65e0
Dec 07 22:25:49 ubuntu-20-agent-9 dockerd[147882]: time="2025-12-07T22:25:49.530513865Z" level=info msg="sbJoin: gwep4 ''->'', gwep6 ''->''" eid=3d7f6522f9e3 ep=k8s_POD_kube-scheduler-ubuntu-20-agent-9_kube-system_16d952d22a1642b91be6ee3474ad1514_0 net=host nid=facef7ad65e0
Dec 07 22:25:49 ubuntu-20-agent-9 cri-dockerd[148232]: time="2025-12-07T22:25:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e68148de7219f913ad7d1c6dae6c4e145e504a6c97b66884c0beaca5637cede5/resolv.conf as [nameserver 169.254.169.254 nameserver 169.254.169.254 search europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
Dec 07 22:25:49 ubuntu-20-agent-9 cri-dockerd[148232]: time="2025-12-07T22:25:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/299b8799f4fc0a9471b5817c964adc26086e2afda1966c71774850235eea64e9/resolv.conf as [nameserver 169.254.169.254 nameserver 169.254.169.254 search europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
Dec 07 22:25:49 ubuntu-20-agent-9 cri-dockerd[148232]: time="2025-12-07T22:25:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eeaf5b1438f0b49b5bde742bf7bd81a92fd6dffaddda488113482df92e92d46b/resolv.conf as [nameserver 169.254.169.254 nameserver 169.254.169.254 search europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
Dec 07 22:25:49 ubuntu-20-agent-9 cri-dockerd[148232]: time="2025-12-07T22:25:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8e659fa49e27948af27e4a2e76c70f33902200615190b61af926758e04c5df69/resolv.conf as [nameserver 169.254.169.254 nameserver 169.254.169.254 search europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
Dec 07 22:25:59 ubuntu-20-agent-9 dockerd[147882]: time="2025-12-07T22:25:59.635530153Z" level=info msg="sbJoin: gwep4 ''->'', gwep6 ''->''" eid=4dc880cc660e ep=k8s_POD_storage-provisioner_kube-system_1fca3b52-fea6-4632-a2dd-edabd44d6fa7_0 net=host nid=facef7ad65e0
Dec 07 22:25:59 ubuntu-20-agent-9 cri-dockerd[148232]: time="2025-12-07T22:25:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/60fa2038780864de592ca140d8bf4a234d69504055474d12c0e55c6cd0c5c918/resolv.conf as [nameserver 169.254.169.254 nameserver 169.254.169.254 search europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
Dec 07 22:25:59 ubuntu-20-agent-9 dockerd[147882]: time="2025-12-07T22:25:59.668302371Z" level=info msg="sbJoin: gwep4 ''->'', gwep6 ''->''" eid=36dfff44ac6e ep=k8s_POD_coredns-66bc5c9577-kt4qp_kube-system_18a87cdb-7c0e-44bc-a68d-97c9c261e65c_0 net=none nid=9a05f5a43009
Dec 07 22:25:59 ubuntu-20-agent-9 cri-dockerd[148232]: time="2025-12-07T22:25:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e714ad346029c1fb4c2898fca1002dd1671bffc45c66ca48abbdf7004516d80e/resolv.conf as [nameserver 169.254.169.254 nameserver 169.254.169.254 search europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
Dec 07 22:25:59 ubuntu-20-agent-9 dockerd[147882]: time="2025-12-07T22:25:59.842983783Z" level=info msg="sbJoin: gwep4 ''->'', gwep6 ''->''" eid=02cbcfe53eee ep=k8s_POD_kube-proxy-9wczh_kube-system_6fa0d934-a836-42d2-b765-5ec6a1604cb1_0 net=host nid=facef7ad65e0
Dec 07 22:25:59 ubuntu-20-agent-9 cri-dockerd[148232]: time="2025-12-07T22:25:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0f0542f2c3c6ab8be18b4a5196564d4e2f9594330c22d3eb09f42a78a65dd0b7/resolv.conf as [nameserver 169.254.169.254 nameserver 169.254.169.254 search europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal]"
Dec 07 22:26:01 ubuntu-20-agent-9 cri-dockerd[148232]: time="2025-12-07T22:26:01Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/storage-provisioner:v5: Status: Downloaded newer image for gcr.io/k8s-minikube/storage-provisioner:v5"
Dec 07 22:26:03 ubuntu-20-agent-9 cri-dockerd[148232]: time="2025-12-07T22:26:03Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
Dec 07 22:27:50 ubuntu-20-agent-9 dockerd[147882]: time="2025-12-07T22:27:50.713713922Z" level=info msg="ignoring event" container=2a4d7cbfdb6e8ea660fe5abc412bcb872deed7009db46558534ff0f606cd7f18 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 07 22:29:40 ubuntu-20-agent-9 dockerd[147882]: time="2025-12-07T22:29:40.851308277Z" level=info msg="ignoring event" container=5fdec11da9016282fda6e771011a913a3c348dcde8953fd8db00ad5a7d0ae393 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
==> container status <==
CONTAINER       IMAGE                                                                                                                CREATED          STATE     NAME                      ATTEMPT   POD ID          POD                                         NAMESPACE
e3e5d324c35f2   52546a367cc9e                                                                                                        21 seconds ago   Running   coredns                   2         e714ad346029c   coredns-66bc5c9577-kt4qp                    kube-system
5fdec11da9016   52546a367cc9e                                                                                                        2 minutes ago    Exited    coredns                   1         e714ad346029c   coredns-66bc5c9577-kt4qp                    kube-system
01e4c2dcd3579   gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944      4 minutes ago    Running   storage-provisioner       0         60fa203878086   storage-provisioner                         kube-system
6bb00206726a6   8aa150647e88a                                                                                                        4 minutes ago    Running   kube-proxy                0         0f0542f2c3c6a   kube-proxy-9wczh                            kube-system
bfd7403486544   88320b5498ff2                                                                                                        4 minutes ago    Running   kube-scheduler            0         8e659fa49e279   kube-scheduler-ubuntu-20-agent-9            kube-system
ce198f0e6597e   01e8bacf0f500                                                                                                        4 minutes ago    Running   kube-controller-manager   0         eeaf5b1438f0b   kube-controller-manager-ubuntu-20-agent-9   kube-system
90207585b490b   a3e246e9556e9                                                                                                        4 minutes ago    Running   etcd                      0         299b8799f4fc0   etcd-ubuntu-20-agent-9                      kube-system
32c7f93a6297a   a5f569d49a979                                                                                                        4 minutes ago    Running   kube-apiserver            0         e68148de7219f   kube-apiserver-ubuntu-20-agent-9            kube-system
==> coredns [5fdec11da901] <==
[ERROR] plugin/errors: 2 8113183778018567587.4326330418841677739. HINFO: read udp 10.244.0.2:33017->169.254.169.254:53: i/o timeout
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] 127.0.0.1:36583 - 43093 "HINFO IN 8113183778018567587.4326330418841677739. udp 57 false 512" - - 0 2.000314593s
[ERROR] plugin/errors: 2 8113183778018567587.4326330418841677739. HINFO: read udp 10.244.0.2:52174->169.254.169.254:53: i/o timeout
[INFO] 127.0.0.1:50318 - 16289 "HINFO IN 8113183778018567587.4326330418841677739. udp 57 false 512" - - 0 2.000920242s
[ERROR] plugin/errors: 2 8113183778018567587.4326330418841677739. HINFO: read udp 10.244.0.2:59678->169.254.169.254:53: i/o timeout
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> coredns [e3e5d324c35f] <==
maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 876af57068f747144f204884e843f6792435faec005aab1f10bd81e6ffca54e010e4374994d8f544c4f6711272ab5662d0892980e63ccc3ba8ba9e3fbcc5e4d9
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:39208 - 40515 "HINFO IN 2665249663174030882.7075666015284539314. udp 57 false 512" - - 0 6.00245305s
[ERROR] plugin/errors: 2 2665249663174030882.7075666015284539314. HINFO: read udp 10.244.0.2:39078->169.254.169.254:53: i/o timeout
[INFO] 127.0.0.1:43714 - 9509 "HINFO IN 2665249663174030882.7075666015284539314. udp 57 false 512" - - 0 6.001645109s
[ERROR] plugin/errors: 2 2665249663174030882.7075666015284539314. HINFO: read udp 10.244.0.2:34847->169.254.169.254:53: i/o timeout
[INFO] 127.0.0.1:50240 - 51173 "HINFO IN 2665249663174030882.7075666015284539314. udp 57 false 512" - - 0 2.001144807s
[ERROR] plugin/errors: 2 2665249663174030882.7075666015284539314. HINFO: read udp 10.244.0.2:55371->169.254.169.254:53: i/o timeout
[INFO] 127.0.0.1:48457 - 13303 "HINFO IN 2665249663174030882.7075666015284539314. udp 57 false 512" - - 0 6.002954056s
[ERROR] plugin/errors: 2 2665249663174030882.7075666015284539314. HINFO: read udp 10.244.0.2:46303->169.254.169.254:53: i/o timeout
[INFO] 127.0.0.1:35665 - 50101 "HINFO IN 2665249663174030882.7075666015284539314. udp 57 false 512" - - 0 2.000598519s
[ERROR] plugin/errors: 2 2665249663174030882.7075666015284539314. HINFO: read udp 10.244.0.2:55153->169.254.169.254:53: i/o timeout
==> describe nodes <==
Name:               ubuntu-20-agent-9
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=ubuntu-20-agent-9
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2025_12_07T22_25_54_0700
                    minikube.k8s.io/version=v1.37.0
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 07 Dec 2025 22:25:51 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  ubuntu-20-agent-9
  AcquireTime:     <unset>
  RenewTime:       Sun, 07 Dec 2025 22:29:59 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sun, 07 Dec 2025 22:29:07 +0000   Sun, 07 Dec 2025 22:25:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sun, 07 Dec 2025 22:29:07 +0000   Sun, 07 Dec 2025 22:25:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sun, 07 Dec 2025 22:29:07 +0000   Sun, 07 Dec 2025 22:25:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sun, 07 Dec 2025 22:29:07 +0000   Sun, 07 Dec 2025 22:25:51 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  10.154.0.4
  Hostname:    ubuntu-20-agent-9
Capacity:
  cpu:                8
  ephemeral-storage:  304681132Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32863360Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  304681132Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32863360Ki
  pods:               110
System Info:
  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
  System UUID:                4894487b-7b30-e033-3a9d-c6f45b6c4cf8
  Boot ID:                    88878944-d0e4-4c76-a724-927dfbd47d7a
  Kernel Version:             6.8.0-1044-gcp
  OS Image:                   Ubuntu 22.04.5 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://29.1.2
  Kubelet Version:            v1.34.2
  Kube-Proxy Version:
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (7 in total)
  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
  kube-system                 coredns-66bc5c9577-kt4qp                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m2s
  kube-system                 etcd-ubuntu-20-agent-9                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m8s
  kube-system                 kube-apiserver-ubuntu-20-agent-9             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m9s
  kube-system                 kube-controller-manager-ubuntu-20-agent-9    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m8s
  kube-system                 kube-proxy-9wczh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
  kube-system                 kube-scheduler-ubuntu-20-agent-9             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m8s
  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                750m (9%)   0 (0%)
  memory             170Mi (0%)  170Mi (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                    From             Message
  ----    ------                   ----                   ----             -------
  Normal  Starting                 4m1s                   kube-proxy
  Normal  Starting                 4m14s                  kubelet          Starting kubelet.
  Normal  NodeHasSufficientMemory  4m14s (x8 over 4m14s)  kubelet          Node ubuntu-20-agent-9 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    4m14s (x8 over 4m14s)  kubelet          Node ubuntu-20-agent-9 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     4m14s (x7 over 4m14s)  kubelet          Node ubuntu-20-agent-9 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  4m14s                  kubelet          Updated Node Allocatable limit across pods
  Normal  Starting                 4m8s                   kubelet          Starting kubelet.
  Normal  NodeAllocatableEnforced  4m8s                   kubelet          Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  4m8s                   kubelet          Node ubuntu-20-agent-9 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    4m8s                   kubelet          Node ubuntu-20-agent-9 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     4m8s                   kubelet          Node ubuntu-20-agent-9 status is now: NodeHasSufficientPID
  Normal  RegisteredNode           4m3s                   node-controller  Node ubuntu-20-agent-9 event: Registered Node ubuntu-20-agent-9 in Controller
==> dmesg <==
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff 8a 89 e3 60 02 36 08 06
[Dec 7 21:58] IPv4: martian source 10.244.0.1 from 10.244.0.38, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 d9 b0 85 a9 94 08 06
[Dec 7 22:00] IPv4: martian source 10.244.0.1 from 10.244.0.45, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 8a 07 8f 2e e1 08 06
[ +26.661440] IPv4: martian source 10.244.0.1 from 10.244.0.46, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff 2e e6 11 ed fc 6a 08 06
[Dec 7 22:01] IPv4: martian source 10.244.0.1 from 10.244.0.47, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e 31 9c 21 4c 5a 08 06
[ +25.753282] IPv4: martian source 10.244.0.1 from 10.244.0.50, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 79 0a 5f 29 95 08 06
[Dec 7 22:03] IPv4: martian source 10.244.0.1 from 10.244.0.56, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 fe aa b9 57 64 08 06
[Dec 7 22:04] IPv4: martian source 10.244.0.1 from 10.244.0.58, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 8e 3e f1 e2 11 d2 08 06
[ +0.001633] IPv4: martian source 10.244.0.1 from 10.244.0.57, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a 09 d2 b4 36 ba 08 06
[Dec 7 22:05] IPv4: martian source 10.244.0.1 from 10.244.0.59, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff 7e a9 b2 5b d8 87 08 06
[Dec 7 22:06] IPv4: martian source 10.244.0.1 from 10.244.0.60, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff ca 56 20 0a 82 55 08 06
[Dec 7 22:11] IPv4: martian source 10.244.0.1 from 10.244.0.61, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 1a ca 82 67 5d 58 08 06
[Dec 7 22:25] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 6f 4f b0 c4 b6 08 06
==> etcd [90207585b490] <==
{"level":"warn","ts":"2025-12-07T22:25:50.452806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41716","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:25:50.460408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41732","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:25:50.469830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41750","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:25:50.477464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41780","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:25:50.484940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41794","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:25:50.492874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41816","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:25:50.500545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41828","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:25:50.508367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41838","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:25:50.515456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41864","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:25:50.523604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41880","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:25:50.532045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41904","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:25:50.539618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41912","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:25:50.546019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41940","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:25:50.554172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41952","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:25:50.561539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41962","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:25:50.568876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41982","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:25:50.577404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42008","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:25:50.585548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42030","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:25:50.592144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42060","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:25:50.598726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42084","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:25:50.605320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42116","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:25:50.612918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42140","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:25:50.630005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42170","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:25:50.637531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42190","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-07T22:25:50.644931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42200","server-name":"","error":"EOF"}
==> kernel <==
22:30:01 up 1:12, 0 users, load average: 0.15, 0.39, 0.48
Linux ubuntu-20-agent-9 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kube-apiserver [32c7f93a6297] <==
I1207 22:25:51.196671 1 policy_source.go:240] refreshing policies
E1207 22:25:51.228830 1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
I1207 22:25:51.273992 1 controller.go:667] quota admission added evaluator for: namespaces
I1207 22:25:51.278939 1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
I1207 22:25:51.279178 1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
I1207 22:25:51.284699 1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
I1207 22:25:51.285628 1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
I1207 22:25:51.364862 1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
I1207 22:25:52.077037 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I1207 22:25:52.080808 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I1207 22:25:52.080830 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I1207 22:25:52.531408 1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1207 22:25:52.564413 1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1207 22:25:52.681771 1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
W1207 22:25:52.687619 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [10.154.0.4]
I1207 22:25:52.688888 1 controller.go:667] quota admission added evaluator for: endpoints
I1207 22:25:52.692928 1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1207 22:25:53.126381 1 controller.go:667] quota admission added evaluator for: serviceaccounts
I1207 22:25:53.497631 1 controller.go:667] quota admission added evaluator for: deployments.apps
I1207 22:25:53.508517 1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
I1207 22:25:53.519160 1 controller.go:667] quota admission added evaluator for: daemonsets.apps
I1207 22:25:58.831567 1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
I1207 22:25:58.979030 1 controller.go:667] quota admission added evaluator for: replicasets.apps
I1207 22:25:59.129059 1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
I1207 22:25:59.133484 1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
==> kube-controller-manager [ce198f0e6597] <==
I1207 22:25:58.102882 1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
I1207 22:25:58.102917 1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
I1207 22:25:58.102927 1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
I1207 22:25:58.103861 1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
I1207 22:25:58.109606 1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ubuntu-20-agent-9" podCIDRs=["10.244.0.0/24"]
I1207 22:25:58.124373 1 shared_informer.go:356] "Caches are synced" controller="disruption"
I1207 22:25:58.125623 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
I1207 22:25:58.125673 1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
I1207 22:25:58.125693 1 shared_informer.go:356] "Caches are synced" controller="PV protection"
I1207 22:25:58.125717 1 shared_informer.go:356] "Caches are synced" controller="TTL"
I1207 22:25:58.125785 1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
I1207 22:25:58.125789 1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
I1207 22:25:58.125958 1 shared_informer.go:356] "Caches are synced" controller="cronjob"
I1207 22:25:58.126007 1 shared_informer.go:356] "Caches are synced" controller="job"
I1207 22:25:58.126249 1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
I1207 22:25:58.126281 1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
I1207 22:25:58.127377 1 shared_informer.go:356] "Caches are synced" controller="stateful set"
I1207 22:25:58.127401 1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
I1207 22:25:58.127422 1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
I1207 22:25:58.129666 1 shared_informer.go:356] "Caches are synced" controller="attach detach"
I1207 22:25:58.130795 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1207 22:25:58.130801 1 shared_informer.go:356] "Caches are synced" controller="namespace"
I1207 22:25:58.137050 1 shared_informer.go:356] "Caches are synced" controller="expand"
I1207 22:25:58.142310 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1207 22:25:58.143358 1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
==> kube-proxy [6bb00206726a] <==
I1207 22:26:00.012322 1 server_linux.go:53] "Using iptables proxy"
I1207 22:26:00.077102 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1207 22:26:00.177191 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1207 22:26:00.177239 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["10.154.0.4"]
E1207 22:26:00.177358 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1207 22:26:00.200314 1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I1207 22:26:00.200367 1 server_linux.go:132] "Using iptables Proxier"
I1207 22:26:00.206207 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1207 22:26:00.206578 1 server.go:527] "Version info" version="v1.34.2"
I1207 22:26:00.206618 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1207 22:26:00.208254 1 config.go:309] "Starting node config controller"
I1207 22:26:00.208272 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1207 22:26:00.208281 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1207 22:26:00.208282 1 config.go:403] "Starting serviceCIDR config controller"
I1207 22:26:00.208307 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1207 22:26:00.208384 1 config.go:200] "Starting service config controller"
I1207 22:26:00.208392 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1207 22:26:00.208429 1 config.go:106] "Starting endpoint slice config controller"
I1207 22:26:00.208444 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1207 22:26:00.309295 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1207 22:26:00.309316 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1207 22:26:00.309290 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
==> kube-scheduler [bfd740348654] <==
E1207 22:25:51.124564 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1207 22:25:51.124731 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1207 22:25:51.124743 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1207 22:25:51.124797 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1207 22:25:51.124925 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1207 22:25:51.124964 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1207 22:25:51.125005 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1207 22:25:51.125040 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1207 22:25:51.125134 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1207 22:25:51.125220 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1207 22:25:51.125350 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1207 22:25:51.125408 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1207 22:25:51.125479 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1207 22:25:51.927759 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1207 22:25:51.960488 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1207 22:25:51.969515 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1207 22:25:51.995777 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1207 22:25:52.044408 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1207 22:25:52.154024 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1207 22:25:52.180059 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1207 22:25:52.200364 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1207 22:25:52.205471 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1207 22:25:52.252740 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1207 22:25:52.262872 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
I1207 22:25:54.621055 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Dec 07 22:25:58 ubuntu-20-agent-9 kubelet[149602]: E1207 22:25:58.996755 149602 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6fa0d934-a836-42d2-b765-5ec6a1604cb1-kube-api-access-98m7p podName:6fa0d934-a836-42d2-b765-5ec6a1604cb1 nodeName:}" failed. No retries permitted until 2025-12-07 22:25:59.496722355 +0000 UTC m=+6.105064583 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-98m7p" (UniqueName: "kubernetes.io/projected/6fa0d934-a836-42d2-b765-5ec6a1604cb1-kube-api-access-98m7p") pod "kube-proxy-9wczh" (UID: "6fa0d934-a836-42d2-b765-5ec6a1604cb1") : configmap "kube-root-ca.crt" not found
Dec 07 22:25:59 ubuntu-20-agent-9 kubelet[149602]: E1207 22:25:59.265771 149602 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[config-volume kube-api-access-rkm67], unattached volumes=[], failed to process volumes=[config-volume kube-api-access-rkm67]: context canceled" pod="kube-system/coredns-66bc5c9577-2tjgd" podUID="51678b09-97ed-4f1b-86a2-6bf589b0df9c"
Dec 07 22:25:59 ubuntu-20-agent-9 kubelet[149602]: I1207 22:25:59.289234 149602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9fvx\" (UniqueName: \"kubernetes.io/projected/1fca3b52-fea6-4632-a2dd-edabd44d6fa7-kube-api-access-l9fvx\") pod \"storage-provisioner\" (UID: \"1fca3b52-fea6-4632-a2dd-edabd44d6fa7\") " pod="kube-system/storage-provisioner"
Dec 07 22:25:59 ubuntu-20-agent-9 kubelet[149602]: I1207 22:25:59.289620 149602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18a87cdb-7c0e-44bc-a68d-97c9c261e65c-config-volume\") pod \"coredns-66bc5c9577-kt4qp\" (UID: \"18a87cdb-7c0e-44bc-a68d-97c9c261e65c\") " pod="kube-system/coredns-66bc5c9577-kt4qp"
Dec 07 22:25:59 ubuntu-20-agent-9 kubelet[149602]: I1207 22:25:59.289715 149602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-859td\" (UniqueName: \"kubernetes.io/projected/18a87cdb-7c0e-44bc-a68d-97c9c261e65c-kube-api-access-859td\") pod \"coredns-66bc5c9577-kt4qp\" (UID: \"18a87cdb-7c0e-44bc-a68d-97c9c261e65c\") " pod="kube-system/coredns-66bc5c9577-kt4qp"
Dec 07 22:25:59 ubuntu-20-agent-9 kubelet[149602]: I1207 22:25:59.289760 149602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51678b09-97ed-4f1b-86a2-6bf589b0df9c-config-volume\") pod \"coredns-66bc5c9577-2tjgd\" (UID: \"51678b09-97ed-4f1b-86a2-6bf589b0df9c\") " pod="kube-system/coredns-66bc5c9577-2tjgd"
Dec 07 22:25:59 ubuntu-20-agent-9 kubelet[149602]: I1207 22:25:59.289823 149602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkm67\" (UniqueName: \"kubernetes.io/projected/51678b09-97ed-4f1b-86a2-6bf589b0df9c-kube-api-access-rkm67\") pod \"coredns-66bc5c9577-2tjgd\" (UID: \"51678b09-97ed-4f1b-86a2-6bf589b0df9c\") " pod="kube-system/coredns-66bc5c9577-2tjgd"
Dec 07 22:25:59 ubuntu-20-agent-9 kubelet[149602]: I1207 22:25:59.289848 149602 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1fca3b52-fea6-4632-a2dd-edabd44d6fa7-tmp\") pod \"storage-provisioner\" (UID: \"1fca3b52-fea6-4632-a2dd-edabd44d6fa7\") " pod="kube-system/storage-provisioner"
Dec 07 22:25:59 ubuntu-20-agent-9 kubelet[149602]: I1207 22:25:59.685191 149602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e714ad346029c1fb4c2898fca1002dd1671bffc45c66ca48abbdf7004516d80e"
Dec 07 22:25:59 ubuntu-20-agent-9 kubelet[149602]: I1207 22:25:59.688609 149602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60fa2038780864de592ca140d8bf4a234d69504055474d12c0e55c6cd0c5c918"
Dec 07 22:25:59 ubuntu-20-agent-9 kubelet[149602]: I1207 22:25:59.794464 149602 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51678b09-97ed-4f1b-86a2-6bf589b0df9c-config-volume\") pod \"51678b09-97ed-4f1b-86a2-6bf589b0df9c\" (UID: \"51678b09-97ed-4f1b-86a2-6bf589b0df9c\") "
Dec 07 22:25:59 ubuntu-20-agent-9 kubelet[149602]: I1207 22:25:59.794519 149602 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkm67\" (UniqueName: \"kubernetes.io/projected/51678b09-97ed-4f1b-86a2-6bf589b0df9c-kube-api-access-rkm67\") pod \"51678b09-97ed-4f1b-86a2-6bf589b0df9c\" (UID: \"51678b09-97ed-4f1b-86a2-6bf589b0df9c\") "
Dec 07 22:25:59 ubuntu-20-agent-9 kubelet[149602]: I1207 22:25:59.794870 149602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51678b09-97ed-4f1b-86a2-6bf589b0df9c-config-volume" (OuterVolumeSpecName: "config-volume") pod "51678b09-97ed-4f1b-86a2-6bf589b0df9c" (UID: "51678b09-97ed-4f1b-86a2-6bf589b0df9c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 07 22:25:59 ubuntu-20-agent-9 kubelet[149602]: I1207 22:25:59.797340 149602 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51678b09-97ed-4f1b-86a2-6bf589b0df9c-kube-api-access-rkm67" (OuterVolumeSpecName: "kube-api-access-rkm67") pod "51678b09-97ed-4f1b-86a2-6bf589b0df9c" (UID: "51678b09-97ed-4f1b-86a2-6bf589b0df9c"). InnerVolumeSpecName "kube-api-access-rkm67". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 07 22:25:59 ubuntu-20-agent-9 kubelet[149602]: I1207 22:25:59.896748 149602 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rkm67\" (UniqueName: \"kubernetes.io/projected/51678b09-97ed-4f1b-86a2-6bf589b0df9c-kube-api-access-rkm67\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
Dec 07 22:25:59 ubuntu-20-agent-9 kubelet[149602]: I1207 22:25:59.896785 149602 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51678b09-97ed-4f1b-86a2-6bf589b0df9c-config-volume\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
Dec 07 22:26:00 ubuntu-20-agent-9 kubelet[149602]: I1207 22:26:00.725171 149602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-kt4qp" podStartSLOduration=1.725149659 podStartE2EDuration="1.725149659s" podCreationTimestamp="2025-12-07 22:25:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 22:26:00.724996292 +0000 UTC m=+7.333338527" watchObservedRunningTime="2025-12-07 22:26:00.725149659 +0000 UTC m=+7.333491893"
Dec 07 22:26:00 ubuntu-20-agent-9 kubelet[149602]: I1207 22:26:00.735112 149602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9wczh" podStartSLOduration=2.735081777 podStartE2EDuration="2.735081777s" podCreationTimestamp="2025-12-07 22:25:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 22:26:00.734462014 +0000 UTC m=+7.342804247" watchObservedRunningTime="2025-12-07 22:26:00.735081777 +0000 UTC m=+7.343424010"
Dec 07 22:26:01 ubuntu-20-agent-9 kubelet[149602]: I1207 22:26:01.482022 149602 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51678b09-97ed-4f1b-86a2-6bf589b0df9c" path="/var/lib/kubelet/pods/51678b09-97ed-4f1b-86a2-6bf589b0df9c/volumes"
Dec 07 22:26:01 ubuntu-20-agent-9 kubelet[149602]: I1207 22:26:01.741110 149602 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 07 22:26:01 ubuntu-20-agent-9 kubelet[149602]: I1207 22:26:01.750820 149602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.172381894 podStartE2EDuration="2.750799012s" podCreationTimestamp="2025-12-07 22:25:59 +0000 UTC" firstStartedPulling="2025-12-07 22:25:59.656872179 +0000 UTC m=+6.265214396" lastFinishedPulling="2025-12-07 22:26:01.235289286 +0000 UTC m=+7.843631514" observedRunningTime="2025-12-07 22:26:01.75065789 +0000 UTC m=+8.359000124" watchObservedRunningTime="2025-12-07 22:26:01.750799012 +0000 UTC m=+8.359141245"
Dec 07 22:26:03 ubuntu-20-agent-9 kubelet[149602]: I1207 22:26:03.938710 149602 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Dec 07 22:26:03 ubuntu-20-agent-9 kubelet[149602]: I1207 22:26:03.939714 149602 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Dec 07 22:26:05 ubuntu-20-agent-9 kubelet[149602]: I1207 22:26:05.723075 149602 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 07 22:29:41 ubuntu-20-agent-9 kubelet[149602]: I1207 22:29:41.863308 149602 scope.go:117] "RemoveContainer" containerID="2a4d7cbfdb6e8ea660fe5abc412bcb872deed7009db46558534ff0f606cd7f18"
==> storage-provisioner [01e4c2dcd357] <==
W1207 22:29:36.222972 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:29:38.226296 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:29:38.230003 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:29:40.233508 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:29:40.238662 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:29:42.241818 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:29:42.245652 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:29:44.248940 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:29:44.254025 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:29:46.257080 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:29:46.261048 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:29:48.264443 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:29:48.269515 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:29:50.272835 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:29:50.276761 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:29:52.280218 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:29:52.285217 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:29:54.288545 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:29:54.292462 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:29:56.295464 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:29:56.300379 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:29:58.303543 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:29:58.306972 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:30:00.310054 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1207 22:30:00.313887 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:269: (dbg) Run: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestOffline FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.69510272s)
--- FAIL: TestOffline (275.77s)