=== RUN TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run: out/minikube-linux-amd64 node add -p multinode-439307 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-439307 -v=5 --alsologtostderr: exit status 80 (10.279892572s)
-- stdout --
* Adding node m03 to cluster multinode-439307 as [worker]
* Starting "multinode-439307-m03" worker node in "multinode-439307" cluster
* Pulling base image v0.0.48-1759745255-21703 ...
* Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
-- /stdout --
** stderr **
I1008 14:24:31.503412 661380 out.go:360] Setting OutFile to fd 1 ...
I1008 14:24:31.503718 661380 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 14:24:31.503726 661380 out.go:374] Setting ErrFile to fd 2...
I1008 14:24:31.503731 661380 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 14:24:31.503960 661380 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-513010/.minikube/bin
I1008 14:24:31.504316 661380 mustload.go:65] Loading cluster: multinode-439307
I1008 14:24:31.504705 661380 config.go:182] Loaded profile config "multinode-439307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1008 14:24:31.505135 661380 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Status}}
I1008 14:24:31.522203 661380 host.go:66] Checking if "multinode-439307" exists ...
I1008 14:24:31.522474 661380 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1008 14:24:31.580691 661380 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-08 14:24:31.570749338 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1008 14:24:31.580805 661380 api_server.go:166] Checking apiserver status ...
I1008 14:24:31.580849 661380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1008 14:24:31.580888 661380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
I1008 14:24:31.598058 661380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
I1008 14:24:31.705848 661380 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1367/cgroup
W1008 14:24:31.714292 661380 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1367/cgroup: Process exited with status 1
stdout:
stderr:
I1008 14:24:31.714345 661380 ssh_runner.go:195] Run: ls
I1008 14:24:31.718176 661380 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I1008 14:24:31.723066 661380 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
ok
I1008 14:24:31.725056 661380 out.go:179] * Adding node m03 to cluster multinode-439307 as [worker]
I1008 14:24:31.726619 661380 config.go:182] Loaded profile config "multinode-439307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1008 14:24:31.726784 661380 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/config.json ...
I1008 14:24:31.728528 661380 out.go:179] * Starting "multinode-439307-m03" worker node in "multinode-439307" cluster
I1008 14:24:31.729540 661380 cache.go:123] Beginning downloading kic base image for docker with containerd
I1008 14:24:31.730718 661380 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
I1008 14:24:31.732198 661380 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1008 14:24:31.732231 661380 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
I1008 14:24:31.732241 661380 cache.go:58] Caching tarball of preloaded images
I1008 14:24:31.732289 661380 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
I1008 14:24:31.732319 661380 preload.go:233] Found /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I1008 14:24:31.732327 661380 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
I1008 14:24:31.732427 661380 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/config.json ...
I1008 14:24:31.753268 661380 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
I1008 14:24:31.753290 661380 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
I1008 14:24:31.753310 661380 cache.go:232] Successfully downloaded all kic artifacts
I1008 14:24:31.753345 661380 start.go:360] acquireMachinesLock for multinode-439307-m03: {Name:mkc57b0699e109bd3e6a21447d35a5a5dbc2c025 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1008 14:24:31.753459 661380 start.go:364] duration metric: took 89.211µs to acquireMachinesLock for "multinode-439307-m03"
I1008 14:24:31.753489 661380 start.go:93] Provisioning new machine with config: &{Name:multinode-439307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
I1008 14:24:31.753626 661380 start.go:125] createHost starting for "m03" (driver="docker")
I1008 14:24:31.755481 661380 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I1008 14:24:31.755601 661380 start.go:159] libmachine.API.Create for "multinode-439307" (driver="docker")
I1008 14:24:31.755633 661380 client.go:168] LocalClient.Create starting
I1008 14:24:31.755724 661380 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem
I1008 14:24:31.755768 661380 main.go:141] libmachine: Decoding PEM data...
I1008 14:24:31.755790 661380 main.go:141] libmachine: Parsing certificate...
I1008 14:24:31.755858 661380 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem
I1008 14:24:31.755887 661380 main.go:141] libmachine: Decoding PEM data...
I1008 14:24:31.755904 661380 main.go:141] libmachine: Parsing certificate...
I1008 14:24:31.756194 661380 cli_runner.go:164] Run: docker network inspect multinode-439307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1008 14:24:31.773512 661380 network_create.go:77] Found existing network {name:multinode-439307 subnet:0xc00150bad0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 67 1] mtu:1500}
I1008 14:24:31.773574 661380 kic.go:121] calculated static IP "192.168.67.4" for the "multinode-439307-m03" container
I1008 14:24:31.773658 661380 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1008 14:24:31.791196 661380 cli_runner.go:164] Run: docker volume create multinode-439307-m03 --label name.minikube.sigs.k8s.io=multinode-439307-m03 --label created_by.minikube.sigs.k8s.io=true
I1008 14:24:31.808906 661380 oci.go:103] Successfully created a docker volume multinode-439307-m03
I1008 14:24:31.809067 661380 cli_runner.go:164] Run: docker run --rm --name multinode-439307-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-439307-m03 --entrypoint /usr/bin/test -v multinode-439307-m03:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
I1008 14:24:32.191922 661380 oci.go:107] Successfully prepared a docker volume multinode-439307-m03
I1008 14:24:32.191990 661380 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1008 14:24:32.192021 661380 kic.go:194] Starting extracting preloaded images to volume ...
I1008 14:24:32.192114 661380 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-439307-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
I1008 14:24:36.576077 661380 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-439307-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.383913167s)
I1008 14:24:36.576107 661380 kic.go:203] duration metric: took 4.384083442s to extract preloaded images to volume ...
W1008 14:24:36.576193 661380 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W1008 14:24:36.576242 661380 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I1008 14:24:36.576290 661380 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1008 14:24:36.632796 661380 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-439307-m03 --name multinode-439307-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-439307-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-439307-m03 --network multinode-439307 --ip 192.168.67.4 --volume multinode-439307-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
I1008 14:24:36.911015 661380 cli_runner.go:164] Run: docker container inspect multinode-439307-m03 --format={{.State.Running}}
I1008 14:24:36.929548 661380 cli_runner.go:164] Run: docker container inspect multinode-439307-m03 --format={{.State.Status}}
I1008 14:24:36.947649 661380 cli_runner.go:164] Run: docker exec multinode-439307-m03 stat /var/lib/dpkg/alternatives/iptables
I1008 14:24:36.992847 661380 oci.go:144] the created container "multinode-439307-m03" has a running status.
I1008 14:24:36.992885 661380 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m03/id_rsa...
I1008 14:24:37.503926 661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I1008 14:24:37.503988 661380 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1008 14:24:37.529265 661380 cli_runner.go:164] Run: docker container inspect multinode-439307-m03 --format={{.State.Status}}
I1008 14:24:37.547284 661380 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1008 14:24:37.547312 661380 kic_runner.go:114] Args: [docker exec --privileged multinode-439307-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
I1008 14:24:37.593870 661380 cli_runner.go:164] Run: docker container inspect multinode-439307-m03 --format={{.State.Status}}
I1008 14:24:37.612364 661380 machine.go:93] provisionDockerMachine start ...
I1008 14:24:37.612466 661380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m03
I1008 14:24:37.631009 661380 main.go:141] libmachine: Using SSH client type: native
I1008 14:24:37.631268 661380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 33316 <nil> <nil>}
I1008 14:24:37.631281 661380 main.go:141] libmachine: About to run SSH command:
hostname
I1008 14:24:37.778664 661380 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-439307-m03
I1008 14:24:37.778691 661380 ubuntu.go:182] provisioning hostname "multinode-439307-m03"
I1008 14:24:37.778762 661380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m03
I1008 14:24:37.797218 661380 main.go:141] libmachine: Using SSH client type: native
I1008 14:24:37.797493 661380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 33316 <nil> <nil>}
I1008 14:24:37.797515 661380 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-439307-m03 && echo "multinode-439307-m03" | sudo tee /etc/hostname
I1008 14:24:37.954008 661380 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-439307-m03
I1008 14:24:37.954092 661380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m03
I1008 14:24:37.971585 661380 main.go:141] libmachine: Using SSH client type: native
I1008 14:24:37.971806 661380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 33316 <nil> <nil>}
I1008 14:24:37.971830 661380 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-439307-m03' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-439307-m03/g' /etc/hosts;
else
echo '127.0.1.1 multinode-439307-m03' | sudo tee -a /etc/hosts;
fi
fi
I1008 14:24:38.117676 661380 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1008 14:24:38.117707 661380 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-513010/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-513010/.minikube}
I1008 14:24:38.117745 661380 ubuntu.go:190] setting up certificates
I1008 14:24:38.117759 661380 provision.go:84] configureAuth start
I1008 14:24:38.117820 661380 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-439307-m03
I1008 14:24:38.135537 661380 provision.go:143] copyHostCerts
I1008 14:24:38.135581 661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem
I1008 14:24:38.135617 661380 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem, removing ...
I1008 14:24:38.135641 661380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem
I1008 14:24:38.135720 661380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem (1078 bytes)
I1008 14:24:38.135837 661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem
I1008 14:24:38.135864 661380 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem, removing ...
I1008 14:24:38.135872 661380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem
I1008 14:24:38.135917 661380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem (1123 bytes)
I1008 14:24:38.136032 661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem
I1008 14:24:38.136058 661380 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem, removing ...
I1008 14:24:38.136068 661380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem
I1008 14:24:38.136115 661380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem (1675 bytes)
I1008 14:24:38.136204 661380 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca-key.pem org=jenkins.multinode-439307-m03 san=[127.0.0.1 192.168.67.4 localhost minikube multinode-439307-m03]
I1008 14:24:38.432676 661380 provision.go:177] copyRemoteCerts
I1008 14:24:38.432761 661380 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1008 14:24:38.432834 661380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m03
I1008 14:24:38.450974 661380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33316 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m03/id_rsa Username:docker}
I1008 14:24:38.554844 661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1008 14:24:38.554934 661380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1008 14:24:38.576103 661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1008 14:24:38.576182 661380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1008 14:24:38.594092 661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem -> /etc/docker/server.pem
I1008 14:24:38.594205 661380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
I1008 14:24:38.612932 661380 provision.go:87] duration metric: took 495.153996ms to configureAuth
I1008 14:24:38.612966 661380 ubuntu.go:206] setting minikube options for container-runtime
I1008 14:24:38.613233 661380 config.go:182] Loaded profile config "multinode-439307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1008 14:24:38.613250 661380 machine.go:96] duration metric: took 1.000862975s to provisionDockerMachine
I1008 14:24:38.613258 661380 client.go:171] duration metric: took 6.857615152s to LocalClient.Create
I1008 14:24:38.613280 661380 start.go:167] duration metric: took 6.857680336s to libmachine.API.Create "multinode-439307"
I1008 14:24:38.613294 661380 start.go:293] postStartSetup for "multinode-439307-m03" (driver="docker")
I1008 14:24:38.613304 661380 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1008 14:24:38.613354 661380 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1008 14:24:38.613392 661380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m03
I1008 14:24:38.631341 661380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33316 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m03/id_rsa Username:docker}
I1008 14:24:38.740533 661380 ssh_runner.go:195] Run: cat /etc/os-release
I1008 14:24:38.744305 661380 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1008 14:24:38.744331 661380 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1008 14:24:38.744343 661380 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-513010/.minikube/addons for local assets ...
I1008 14:24:38.744392 661380 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-513010/.minikube/files for local assets ...
I1008 14:24:38.744482 661380 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem -> 5167872.pem in /etc/ssl/certs
I1008 14:24:38.744495 661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem -> /etc/ssl/certs/5167872.pem
I1008 14:24:38.744596 661380 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1008 14:24:38.752519 661380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem --> /etc/ssl/certs/5167872.pem (1708 bytes)
I1008 14:24:38.773530 661380 start.go:296] duration metric: took 160.218ms for postStartSetup
I1008 14:24:38.774021 661380 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-439307-m03
I1008 14:24:38.790724 661380 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/config.json ...
I1008 14:24:38.791012 661380 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1008 14:24:38.791058 661380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m03
I1008 14:24:38.809074 661380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33316 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m03/id_rsa Username:docker}
I1008 14:24:38.910315 661380 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1008 14:24:38.915033 661380 start.go:128] duration metric: took 7.161389622s to createHost
I1008 14:24:38.915060 661380 start.go:83] releasing machines lock for "multinode-439307-m03", held for 7.161587943s
I1008 14:24:38.915141 661380 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-439307-m03
I1008 14:24:38.933280 661380 ssh_runner.go:195] Run: systemctl --version
I1008 14:24:38.933326 661380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m03
I1008 14:24:38.933355 661380 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1008 14:24:38.933418 661380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m03
I1008 14:24:38.952475 661380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33316 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m03/id_rsa Username:docker}
I1008 14:24:38.952825 661380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33316 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m03/id_rsa Username:docker}
I1008 14:24:39.055164 661380 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1008 14:24:39.104663 661380 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1008 14:24:39.104730 661380 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1008 14:24:39.131676 661380 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1008 14:24:39.131706 661380 start.go:495] detecting cgroup driver to use...
I1008 14:24:39.131742 661380 detect.go:190] detected "systemd" cgroup driver on host os
I1008 14:24:39.131808 661380 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1008 14:24:39.147629 661380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1008 14:24:39.161143 661380 docker.go:218] disabling cri-docker service (if available) ...
I1008 14:24:39.161203 661380 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1008 14:24:39.178663 661380 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1008 14:24:39.196725 661380 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1008 14:24:39.278030 661380 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1008 14:24:39.365597 661380 docker.go:234] disabling docker service ...
I1008 14:24:39.365664 661380 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1008 14:24:39.385105 661380 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1008 14:24:39.397955 661380 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1008 14:24:39.481537 661380 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1008 14:24:39.562465 661380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1008 14:24:39.575732 661380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1008 14:24:39.591093 661380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1008 14:24:39.602384 661380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1008 14:24:39.612292 661380 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
I1008 14:24:39.612374 661380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1008 14:24:39.622133 661380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1008 14:24:39.631552 661380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1008 14:24:39.640967 661380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1008 14:24:39.650649 661380 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1008 14:24:39.660016 661380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1008 14:24:39.669597 661380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1008 14:24:39.679285 661380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1008 14:24:39.688793 661380 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1008 14:24:39.696882 661380 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1008 14:24:39.704845 661380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1008 14:24:39.783731 661380 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1008 14:24:39.892869 661380 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I1008 14:24:39.892945 661380 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1008 14:24:39.897222 661380 start.go:563] Will wait 60s for crictl version
I1008 14:24:39.897273 661380 ssh_runner.go:195] Run: which crictl
I1008 14:24:39.900992 661380 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1008 14:24:39.925343 661380 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.28
RuntimeApiVersion: v1
I1008 14:24:39.925417 661380 ssh_runner.go:195] Run: containerd --version
I1008 14:24:39.951454 661380 ssh_runner.go:195] Run: containerd --version
I1008 14:24:39.978998 661380 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
I1008 14:24:39.980242 661380 cli_runner.go:164] Run: docker network inspect multinode-439307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1008 14:24:39.998071 661380 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
I1008 14:24:40.002653 661380 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1008 14:24:40.013276 661380 mustload.go:65] Loading cluster: multinode-439307
I1008 14:24:40.013523 661380 config.go:182] Loaded profile config "multinode-439307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1008 14:24:40.013742 661380 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Status}}
I1008 14:24:40.030860 661380 host.go:66] Checking if "multinode-439307" exists ...
I1008 14:24:40.031165 661380 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307 for IP: 192.168.67.4
I1008 14:24:40.031179 661380 certs.go:195] generating shared ca certs ...
I1008 14:24:40.031197 661380 certs.go:227] acquiring lock for ca certs: {Name:mk57aa9b2383fcc0908491da1ce926c707ff69a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1008 14:24:40.031364 661380 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.key
I1008 14:24:40.031427 661380 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.key
I1008 14:24:40.031445 661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1008 14:24:40.031467 661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1008 14:24:40.031485 661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1008 14:24:40.031502 661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1008 14:24:40.031574 661380 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787.pem (1338 bytes)
W1008 14:24:40.031607 661380 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787_empty.pem, impossibly tiny 0 bytes
I1008 14:24:40.031617 661380 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca-key.pem (1675 bytes)
I1008 14:24:40.031646 661380 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem (1078 bytes)
I1008 14:24:40.031671 661380 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem (1123 bytes)
I1008 14:24:40.031694 661380 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem (1675 bytes)
I1008 14:24:40.031736 661380 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem (1708 bytes)
I1008 14:24:40.031774 661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1008 14:24:40.031787 661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787.pem -> /usr/share/ca-certificates/516787.pem
I1008 14:24:40.031799 661380 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem -> /usr/share/ca-certificates/5167872.pem
I1008 14:24:40.031819 661380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1008 14:24:40.051762 661380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1008 14:24:40.070015 661380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1008 14:24:40.088946 661380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1008 14:24:40.106920 661380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1008 14:24:40.127565 661380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787.pem --> /usr/share/ca-certificates/516787.pem (1338 bytes)
I1008 14:24:40.145655 661380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem --> /usr/share/ca-certificates/5167872.pem (1708 bytes)
I1008 14:24:40.163310 661380 ssh_runner.go:195] Run: openssl version
I1008 14:24:40.170037 661380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1008 14:24:40.178821 661380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1008 14:24:40.183226 661380 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 8 14:03 /usr/share/ca-certificates/minikubeCA.pem
I1008 14:24:40.183309 661380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1008 14:24:40.219078 661380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1008 14:24:40.228740 661380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516787.pem && ln -fs /usr/share/ca-certificates/516787.pem /etc/ssl/certs/516787.pem"
I1008 14:24:40.237915 661380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516787.pem
I1008 14:24:40.242516 661380 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 8 14:09 /usr/share/ca-certificates/516787.pem
I1008 14:24:40.242603 661380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516787.pem
I1008 14:24:40.278280 661380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516787.pem /etc/ssl/certs/51391683.0"
I1008 14:24:40.288345 661380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5167872.pem && ln -fs /usr/share/ca-certificates/5167872.pem /etc/ssl/certs/5167872.pem"
I1008 14:24:40.297682 661380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5167872.pem
I1008 14:24:40.301713 661380 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 8 14:09 /usr/share/ca-certificates/5167872.pem
I1008 14:24:40.301777 661380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5167872.pem
I1008 14:24:40.339504 661380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5167872.pem /etc/ssl/certs/3ec20f2e.0"
I1008 14:24:40.349876 661380 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1008 14:24:40.354004 661380 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1008 14:24:40.354048 661380 kubeadm.go:934] updating node {m03 192.168.67.4 0 v1.34.1 false true} ...
I1008 14:24:40.354193 661380 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-439307-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.4
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1008 14:24:40.354254 661380 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1008 14:24:40.362626 661380 binaries.go:44] Found k8s binaries, skipping transfer
I1008 14:24:40.362688 661380 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
I1008 14:24:40.370788 661380 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
I1008 14:24:40.383722 661380 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1008 14:24:40.399089 661380 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I1008 14:24:40.402842 661380 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1008 14:24:40.413206 661380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1008 14:24:40.491846 661380 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1008 14:24:40.516623 661380 host.go:66] Checking if "multinode-439307" exists ...
I1008 14:24:40.516905 661380 start.go:317] joinCluster: &{Name:multinode-439307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:false Worker:true} {Name:m03 IP:192.168.67.4 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1008 14:24:40.517090 661380 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm token create --print-join-command --ttl=0"
I1008 14:24:40.517140 661380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
I1008 14:24:40.535598 661380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
I1008 14:24:40.687024 661380 start.go:343] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.67.4 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
I1008 14:24:40.687109 661380 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8n8r8s.dukoa1mhefvohilp --discovery-token-ca-cert-hash sha256:29d08006b7495c78b5f27aaa9701a82f373226e18456de05d156e89bccfbd06f --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=multinode-439307-m03"
I1008 14:24:41.456440 661380 ssh_runner.go:195] Run: sudo /bin/bash -c "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"
I1008 14:24:41.657201 661380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-439307-m03 minikube.k8s.io/updated_at=2025_10_08T14_24_41_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555 minikube.k8s.io/name=multinode-439307 minikube.k8s.io/primary=false
I1008 14:24:41.724463 661380 start.go:319] duration metric: took 1.207555044s to joinCluster
I1008 14:24:41.726377 661380 out.go:203]
W1008 14:24:41.727647 661380 out.go:285] X Exiting due to GUEST_NODE_ADD: failed to add node: join node to cluster: error applying worker node "m03" label: apply node labels: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-439307-m03 minikube.k8s.io/updated_at=2025_10_08T14_24_41_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555 minikube.k8s.io/name=multinode-439307 minikube.k8s.io/primary=false: Process exited with status 1
stdout:
stderr:
Error from server (NotFound): nodes "multinode-439307-m03" not found
W1008 14:24:41.727666 661380 out.go:285] *
W1008 14:24:41.732182 661380 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1008 14:24:41.733477 661380 out.go:203]
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-linux-amd64 node add -p multinode-439307 -v=5 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestMultiNode/serial/AddNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======> post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:239: (dbg) Run: docker inspect multinode-439307
helpers_test.go:243: (dbg) docker inspect multinode-439307:
-- stdout --
[
{
"Id": "ba6a97f7663632c0b65e5a11595af47165321c8afb3b62eedf06b3d466307bba",
"Created": "2025-10-08T14:23:23.101908381Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 655454,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-10-08T14:23:23.137079331Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
"ResolvConfPath": "/var/lib/docker/containers/ba6a97f7663632c0b65e5a11595af47165321c8afb3b62eedf06b3d466307bba/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/ba6a97f7663632c0b65e5a11595af47165321c8afb3b62eedf06b3d466307bba/hostname",
"HostsPath": "/var/lib/docker/containers/ba6a97f7663632c0b65e5a11595af47165321c8afb3b62eedf06b3d466307bba/hosts",
"LogPath": "/var/lib/docker/containers/ba6a97f7663632c0b65e5a11595af47165321c8afb3b62eedf06b3d466307bba/ba6a97f7663632c0b65e5a11595af47165321c8afb3b62eedf06b3d466307bba-json.log",
"Name": "/multinode-439307",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"multinode-439307:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {
"max-size": "100m"
}
},
"NetworkMode": "multinode-439307",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "private",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 3221225472,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 6442450944,
"MemorySwappiness": null,
"OomKillDisable": null,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "ba6a97f7663632c0b65e5a11595af47165321c8afb3b62eedf06b3d466307bba",
"LowerDir": "/var/lib/docker/overlay2/eef9b106872faf72f2593d957c2542a8de83c33b483a2720ec6b85b17e327301-init/diff:/var/lib/docker/overlay2/97746716e496f19c0b3fdecffe1f175c04923b8f3f05ea2a8a25747dfddb9999/diff",
"MergedDir": "/var/lib/docker/overlay2/eef9b106872faf72f2593d957c2542a8de83c33b483a2720ec6b85b17e327301/merged",
"UpperDir": "/var/lib/docker/overlay2/eef9b106872faf72f2593d957c2542a8de83c33b483a2720ec6b85b17e327301/diff",
"WorkDir": "/var/lib/docker/overlay2/eef9b106872faf72f2593d957c2542a8de83c33b483a2720ec6b85b17e327301/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "multinode-439307",
"Source": "/var/lib/docker/volumes/multinode-439307/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "multinode-439307",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "multinode-439307",
"name.minikube.sigs.k8s.io": "multinode-439307",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "dd4a1327be75cbe250d2a23b2c88f13f060fa136f90eabee1eecd426d6567242",
"SandboxKey": "/var/run/docker/netns/dd4a1327be75",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33306"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33307"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33310"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33308"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33309"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"multinode-439307": {
"IPAMConfig": {
"IPv4Address": "192.168.67.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "ae:be:98:9b:84:54",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "7e4823570a3f40e014e3b0688e11409f133ed3676e15bbaea99f537a7b7c50d6",
"EndpointID": "60d8d5339fd7e699ccfe64b7708f7ac1dbc1925b92a76d8d9fc8cbcb32a7d344",
"Gateway": "192.168.67.1",
"IPAddress": "192.168.67.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"multinode-439307",
"ba6a97f76636"
]
}
}
}
}
]
-- /stdout --
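The host-side port mappings recorded in NetworkSettings.Ports above can be read back directly from the daemon; a minimal sketch, assuming the container name from this run:
  $ docker port multinode-439307
  # or pull a single mapping with the same Go template the harness itself uses:
  $ docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' multinode-439307
  33309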
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-439307 -n multinode-439307
helpers_test.go:252: <<< TestMultiNode/serial/AddNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestMultiNode/serial/AddNode]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p multinode-439307 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p multinode-439307 logs -n 25: (1.038227857s)
helpers_test.go:260: TestMultiNode/serial/AddNode logs:
-- stdout --
==> Audit <==
┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ mount-start-2-801712 ssh -- ls /minikube-host │ mount-start-2-801712 │ jenkins │ v1.37.0 │ 08 Oct 25 14:23 UTC │ 08 Oct 25 14:23 UTC │
│ delete │ -p mount-start-1-785074 --alsologtostderr -v=5 │ mount-start-1-785074 │ jenkins │ v1.37.0 │ 08 Oct 25 14:23 UTC │ 08 Oct 25 14:23 UTC │
│ ssh │ mount-start-2-801712 ssh -- ls /minikube-host │ mount-start-2-801712 │ jenkins │ v1.37.0 │ 08 Oct 25 14:23 UTC │ 08 Oct 25 14:23 UTC │
│ stop │ -p mount-start-2-801712 │ mount-start-2-801712 │ jenkins │ v1.37.0 │ 08 Oct 25 14:23 UTC │ 08 Oct 25 14:23 UTC │
│ start │ -p mount-start-2-801712 │ mount-start-2-801712 │ jenkins │ v1.37.0 │ 08 Oct 25 14:23 UTC │ 08 Oct 25 14:23 UTC │
│ ssh │ mount-start-2-801712 ssh -- ls /minikube-host │ mount-start-2-801712 │ jenkins │ v1.37.0 │ 08 Oct 25 14:23 UTC │ 08 Oct 25 14:23 UTC │
│ delete │ -p mount-start-2-801712 │ mount-start-2-801712 │ jenkins │ v1.37.0 │ 08 Oct 25 14:23 UTC │ 08 Oct 25 14:23 UTC │
│ delete │ -p mount-start-1-785074 │ mount-start-1-785074 │ jenkins │ v1.37.0 │ 08 Oct 25 14:23 UTC │ 08 Oct 25 14:23 UTC │
│ start │ -p multinode-439307 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker --container-runtime=containerd │ multinode-439307 │ jenkins │ v1.37.0 │ 08 Oct 25 14:23 UTC │ 08 Oct 25 14:24 UTC │
│ kubectl │ -p multinode-439307 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml │ multinode-439307 │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
│ kubectl │ -p multinode-439307 -- rollout status deployment/busybox │ multinode-439307 │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
│ kubectl │ -p multinode-439307 -- get pods -o jsonpath='{.items[*].status.podIP}' │ multinode-439307 │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
│ kubectl │ -p multinode-439307 -- get pods -o jsonpath='{.items[*].metadata.name}' │ multinode-439307 │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-9qspn -- nslookup kubernetes.io │ multinode-439307 │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-n6rvn -- nslookup kubernetes.io │ multinode-439307 │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-9qspn -- nslookup kubernetes.default │ multinode-439307 │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-n6rvn -- nslookup kubernetes.default │ multinode-439307 │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-9qspn -- nslookup kubernetes.default.svc.cluster.local │ multinode-439307 │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-n6rvn -- nslookup kubernetes.default.svc.cluster.local │ multinode-439307 │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
│ kubectl │ -p multinode-439307 -- get pods -o jsonpath='{.items[*].metadata.name}' │ multinode-439307 │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-9qspn -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ multinode-439307 │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-9qspn -- sh -c ping -c 1 192.168.67.1 │ multinode-439307 │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-n6rvn -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ multinode-439307 │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
│ kubectl │ -p multinode-439307 -- exec busybox-7b57f96db7-n6rvn -- sh -c ping -c 1 192.168.67.1 │ multinode-439307 │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ 08 Oct 25 14:24 UTC │
│ node │ add -p multinode-439307 -v=5 --alsologtostderr │ multinode-439307 │ jenkins │ v1.37.0 │ 08 Oct 25 14:24 UTC │ │
└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
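The last Audit row (no END TIME) is the failing step; it can be replayed by hand with the same binary and profile recorded throughout this run:
  $ out/minikube-linux-amd64 node add -p multinode-439307 -v=5 --alsologtostderr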
==> Last Start <==
Log file created at: 2025/10/08 14:23:17
Running on machine: ubuntu-20-agent-14
Binary: Built with gc go1.24.6 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1008 14:23:17.956987 654880 out.go:360] Setting OutFile to fd 1 ...
I1008 14:23:17.957267 654880 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 14:23:17.957278 654880 out.go:374] Setting ErrFile to fd 2...
I1008 14:23:17.957285 654880 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 14:23:17.957560 654880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-513010/.minikube/bin
I1008 14:23:17.958095 654880 out.go:368] Setting JSON to false
I1008 14:23:17.959069 654880 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7547,"bootTime":1759925851,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1008 14:23:17.959183 654880 start.go:141] virtualization: kvm guest
I1008 14:23:17.961334 654880 out.go:179] * [multinode-439307] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1008 14:23:17.962856 654880 out.go:179] - MINIKUBE_LOCATION=21681
I1008 14:23:17.962854 654880 notify.go:220] Checking for updates...
I1008 14:23:17.966278 654880 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1008 14:23:17.967770 654880 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21681-513010/kubeconfig
I1008 14:23:17.969198 654880 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-513010/.minikube
I1008 14:23:17.970595 654880 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1008 14:23:17.971850 654880 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1008 14:23:17.973258 654880 driver.go:421] Setting default libvirt URI to qemu:///system
I1008 14:23:17.996300 654880 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
I1008 14:23:17.996406 654880 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1008 14:23:18.050277 654880 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 14:23:18.040372301 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1008 14:23:18.050390 654880 docker.go:318] overlay module found
I1008 14:23:18.052374 654880 out.go:179] * Using the docker driver based on user configuration
I1008 14:23:18.054067 654880 start.go:305] selected driver: docker
I1008 14:23:18.054089 654880 start.go:925] validating driver "docker" against <nil>
I1008 14:23:18.054101 654880 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1008 14:23:18.054660 654880 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1008 14:23:18.107655 654880 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-08 14:23:18.098187471 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1008 14:23:18.107832 654880 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1008 14:23:18.108067 654880 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1008 14:23:18.109831 654880 out.go:179] * Using Docker driver with root privileges
I1008 14:23:18.111024 654880 cni.go:84] Creating CNI manager for ""
I1008 14:23:18.111088 654880 cni.go:136] multinode detected (0 nodes found), recommending kindnet
I1008 14:23:18.111100 654880 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I1008 14:23:18.111162 654880 start.go:349] cluster config:
{Name:multinode-439307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1008 14:23:18.112399 654880 out.go:179] * Starting "multinode-439307" primary control-plane node in "multinode-439307" cluster
I1008 14:23:18.113554 654880 cache.go:123] Beginning downloading kic base image for docker with containerd
I1008 14:23:18.114910 654880 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
I1008 14:23:18.116063 654880 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1008 14:23:18.116103 654880 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
I1008 14:23:18.116106 654880 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
I1008 14:23:18.116207 654880 cache.go:58] Caching tarball of preloaded images
I1008 14:23:18.116291 654880 preload.go:233] Found /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I1008 14:23:18.116302 654880 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
I1008 14:23:18.116625 654880 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/config.json ...
I1008 14:23:18.116652 654880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/config.json: {Name:mk22bd6f1fa53f8e3127efb61d08a257a62e2626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1008 14:23:18.136591 654880 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
I1008 14:23:18.136639 654880 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
I1008 14:23:18.136657 654880 cache.go:232] Successfully downloaded all kic artifacts
I1008 14:23:18.136684 654880 start.go:360] acquireMachinesLock for multinode-439307: {Name:mkf4360b9146660aeff5a4ae109e04568869fc59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1008 14:23:18.136783 654880 start.go:364] duration metric: took 81.212µs to acquireMachinesLock for "multinode-439307"
I1008 14:23:18.136807 654880 start.go:93] Provisioning new machine with config: &{Name:multinode-439307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1008 14:23:18.136878 654880 start.go:125] createHost starting for "" (driver="docker")
I1008 14:23:18.138834 654880 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I1008 14:23:18.139088 654880 start.go:159] libmachine.API.Create for "multinode-439307" (driver="docker")
I1008 14:23:18.139120 654880 client.go:168] LocalClient.Create starting
I1008 14:23:18.139174 654880 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem
I1008 14:23:18.139205 654880 main.go:141] libmachine: Decoding PEM data...
I1008 14:23:18.139219 654880 main.go:141] libmachine: Parsing certificate...
I1008 14:23:18.139269 654880 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem
I1008 14:23:18.139287 654880 main.go:141] libmachine: Decoding PEM data...
I1008 14:23:18.139297 654880 main.go:141] libmachine: Parsing certificate...
I1008 14:23:18.139588 654880 cli_runner.go:164] Run: docker network inspect multinode-439307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1008 14:23:18.155901 654880 cli_runner.go:211] docker network inspect multinode-439307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1008 14:23:18.155965 654880 network_create.go:284] running [docker network inspect multinode-439307] to gather additional debugging logs...
I1008 14:23:18.156005 654880 cli_runner.go:164] Run: docker network inspect multinode-439307
W1008 14:23:18.172653 654880 cli_runner.go:211] docker network inspect multinode-439307 returned with exit code 1
I1008 14:23:18.172693 654880 network_create.go:287] error running [docker network inspect multinode-439307]: docker network inspect multinode-439307: exit status 1
stdout:
[]
stderr:
Error response from daemon: network multinode-439307 not found
I1008 14:23:18.172713 654880 network_create.go:289] output of [docker network inspect multinode-439307]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network multinode-439307 not found
** /stderr **
I1008 14:23:18.172884 654880 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1008 14:23:18.189934 654880 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-579739baec73 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:69:9e:8b:7e:c1} reservation:<nil>}
I1008 14:23:18.190282 654880 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-de056d86a4f7 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e2:00:90:f6:d9:cb} reservation:<nil>}
I1008 14:23:18.190681 654880 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d36540}
I1008 14:23:18.190708 654880 network_create.go:124] attempt to create docker network multinode-439307 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I1008 14:23:18.190760 654880 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-439307 multinode-439307
I1008 14:23:18.248882 654880 network_create.go:108] docker network multinode-439307 192.168.67.0/24 created
I1008 14:23:18.248914 654880 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-439307" container
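The freshly created network can be checked against the subnet chosen above; a minimal sketch using the standard inspect template (names from this run):
  $ docker network inspect multinode-439307 -f '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
  192.168.67.0/24 192.168.67.1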
I1008 14:23:18.249056 654880 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1008 14:23:18.266495 654880 cli_runner.go:164] Run: docker volume create multinode-439307 --label name.minikube.sigs.k8s.io=multinode-439307 --label created_by.minikube.sigs.k8s.io=true
I1008 14:23:18.284793 654880 oci.go:103] Successfully created a docker volume multinode-439307
I1008 14:23:18.284903 654880 cli_runner.go:164] Run: docker run --rm --name multinode-439307-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-439307 --entrypoint /usr/bin/test -v multinode-439307:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
I1008 14:23:18.663779 654880 oci.go:107] Successfully prepared a docker volume multinode-439307
I1008 14:23:18.663869 654880 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1008 14:23:18.663894 654880 kic.go:194] Starting extracting preloaded images to volume ...
I1008 14:23:18.663972 654880 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-439307:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
I1008 14:23:23.029420 654880 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-439307:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.36536671s)
I1008 14:23:23.029457 654880 kic.go:203] duration metric: took 4.365557889s to extract preloaded images to volume ...
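The preload lands in the named volume mounted at /var; one way to spot-check the extraction, assuming the kicbase image above ships /bin/ls:
  $ docker run --rm --entrypoint /bin/ls -v multinode-439307:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 /var/lib/containerd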
W1008 14:23:23.029548 654880 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W1008 14:23:23.029580 654880 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I1008 14:23:23.029617 654880 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1008 14:23:23.086211 654880 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-439307 --name multinode-439307 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-439307 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-439307 --network multinode-439307 --ip 192.168.67.2 --volume multinode-439307:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
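The resource flags of that docker run map straight onto the container's HostConfig (see the inspect dump earlier); for instance the --memory=3072mb limit:
  $ docker inspect -f '{{.HostConfig.Memory}}' multinode-439307
  3221225472
  # i.e. 3072 * 1024 * 1024 bytes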
I1008 14:23:23.354039 654880 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Running}}
I1008 14:23:23.375428 654880 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Status}}
I1008 14:23:23.393578 654880 cli_runner.go:164] Run: docker exec multinode-439307 stat /var/lib/dpkg/alternatives/iptables
I1008 14:23:23.439666 654880 oci.go:144] the created container "multinode-439307" has a running status.
I1008 14:23:23.439697 654880 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa...
I1008 14:23:23.880004 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I1008 14:23:23.880071 654880 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1008 14:23:23.906419 654880 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Status}}
I1008 14:23:23.924677 654880 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1008 14:23:23.924697 654880 kic_runner.go:114] Args: [docker exec --privileged multinode-439307 chown docker:docker /home/docker/.ssh/authorized_keys]
I1008 14:23:23.977234 654880 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Status}}
I1008 14:23:23.994199 654880 machine.go:93] provisionDockerMachine start ...
I1008 14:23:23.994313 654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
I1008 14:23:24.011535 654880 main.go:141] libmachine: Using SSH client type: native
I1008 14:23:24.011821 654880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 33306 <nil> <nil>}
I1008 14:23:24.011834 654880 main.go:141] libmachine: About to run SSH command:
hostname
I1008 14:23:24.012536 654880 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47140->127.0.0.1:33306: read: connection reset by peer
I1008 14:23:27.162380 654880 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-439307
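The "connection reset by peer" above is just sshd still coming up inside the container; the harness retries and succeeds three seconds later. The same session can be opened by hand with the key and forwarded port from this run:
  $ ssh -o StrictHostKeyChecking=no -i /home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa -p 33306 docker@127.0.0.1 hostname
  multinode-439307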
I1008 14:23:27.162426 654880 ubuntu.go:182] provisioning hostname "multinode-439307"
I1008 14:23:27.162486 654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
I1008 14:23:27.180732 654880 main.go:141] libmachine: Using SSH client type: native
I1008 14:23:27.180972 654880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 33306 <nil> <nil>}
I1008 14:23:27.181011 654880 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-439307 && echo "multinode-439307" | sudo tee /etc/hostname
I1008 14:23:27.339937 654880 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-439307
I1008 14:23:27.340069 654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
I1008 14:23:27.358403 654880 main.go:141] libmachine: Using SSH client type: native
I1008 14:23:27.358642 654880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 33306 <nil> <nil>}
I1008 14:23:27.358660 654880 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-439307' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-439307/g' /etc/hosts;
else
echo '127.0.1.1 multinode-439307' | sudo tee -a /etc/hosts;
fi
fi
I1008 14:23:27.507072 654880 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1008 14:23:27.507109 654880 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-513010/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-513010/.minikube}
I1008 14:23:27.507134 654880 ubuntu.go:190] setting up certificates
I1008 14:23:27.507146 654880 provision.go:84] configureAuth start
I1008 14:23:27.507227 654880 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-439307
I1008 14:23:27.525723 654880 provision.go:143] copyHostCerts
I1008 14:23:27.525774 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem
I1008 14:23:27.525813 654880 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem, removing ...
I1008 14:23:27.525825 654880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem
I1008 14:23:27.525916 654880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem (1123 bytes)
I1008 14:23:27.526089 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem
I1008 14:23:27.526119 654880 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem, removing ...
I1008 14:23:27.526129 654880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem
I1008 14:23:27.526175 654880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem (1675 bytes)
I1008 14:23:27.526250 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem
I1008 14:23:27.526274 654880 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem, removing ...
I1008 14:23:27.526283 654880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem
I1008 14:23:27.526323 654880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem (1078 bytes)
I1008 14:23:27.526398 654880 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca-key.pem org=jenkins.multinode-439307 san=[127.0.0.1 192.168.67.2 localhost minikube multinode-439307]
I1008 14:23:27.677124 654880 provision.go:177] copyRemoteCerts
I1008 14:23:27.677186 654880 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1008 14:23:27.677229 654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
I1008 14:23:27.696280 654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
I1008 14:23:27.800677 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem -> /etc/docker/server.pem
I1008 14:23:27.800760 654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
I1008 14:23:27.821249 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1008 14:23:27.821317 654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1008 14:23:27.839198 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1008 14:23:27.839275 654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1008 14:23:27.857535 654880 provision.go:87] duration metric: took 350.370022ms to configureAuth
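The generated server certificate can be verified against the san=[...] list above; a sketch assuming a reasonably recent openssl on the host:
  $ openssl x509 -in /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem -noout -ext subjectAltName
  # should list localhost, minikube, multinode-439307, 127.0.0.1 and 192.168.67.2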
I1008 14:23:27.857570 654880 ubuntu.go:206] setting minikube options for container-runtime
I1008 14:23:27.857755 654880 config.go:182] Loaded profile config "multinode-439307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1008 14:23:27.857770 654880 machine.go:96] duration metric: took 3.8635448s to provisionDockerMachine
I1008 14:23:27.857780 654880 client.go:171] duration metric: took 9.718653028s to LocalClient.Create
I1008 14:23:27.857826 654880 start.go:167] duration metric: took 9.718739942s to libmachine.API.Create "multinode-439307"
I1008 14:23:27.857838 654880 start.go:293] postStartSetup for "multinode-439307" (driver="docker")
I1008 14:23:27.857849 654880 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1008 14:23:27.857921 654880 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1008 14:23:27.857970 654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
I1008 14:23:27.876246 654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
I1008 14:23:27.982611 654880 ssh_runner.go:195] Run: cat /etc/os-release
I1008 14:23:27.986361 654880 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1008 14:23:27.986391 654880 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1008 14:23:27.986402 654880 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-513010/.minikube/addons for local assets ...
I1008 14:23:27.986465 654880 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-513010/.minikube/files for local assets ...
I1008 14:23:27.986549 654880 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem -> 5167872.pem in /etc/ssl/certs
I1008 14:23:27.986586 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem -> /etc/ssl/certs/5167872.pem
I1008 14:23:27.986676 654880 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1008 14:23:27.994501 654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem --> /etc/ssl/certs/5167872.pem (1708 bytes)
I1008 14:23:28.015944 654880 start.go:296] duration metric: took 158.091308ms for postStartSetup
I1008 14:23:28.016330 654880 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-439307
I1008 14:23:28.033722 654880 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/config.json ...
I1008 14:23:28.034024 654880 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1008 14:23:28.034069 654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
I1008 14:23:28.051472 654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
I1008 14:23:28.152696 654880 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1008 14:23:28.157578 654880 start.go:128] duration metric: took 10.02068325s to createHost
I1008 14:23:28.157607 654880 start.go:83] releasing machines lock for "multinode-439307", held for 10.020812018s
I1008 14:23:28.157686 654880 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-439307
I1008 14:23:28.175043 654880 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1008 14:23:28.175118 654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
I1008 14:23:28.175044 654880 ssh_runner.go:195] Run: cat /version.json
I1008 14:23:28.175238 654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
I1008 14:23:28.192842 654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
I1008 14:23:28.193859 654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
I1008 14:23:28.346450 654880 ssh_runner.go:195] Run: systemctl --version
I1008 14:23:28.353340 654880 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1008 14:23:28.358122 654880 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1008 14:23:28.358188 654880 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1008 14:23:28.384439 654880 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1008 14:23:28.384464 654880 start.go:495] detecting cgroup driver to use...
I1008 14:23:28.384495 654880 detect.go:190] detected "systemd" cgroup driver on host os
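The detection mirrors what the daemon itself reports (CgroupDriver:systemd in the docker info dumps above):
  $ docker info -f '{{.CgroupDriver}}'
  systemd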
I1008 14:23:28.384566 654880 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1008 14:23:28.399323 654880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1008 14:23:28.412378 654880 docker.go:218] disabling cri-docker service (if available) ...
I1008 14:23:28.412440 654880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1008 14:23:28.428847 654880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1008 14:23:28.446687 654880 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1008 14:23:28.526136 654880 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1008 14:23:28.614080 654880 docker.go:234] disabling docker service ...
I1008 14:23:28.614149 654880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1008 14:23:28.633742 654880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1008 14:23:28.647026 654880 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1008 14:23:28.727238 654880 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1008 14:23:28.808930 654880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1008 14:23:28.821761 654880 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1008 14:23:28.836040 654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1008 14:23:28.847491 654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1008 14:23:28.856854 654880 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
I1008 14:23:28.856920 654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1008 14:23:28.866133 654880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1008 14:23:28.875367 654880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1008 14:23:28.884374 654880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1008 14:23:28.893574 654880 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1008 14:23:28.902220 654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1008 14:23:28.911486 654880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1008 14:23:28.920623 654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1008 14:23:28.929996 654880 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1008 14:23:28.937926 654880 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1008 14:23:28.946203 654880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1008 14:23:29.028153 654880 ssh_runner.go:195] Run: sudo systemctl restart containerd
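The sed edits above should leave containerd on the systemd cgroup driver; a quick check inside the node container after the restart:
  $ docker exec multinode-439307 grep -n 'SystemdCgroup' /etc/containerd/config.toml
  # expect: SystemdCgroup = true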
I1008 14:23:29.132493 654880 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I1008 14:23:29.132559 654880 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1008 14:23:29.136824 654880 start.go:563] Will wait 60s for crictl version
I1008 14:23:29.136879 654880 ssh_runner.go:195] Run: which crictl
I1008 14:23:29.140620 654880 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1008 14:23:29.166990 654880 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.28
RuntimeApiVersion: v1
I1008 14:23:29.167069 654880 ssh_runner.go:195] Run: containerd --version
I1008 14:23:29.193758 654880 ssh_runner.go:195] Run: containerd --version
I1008 14:23:29.222040 654880 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
I1008 14:23:29.223401 654880 cli_runner.go:164] Run: docker network inspect multinode-439307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1008 14:23:29.240948 654880 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
I1008 14:23:29.245849 654880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1008 14:23:29.256781 654880 kubeadm.go:883] updating cluster {Name:multinode-439307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1008 14:23:29.256900 654880 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1008 14:23:29.256945 654880 ssh_runner.go:195] Run: sudo crictl images --output json
I1008 14:23:29.282114 654880 containerd.go:627] all images are preloaded for containerd runtime.
I1008 14:23:29.282137 654880 containerd.go:534] Images already preloaded, skipping extraction
I1008 14:23:29.282188 654880 ssh_runner.go:195] Run: sudo crictl images --output json
I1008 14:23:29.306940 654880 containerd.go:627] all images are preloaded for containerd runtime.
I1008 14:23:29.306963 654880 cache_images.go:85] Images are preloaded, skipping loading
I1008 14:23:29.306971 654880 kubeadm.go:934] updating node { 192.168.67.2 8443 v1.34.1 containerd true true} ...
I1008 14:23:29.307091 654880 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-439307 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
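The unit text above is what minikube renders for this node; once written to the machine, the drop-in can be inspected in place, assuming systemd is PID 1 in the kic container (it boots via /sbin/init, per the inspect dump earlier):
  $ docker exec multinode-439307 systemctl cat kubelet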
I1008 14:23:29.307158 654880 ssh_runner.go:195] Run: sudo crictl info
I1008 14:23:29.333006 654880 cni.go:84] Creating CNI manager for ""
I1008 14:23:29.333038 654880 cni.go:136] multinode detected (1 nodes found), recommending kindnet
I1008 14:23:29.333058 654880 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1008 14:23:29.333091 654880 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-439307 NodeName:multinode-439307 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1008 14:23:29.333227 654880 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.67.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "multinode-439307"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.67.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
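The config above is rendered from the kubeadm options struct before being scp'd to the node (the kubeadm.go:196 line marks the rendered output). A toy text/template sketch of that generation step; the parameter struct and cut-down template here are hypothetical, not minikube's actual ones:

package main

import (
    "os"
    "text/template"
)

// Hypothetical, cut-down parameters; the real generator feeds many more fields.
type kubeadmParams struct {
    AdvertiseAddress string
    BindPort         int
    PodSubnet        string
    ServiceSubnet    string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
    t := template.Must(template.New("kubeadm").Parse(tmpl))
    p := kubeadmParams{"192.168.67.2", 8443, "10.244.0.0/16", "10.96.0.0/12"}
    // Renders the same kind of multi-document YAML as the log above.
    if err := t.Execute(os.Stdout, p); err != nil {
        panic(err)
    }
}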
I1008 14:23:29.333298 654880 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1008 14:23:29.341693 654880 binaries.go:44] Found k8s binaries, skipping transfer
I1008 14:23:29.341752 654880 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1008 14:23:29.350015 654880 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
I1008 14:23:29.363305 654880 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1008 14:23:29.379485 654880 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
I1008 14:23:29.392555 654880 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I1008 14:23:29.396398 654880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
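That one-liner is an idempotent hosts-file update: strip any stale control-plane.minikube.internal record, append the current one, then sudo-copy the result back over /etc/hosts. The same filtering logic in plain Go (a sketch; it stages the result in /tmp rather than touching /etc/hosts directly):

package main

import (
    "os"
    "strings"
)

func main() {
    const host = "control-plane.minikube.internal"
    data, err := os.ReadFile("/etc/hosts")
    if err != nil {
        panic(err)
    }
    var kept []string
    for _, line := range strings.Split(string(data), "\n") {
        // Drop any existing record for the control-plane alias,
        // mirroring the grep -v $'\t...$' in the log above.
        if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+host) {
            continue
        }
        kept = append(kept, line)
    }
    kept = append(kept, "192.168.67.2\t"+host)
    // Write a staging copy; installing it over /etc/hosts needs root,
    // which is why the original pipeline ends in `sudo cp`.
    if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
        panic(err)
    }
}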
I1008 14:23:29.406631 654880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1008 14:23:29.483438 654880 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1008 14:23:29.509514 654880 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307 for IP: 192.168.67.2
I1008 14:23:29.509542 654880 certs.go:195] generating shared ca certs ...
I1008 14:23:29.509563 654880 certs.go:227] acquiring lock for ca certs: {Name:mk57aa9b2383fcc0908491da1ce926c707ff69a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1008 14:23:29.509734 654880 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.key
I1008 14:23:29.509788 654880 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.key
I1008 14:23:29.509802 654880 certs.go:257] generating profile certs ...
I1008 14:23:29.509910 654880 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.key
I1008 14:23:29.509939 654880 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.crt with IP's: []
I1008 14:23:29.610645 654880 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.crt ...
I1008 14:23:29.610679 654880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.crt: {Name:mkf1a19119257c35c0be4630341107abefe0712a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1008 14:23:29.610870 654880 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.key ...
I1008 14:23:29.610891 654880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.key: {Name:mk49a676c10aed18805a93ab7df3049b7dcfa5b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1008 14:23:29.610988 654880 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.key.4f7cecc8
I1008 14:23:29.611006 654880 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.crt.4f7cecc8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.67.2]
I1008 14:23:29.809665 654880 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.crt.4f7cecc8 ...
I1008 14:23:29.809701 654880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.crt.4f7cecc8: {Name:mk049ea208d229fa055039856d3579ebb9e0840d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1008 14:23:29.809887 654880 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.key.4f7cecc8 ...
I1008 14:23:29.809902 654880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.key.4f7cecc8: {Name:mkbbd81466b2cdd0cb264ee782d6df895a6557f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1008 14:23:29.809991 654880 certs.go:382] copying /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.crt.4f7cecc8 -> /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.crt
I1008 14:23:29.810098 654880 certs.go:386] copying /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.key.4f7cecc8 -> /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.key
I1008 14:23:29.810163 654880 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.key
I1008 14:23:29.810178 654880 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.crt with IP's: []
I1008 14:23:30.434846 654880 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.crt ...
I1008 14:23:30.434880 654880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.crt: {Name:mk74033eb7b0061c1da9d5a1860ee35ec43567a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1008 14:23:30.435058 654880 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.key ...
I1008 14:23:30.435073 654880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.key: {Name:mkb2b7339b2c5bc4801b86127d693ce13ee35f2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
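Each "generating signed profile cert" step above reduces to crypto/x509: fill in a certificate template, sign it with the CA key, PEM-encode the result. A compact, self-contained sketch of that flow, using a throwaway in-process CA in place of minikube's ca.key:

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "os"
    "time"
)

func main() {
    // Throwaway CA standing in for minikubeCA.
    caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    caTmpl := &x509.Certificate{
        SerialNumber:          big.NewInt(1),
        Subject:               pkix.Name{CommonName: "minikubeCA"},
        NotBefore:             time.Now(),
        NotAfter:              time.Now().Add(24 * time.Hour),
        IsCA:                  true,
        KeyUsage:              x509.KeyUsageCertSign,
        BasicConstraintsValid: true,
    }
    caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    if err != nil {
        panic(err)
    }
    caCert, _ := x509.ParseCertificate(caDER)

    // Client certificate (the "minikube-user" role above) signed by the CA.
    clientKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    clientTmpl := &x509.Certificate{
        SerialNumber: big.NewInt(2),
        Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(24 * time.Hour),
        KeyUsage:     x509.KeyUsageDigitalSignature,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
    }
    der, err := x509.CreateCertificate(rand.Reader, clientTmpl, caCert, &clientKey.PublicKey, caKey)
    if err != nil {
        panic(err)
    }
    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}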
I1008 14:23:30.435152 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1008 14:23:30.435180 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1008 14:23:30.435191 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1008 14:23:30.435204 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1008 14:23:30.435216 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1008 14:23:30.435226 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1008 14:23:30.435239 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1008 14:23:30.435249 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1008 14:23:30.435302 654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787.pem (1338 bytes)
W1008 14:23:30.435341 654880 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787_empty.pem, impossibly tiny 0 bytes
I1008 14:23:30.435351 654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca-key.pem (1675 bytes)
I1008 14:23:30.435377 654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem (1078 bytes)
I1008 14:23:30.435399 654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem (1123 bytes)
I1008 14:23:30.435419 654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem (1675 bytes)
I1008 14:23:30.435456 654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem (1708 bytes)
I1008 14:23:30.435480 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1008 14:23:30.435493 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787.pem -> /usr/share/ca-certificates/516787.pem
I1008 14:23:30.435505 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem -> /usr/share/ca-certificates/5167872.pem
I1008 14:23:30.436154 654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1008 14:23:30.454787 654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1008 14:23:30.472361 654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1008 14:23:30.489956 654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1008 14:23:30.507583 654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1008 14:23:30.525415 654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1008 14:23:30.543120 654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1008 14:23:30.560854 654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1008 14:23:30.578730 654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1008 14:23:30.599796 654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787.pem --> /usr/share/ca-certificates/516787.pem (1338 bytes)
I1008 14:23:30.617312 654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem --> /usr/share/ca-certificates/5167872.pem (1708 bytes)
I1008 14:23:30.635626 654880 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1008 14:23:30.648875 654880 ssh_runner.go:195] Run: openssl version
I1008 14:23:30.655674 654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516787.pem && ln -fs /usr/share/ca-certificates/516787.pem /etc/ssl/certs/516787.pem"
I1008 14:23:30.664582 654880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516787.pem
I1008 14:23:30.668786 654880 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 8 14:09 /usr/share/ca-certificates/516787.pem
I1008 14:23:30.668853 654880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516787.pem
I1008 14:23:30.703803 654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516787.pem /etc/ssl/certs/51391683.0"
I1008 14:23:30.713696 654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5167872.pem && ln -fs /usr/share/ca-certificates/5167872.pem /etc/ssl/certs/5167872.pem"
I1008 14:23:30.722925 654880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5167872.pem
I1008 14:23:30.726802 654880 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 8 14:09 /usr/share/ca-certificates/5167872.pem
I1008 14:23:30.726862 654880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5167872.pem
I1008 14:23:30.760940 654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5167872.pem /etc/ssl/certs/3ec20f2e.0"
I1008 14:23:30.770017 654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1008 14:23:30.778517 654880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1008 14:23:30.782405 654880 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 8 14:03 /usr/share/ca-certificates/minikubeCA.pem
I1008 14:23:30.782465 654880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1008 14:23:30.816706 654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1008 14:23:30.825787 654880 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1008 14:23:30.829676 654880 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1008 14:23:30.829741 654880 kubeadm.go:400] StartCluster: {Name:multinode-439307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1008 14:23:30.829825 654880 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1008 14:23:30.829872 654880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1008 14:23:30.857015 654880 cri.go:89] found id: ""
I1008 14:23:30.857078 654880 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1008 14:23:30.865318 654880 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1008 14:23:30.873182 654880 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I1008 14:23:30.873235 654880 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1008 14:23:30.880797 654880 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1008 14:23:30.880817 654880 kubeadm.go:157] found existing configuration files:
I1008 14:23:30.880879 654880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1008 14:23:30.888347 654880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1008 14:23:30.888425 654880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1008 14:23:30.895504 654880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1008 14:23:30.903314 654880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1008 14:23:30.903371 654880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1008 14:23:30.911037 654880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1008 14:23:30.918990 654880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1008 14:23:30.919046 654880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1008 14:23:30.927124 654880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1008 14:23:30.935194 654880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1008 14:23:30.935282 654880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1008 14:23:30.943051 654880 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1008 14:23:31.011073 654880 kubeadm.go:318] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
I1008 14:23:31.072669 654880 kubeadm.go:318] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1008 14:23:42.013295 654880 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
I1008 14:23:42.013386 654880 kubeadm.go:318] [preflight] Running pre-flight checks
I1008 14:23:42.013526 654880 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
I1008 14:23:42.013610 654880 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
I1008 14:23:42.013681 654880 kubeadm.go:318] OS: Linux
I1008 14:23:42.013738 654880 kubeadm.go:318] CGROUPS_CPU: enabled
I1008 14:23:42.013787 654880 kubeadm.go:318] CGROUPS_CPUSET: enabled
I1008 14:23:42.013830 654880 kubeadm.go:318] CGROUPS_DEVICES: enabled
I1008 14:23:42.013874 654880 kubeadm.go:318] CGROUPS_FREEZER: enabled
I1008 14:23:42.013925 654880 kubeadm.go:318] CGROUPS_MEMORY: enabled
I1008 14:23:42.014006 654880 kubeadm.go:318] CGROUPS_PIDS: enabled
I1008 14:23:42.014054 654880 kubeadm.go:318] CGROUPS_HUGETLB: enabled
I1008 14:23:42.014092 654880 kubeadm.go:318] CGROUPS_IO: enabled
I1008 14:23:42.014187 654880 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
I1008 14:23:42.014301 654880 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1008 14:23:42.014382 654880 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1008 14:23:42.014436 654880 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1008 14:23:42.015973 654880 out.go:252] - Generating certificates and keys ...
I1008 14:23:42.016057 654880 kubeadm.go:318] [certs] Using existing ca certificate authority
I1008 14:23:42.016112 654880 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
I1008 14:23:42.016189 654880 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
I1008 14:23:42.016266 654880 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
I1008 14:23:42.016339 654880 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
I1008 14:23:42.016411 654880 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
I1008 14:23:42.016496 654880 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
I1008 14:23:42.016630 654880 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-439307] and IPs [192.168.67.2 127.0.0.1 ::1]
I1008 14:23:42.016681 654880 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
I1008 14:23:42.016787 654880 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-439307] and IPs [192.168.67.2 127.0.0.1 ::1]
I1008 14:23:42.016843 654880 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
I1008 14:23:42.016903 654880 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
I1008 14:23:42.016945 654880 kubeadm.go:318] [certs] Generating "sa" key and public key
I1008 14:23:42.017040 654880 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1008 14:23:42.017097 654880 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
I1008 14:23:42.017144 654880 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1008 14:23:42.017213 654880 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1008 14:23:42.017286 654880 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1008 14:23:42.017348 654880 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1008 14:23:42.017478 654880 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1008 14:23:42.017571 654880 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1008 14:23:42.019103 654880 out.go:252] - Booting up control plane ...
I1008 14:23:42.019195 654880 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1008 14:23:42.019290 654880 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1008 14:23:42.019381 654880 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1008 14:23:42.019498 654880 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1008 14:23:42.019651 654880 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1008 14:23:42.019758 654880 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1008 14:23:42.019874 654880 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1008 14:23:42.019923 654880 kubeadm.go:318] [kubelet-start] Starting the kubelet
I1008 14:23:42.020112 654880 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1008 14:23:42.020255 654880 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1008 14:23:42.020363 654880 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.925821ms
I1008 14:23:42.020445 654880 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1008 14:23:42.020510 654880 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.67.2:8443/livez
I1008 14:23:42.020603 654880 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1008 14:23:42.020682 654880 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1008 14:23:42.020747 654880 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.97144594s
I1008 14:23:42.020832 654880 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.662946335s
I1008 14:23:42.020919 654880 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.501501466s
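The control-plane-check phase polls each component's local health endpoint until it returns 200 or the 4m0s budget runs out. A sketch of one such probe against the scheduler's livez port; InsecureSkipVerify here stands in for however kubeadm actually configures its loopback client:

package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
    "time"
)

func main() {
    client := &http.Client{
        Timeout: 2 * time.Second,
        // Loopback probe only; a real client should verify the serving cert.
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    deadline := time.Now().Add(4 * time.Minute)
    for time.Now().Before(deadline) {
        resp, err := client.Get("https://127.0.0.1:10259/livez")
        if err == nil {
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("kube-scheduler is healthy")
                return
            }
        }
        time.Sleep(500 * time.Millisecond)
    }
    fmt.Println("timed out waiting for kube-scheduler")
}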
I1008 14:23:42.021101 654880 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1008 14:23:42.021289 654880 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1008 14:23:42.021368 654880 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
I1008 14:23:42.021621 654880 kubeadm.go:318] [mark-control-plane] Marking the node multinode-439307 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1008 14:23:42.021687 654880 kubeadm.go:318] [bootstrap-token] Using token: i5r6w0.sj0dfahq56oi5osn
I1008 14:23:42.023115 654880 out.go:252] - Configuring RBAC rules ...
I1008 14:23:42.023282 654880 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1008 14:23:42.023409 654880 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1008 14:23:42.023542 654880 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1008 14:23:42.023709 654880 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1008 14:23:42.023851 654880 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1008 14:23:42.023949 654880 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1008 14:23:42.024072 654880 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1008 14:23:42.024109 654880 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
I1008 14:23:42.024148 654880 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
I1008 14:23:42.024154 654880 kubeadm.go:318]
I1008 14:23:42.024215 654880 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
I1008 14:23:42.024224 654880 kubeadm.go:318]
I1008 14:23:42.024309 654880 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
I1008 14:23:42.024319 654880 kubeadm.go:318]
I1008 14:23:42.024361 654880 kubeadm.go:318] mkdir -p $HOME/.kube
I1008 14:23:42.024433 654880 kubeadm.go:318] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1008 14:23:42.024475 654880 kubeadm.go:318] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1008 14:23:42.024485 654880 kubeadm.go:318]
I1008 14:23:42.024537 654880 kubeadm.go:318] Alternatively, if you are the root user, you can run:
I1008 14:23:42.024543 654880 kubeadm.go:318]
I1008 14:23:42.024588 654880 kubeadm.go:318] export KUBECONFIG=/etc/kubernetes/admin.conf
I1008 14:23:42.024595 654880 kubeadm.go:318]
I1008 14:23:42.024647 654880 kubeadm.go:318] You should now deploy a pod network to the cluster.
I1008 14:23:42.024727 654880 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1008 14:23:42.024793 654880 kubeadm.go:318] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1008 14:23:42.024806 654880 kubeadm.go:318]
I1008 14:23:42.024904 654880 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
I1008 14:23:42.025017 654880 kubeadm.go:318] and service account keys on each node and then running the following as root:
I1008 14:23:42.025034 654880 kubeadm.go:318]
I1008 14:23:42.025112 654880 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token i5r6w0.sj0dfahq56oi5osn \
I1008 14:23:42.025201 654880 kubeadm.go:318] --discovery-token-ca-cert-hash sha256:29d08006b7495c78b5f27aaa9701a82f373226e18456de05d156e89bccfbd06f \
I1008 14:23:42.025232 654880 kubeadm.go:318] --control-plane
I1008 14:23:42.025242 654880 kubeadm.go:318]
I1008 14:23:42.025327 654880 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
I1008 14:23:42.025334 654880 kubeadm.go:318]
I1008 14:23:42.025424 654880 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token i5r6w0.sj0dfahq56oi5osn \
I1008 14:23:42.025535 654880 kubeadm.go:318] --discovery-token-ca-cert-hash sha256:29d08006b7495c78b5f27aaa9701a82f373226e18456de05d156e89bccfbd06f
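The --discovery-token-ca-cert-hash printed in the join command is the hex-encoded SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info, so it can be recomputed from ca.crt on the node:

package main

import (
    "crypto/sha256"
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
)

func main() {
    pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    if err != nil {
        panic(err)
    }
    block, _ := pem.Decode(pemBytes)
    if block == nil {
        panic("no PEM block in ca.crt")
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        panic(err)
    }
    // Hash the DER-encoded SubjectPublicKeyInfo, as kubeadm does.
    spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    if err != nil {
        panic(err)
    }
    fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}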
I1008 14:23:42.025548 654880 cni.go:84] Creating CNI manager for ""
I1008 14:23:42.025554 654880 cni.go:136] multinode detected (1 nodes found), recommending kindnet
I1008 14:23:42.027007 654880 out.go:179] * Configuring CNI (Container Networking Interface) ...
I1008 14:23:42.028122 654880 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I1008 14:23:42.033376 654880 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
I1008 14:23:42.033399 654880 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
I1008 14:23:42.047336 654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I1008 14:23:42.257680 654880 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1008 14:23:42.257777 654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1008 14:23:42.257788 654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-439307 minikube.k8s.io/updated_at=2025_10_08T14_23_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555 minikube.k8s.io/name=multinode-439307 minikube.k8s.io/primary=true
I1008 14:23:42.267920 654880 ops.go:34] apiserver oom_adj: -16
I1008 14:23:42.333752 654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1008 14:23:42.834103 654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1008 14:23:43.334513 654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1008 14:23:43.834031 654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1008 14:23:44.334151 654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1008 14:23:44.834515 654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1008 14:23:45.334213 654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1008 14:23:45.834573 654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1008 14:23:46.334831 654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1008 14:23:46.833924 654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1008 14:23:46.910005 654880 kubeadm.go:1113] duration metric: took 4.652297133s to wait for elevateKubeSystemPrivileges
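The burst of `kubectl get sa default` runs above is a plain fixed-interval poll: retry roughly every 500ms until the default service account exists, which is what elevateKubeSystemPrivileges waits on before binding the RBAC role. The shape of that loop, with a local exec.Command as a hypothetical stand-in for the remote ssh_runner:

package main

import (
    "fmt"
    "os/exec"
    "time"
)

func main() {
    deadline := time.Now().Add(2 * time.Minute)
    for time.Now().Before(deadline) {
        // Hypothetical local equivalent of the remote kubectl invocation above.
        err := exec.Command("kubectl", "get", "sa", "default",
            "--kubeconfig", "/var/lib/minikube/kubeconfig").Run()
        if err == nil {
            fmt.Println("default service account exists")
            return
        }
        time.Sleep(500 * time.Millisecond)
    }
    fmt.Println("timed out waiting for default service account")
}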
I1008 14:23:46.910044 654880 kubeadm.go:402] duration metric: took 16.080310474s to StartCluster
I1008 14:23:46.910065 654880 settings.go:142] acquiring lock: {Name:mk8e4c0f084ac2281293848ef8bd3096692e3417 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1008 14:23:46.910151 654880 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/21681-513010/kubeconfig
I1008 14:23:46.910878 654880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-513010/kubeconfig: {Name:mk629eb0239182a6659e3d616a150e5234772a5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1008 14:23:46.911151 654880 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1008 14:23:46.911192 654880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1008 14:23:46.911219 654880 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1008 14:23:46.911355 654880 addons.go:69] Setting storage-provisioner=true in profile "multinode-439307"
I1008 14:23:46.911395 654880 addons.go:238] Setting addon storage-provisioner=true in "multinode-439307"
I1008 14:23:46.911396 654880 addons.go:69] Setting default-storageclass=true in profile "multinode-439307"
I1008 14:23:46.911426 654880 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-439307"
I1008 14:23:46.911435 654880 host.go:66] Checking if "multinode-439307" exists ...
I1008 14:23:46.911401 654880 config.go:182] Loaded profile config "multinode-439307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1008 14:23:46.911826 654880 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Status}}
I1008 14:23:46.912016 654880 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Status}}
I1008 14:23:46.912657 654880 out.go:179] * Verifying Kubernetes components...
I1008 14:23:46.917571 654880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1008 14:23:46.938275 654880 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1008 14:23:46.938669 654880 kapi.go:59] client config for multinode-439307: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.key", CAFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1008 14:23:46.939674 654880 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1008 14:23:46.939699 654880 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1008 14:23:46.939706 654880 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1008 14:23:46.939712 654880 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1008 14:23:46.939717 654880 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1008 14:23:46.939726 654880 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
I1008 14:23:46.940295 654880 addons.go:238] Setting addon default-storageclass=true in "multinode-439307"
I1008 14:23:46.940373 654880 host.go:66] Checking if "multinode-439307" exists ...
I1008 14:23:46.940553 654880 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1008 14:23:46.940574 654880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1008 14:23:46.940644 654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
I1008 14:23:46.940902 654880 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Status}}
I1008 14:23:46.975775 654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
I1008 14:23:46.977730 654880 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I1008 14:23:46.977762 654880 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1008 14:23:46.977823 654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
I1008 14:23:47.019509 654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
I1008 14:23:47.062130 654880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.67.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1008 14:23:47.114509 654880 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1008 14:23:47.131279 654880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1008 14:23:47.146387 654880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1008 14:23:47.235407 654880 start.go:976] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS's ConfigMap
I1008 14:23:47.236068 654880 kapi.go:59] client config for multinode-439307: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.key", CAFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1008 14:23:47.236068 654880 kapi.go:59] client config for multinode-439307: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.key", CAFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1008 14:23:47.236491 654880 node_ready.go:35] waiting up to 6m0s for node "multinode-439307" to be "Ready" ...
I1008 14:23:47.437485 654880 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
I1008 14:23:47.438409 654880 addons.go:514] duration metric: took 527.189163ms for enable addons: enabled=[storage-provisioner default-storageclass]
I1008 14:23:47.740068 654880 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-439307" context rescaled to 1 replicas
W1008 14:23:49.240140 654880 node_ready.go:57] node "multinode-439307" has "Ready":"False" status (will retry)
W1008 14:23:51.240341 654880 node_ready.go:57] node "multinode-439307" has "Ready":"False" status (will retry)
W1008 14:23:53.740468 654880 node_ready.go:57] node "multinode-439307" has "Ready":"False" status (will retry)
W1008 14:23:55.740674 654880 node_ready.go:57] node "multinode-439307" has "Ready":"False" status (will retry)
I1008 14:23:58.240406 654880 node_ready.go:49] node "multinode-439307" is "Ready"
I1008 14:23:58.240442 654880 node_ready.go:38] duration metric: took 11.003905737s for node "multinode-439307" to be "Ready" ...
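node_ready polls the Node object until its Ready condition reports True. Roughly the equivalent check with client-go (a sketch; it assumes the kubeconfig path minikube keeps on the node is reachable from wherever this runs):

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    for {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-439307", metav1.GetOptions{})
        if err == nil {
            for _, c := range node.Status.Conditions {
                // NodeReady flips to True once the kubelet and CNI are up.
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    fmt.Println("node is Ready")
                    return
                }
            }
        }
        time.Sleep(2 * time.Second)
    }
}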
I1008 14:23:58.240462 654880 api_server.go:52] waiting for apiserver process to appear ...
I1008 14:23:58.240528 654880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1008 14:23:58.256864 654880 api_server.go:72] duration metric: took 11.345663766s to wait for apiserver process to appear ...
I1008 14:23:58.256909 654880 api_server.go:88] waiting for apiserver healthz status ...
I1008 14:23:58.256937 654880 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I1008 14:23:58.261705 654880 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
ok
I1008 14:23:58.262918 654880 api_server.go:141] control plane version: v1.34.1
I1008 14:23:58.262945 654880 api_server.go:131] duration metric: took 6.028377ms to wait for apiserver health ...
I1008 14:23:58.262956 654880 system_pods.go:43] waiting for kube-system pods to appear ...
I1008 14:23:58.267800 654880 system_pods.go:59] 8 kube-system pods found
I1008 14:23:58.267853 654880 system_pods.go:61] "coredns-66bc5c9577-llvkc" [a445b5ef-8d30-4b7c-a40f-77f2a9072e7f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1008 14:23:58.267870 654880 system_pods.go:61] "etcd-multinode-439307" [1989112b-ab3b-4883-9f2c-19ee41565704] Running
I1008 14:23:58.267878 654880 system_pods.go:61] "kindnet-l6pqj" [fea0f284-17d4-438c-91a6-14831ce6ce5c] Running
I1008 14:23:58.267884 654880 system_pods.go:61] "kube-apiserver-multinode-439307" [18f77e80-010e-4779-9028-6093a55219c5] Running
I1008 14:23:58.267889 654880 system_pods.go:61] "kube-controller-manager-multinode-439307" [f4954c96-43a5-408b-a99e-423ab197e112] Running
I1008 14:23:58.267903 654880 system_pods.go:61] "kube-proxy-sjzfx" [1211872c-1472-435c-a117-2656ba2fca8e] Running
I1008 14:23:58.267908 654880 system_pods.go:61] "kube-scheduler-multinode-439307" [a940c86a-bd75-4100-86f5-0b6a53040f2b] Running
I1008 14:23:58.267914 654880 system_pods.go:61] "storage-provisioner" [e1d410c3-de2a-4e2a-88c1-93970ce8b254] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1008 14:23:58.267923 654880 system_pods.go:74] duration metric: took 4.960123ms to wait for pod list to return data ...
I1008 14:23:58.267935 654880 default_sa.go:34] waiting for default service account to be created ...
I1008 14:23:58.270747 654880 default_sa.go:45] found service account: "default"
I1008 14:23:58.270770 654880 default_sa.go:55] duration metric: took 2.828587ms for default service account to be created ...
I1008 14:23:58.270784 654880 system_pods.go:116] waiting for k8s-apps to be running ...
I1008 14:23:58.273850 654880 system_pods.go:86] 8 kube-system pods found
I1008 14:23:58.273881 654880 system_pods.go:89] "coredns-66bc5c9577-llvkc" [a445b5ef-8d30-4b7c-a40f-77f2a9072e7f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1008 14:23:58.273886 654880 system_pods.go:89] "etcd-multinode-439307" [1989112b-ab3b-4883-9f2c-19ee41565704] Running
I1008 14:23:58.273892 654880 system_pods.go:89] "kindnet-l6pqj" [fea0f284-17d4-438c-91a6-14831ce6ce5c] Running
I1008 14:23:58.273896 654880 system_pods.go:89] "kube-apiserver-multinode-439307" [18f77e80-010e-4779-9028-6093a55219c5] Running
I1008 14:23:58.273899 654880 system_pods.go:89] "kube-controller-manager-multinode-439307" [f4954c96-43a5-408b-a99e-423ab197e112] Running
I1008 14:23:58.273903 654880 system_pods.go:89] "kube-proxy-sjzfx" [1211872c-1472-435c-a117-2656ba2fca8e] Running
I1008 14:23:58.273911 654880 system_pods.go:89] "kube-scheduler-multinode-439307" [a940c86a-bd75-4100-86f5-0b6a53040f2b] Running
I1008 14:23:58.273916 654880 system_pods.go:89] "storage-provisioner" [e1d410c3-de2a-4e2a-88c1-93970ce8b254] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1008 14:23:58.273944 654880 retry.go:31] will retry after 204.950572ms: missing components: kube-dns
I1008 14:23:58.483515 654880 system_pods.go:86] 8 kube-system pods found
I1008 14:23:58.483557 654880 system_pods.go:89] "coredns-66bc5c9577-llvkc" [a445b5ef-8d30-4b7c-a40f-77f2a9072e7f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1008 14:23:58.483566 654880 system_pods.go:89] "etcd-multinode-439307" [1989112b-ab3b-4883-9f2c-19ee41565704] Running
I1008 14:23:58.483573 654880 system_pods.go:89] "kindnet-l6pqj" [fea0f284-17d4-438c-91a6-14831ce6ce5c] Running
I1008 14:23:58.483577 654880 system_pods.go:89] "kube-apiserver-multinode-439307" [18f77e80-010e-4779-9028-6093a55219c5] Running
I1008 14:23:58.483581 654880 system_pods.go:89] "kube-controller-manager-multinode-439307" [f4954c96-43a5-408b-a99e-423ab197e112] Running
I1008 14:23:58.483586 654880 system_pods.go:89] "kube-proxy-sjzfx" [1211872c-1472-435c-a117-2656ba2fca8e] Running
I1008 14:23:58.483591 654880 system_pods.go:89] "kube-scheduler-multinode-439307" [a940c86a-bd75-4100-86f5-0b6a53040f2b] Running
I1008 14:23:58.483605 654880 system_pods.go:89] "storage-provisioner" [e1d410c3-de2a-4e2a-88c1-93970ce8b254] Running
I1008 14:23:58.483625 654880 system_pods.go:126] duration metric: took 212.832591ms to wait for k8s-apps to be running ...
I1008 14:23:58.483639 654880 system_svc.go:44] waiting for kubelet service to be running ....
I1008 14:23:58.483696 654880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1008 14:23:58.497700 654880 system_svc.go:56] duration metric: took 14.052432ms WaitForService to wait for kubelet
I1008 14:23:58.497735 654880 kubeadm.go:586] duration metric: took 11.586544695s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1008 14:23:58.497762 654880 node_conditions.go:102] verifying NodePressure condition ...
I1008 14:23:58.501151 654880 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I1008 14:23:58.501216 654880 node_conditions.go:123] node cpu capacity is 8
I1008 14:23:58.501243 654880 node_conditions.go:105] duration metric: took 3.474604ms to run NodePressure ...
I1008 14:23:58.501258 654880 start.go:241] waiting for startup goroutines ...
I1008 14:23:58.501268 654880 start.go:246] waiting for cluster config update ...
I1008 14:23:58.501283 654880 start.go:255] writing updated cluster config ...
I1008 14:23:58.503410 654880 out.go:203]
I1008 14:23:58.504758 654880 config.go:182] Loaded profile config "multinode-439307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1008 14:23:58.504834 654880 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/config.json ...
I1008 14:23:58.506429 654880 out.go:179] * Starting "multinode-439307-m02" worker node in "multinode-439307" cluster
I1008 14:23:58.508117 654880 cache.go:123] Beginning downloading kic base image for docker with containerd
I1008 14:23:58.509438 654880 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
I1008 14:23:58.510664 654880 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1008 14:23:58.510689 654880 cache.go:58] Caching tarball of preloaded images
I1008 14:23:58.510780 654880 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
I1008 14:23:58.510807 654880 preload.go:233] Found /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I1008 14:23:58.510816 654880 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
I1008 14:23:58.510889 654880 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/config.json ...
I1008 14:23:58.532250 654880 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
I1008 14:23:58.532275 654880 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
I1008 14:23:58.532296 654880 cache.go:232] Successfully downloaded all kic artifacts
I1008 14:23:58.532333 654880 start.go:360] acquireMachinesLock for multinode-439307-m02: {Name:mkd110918dd178f7f1251cdb6cbe49ec290497a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1008 14:23:58.532447 654880 start.go:364] duration metric: took 91.76µs to acquireMachinesLock for "multinode-439307-m02"
I1008 14:23:58.532478 654880 start.go:93] Provisioning new machine with config: &{Name:multinode-439307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:false Worker:true}
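
[annotation] The config dump above is the driver-agnostic cluster/machine config minikube carries through provisioning; the Nodes slice is what drives the m02 work that follows (note m02 has no IP yet at this point). A minimal Go sketch of that shape, using simplified stand-in types rather than minikube's actual pkg/minikube/config structs:

package main

import (
	"encoding/json"
	"fmt"
)

// Simplified stand-ins for the config dumped above; the real types carry
// many more fields (mounts, addons, verify-components map, etc.).
type Node struct {
	Name              string
	IP                string
	Port              int
	KubernetesVersion string
	ContainerRuntime  string
	ControlPlane      bool
	Worker            bool
}

type ClusterConfig struct {
	Name   string
	Driver string
	Memory int // MB
	CPUs   int
	Nodes  []Node
}

func main() {
	// Values copied from the log: the existing control plane plus the new
	// worker m02, whose IP is assigned only once its container exists.
	cc := ClusterConfig{
		Name:   "multinode-439307",
		Driver: "docker",
		Memory: 3072,
		CPUs:   2,
		Nodes: []Node{
			{IP: "192.168.67.2", Port: 8443, KubernetesVersion: "v1.34.1", ContainerRuntime: "containerd", ControlPlane: true, Worker: true},
			{Name: "m02", Port: 8443, KubernetesVersion: "v1.34.1", ContainerRuntime: "containerd", Worker: true},
		},
	}
	out, err := json.MarshalIndent(cc, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
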
I1008 14:23:58.532562 654880 start.go:125] createHost starting for "m02" (driver="docker")
I1008 14:23:58.535151 654880 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I1008 14:23:58.535282 654880 start.go:159] libmachine.API.Create for "multinode-439307" (driver="docker")
I1008 14:23:58.535317 654880 client.go:168] LocalClient.Create starting
I1008 14:23:58.535405 654880 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem
I1008 14:23:58.535446 654880 main.go:141] libmachine: Decoding PEM data...
I1008 14:23:58.535467 654880 main.go:141] libmachine: Parsing certificate...
I1008 14:23:58.535539 654880 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem
I1008 14:23:58.535570 654880 main.go:141] libmachine: Decoding PEM data...
I1008 14:23:58.535600 654880 main.go:141] libmachine: Parsing certificate...
I1008 14:23:58.535837 654880 cli_runner.go:164] Run: docker network inspect multinode-439307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1008 14:23:58.553063 654880 network_create.go:77] Found existing network {name:multinode-439307 subnet:0xc00096a0f0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 67 1] mtu:1500}
I1008 14:23:58.553121 654880 kic.go:121] calculated static IP "192.168.67.3" for the "multinode-439307-m02" container
I1008 14:23:58.553194 654880 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1008 14:23:58.571642 654880 cli_runner.go:164] Run: docker volume create multinode-439307-m02 --label name.minikube.sigs.k8s.io=multinode-439307-m02 --label created_by.minikube.sigs.k8s.io=true
I1008 14:23:58.590094 654880 oci.go:103] Successfully created a docker volume multinode-439307-m02
I1008 14:23:58.590216 654880 cli_runner.go:164] Run: docker run --rm --name multinode-439307-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-439307-m02 --entrypoint /usr/bin/test -v multinode-439307-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
I1008 14:23:58.980132 654880 oci.go:107] Successfully prepared a docker volume multinode-439307-m02
I1008 14:23:58.980183 654880 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1008 14:23:58.980210 654880 kic.go:194] Starting extracting preloaded images to volume ...
I1008 14:23:58.980284 654880 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-439307-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
I1008 14:24:03.452942 654880 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-439307-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.472598209s)
I1008 14:24:03.452997 654880 kic.go:203] duration metric: took 4.472765246s to extract preloaded images to volume ...
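
[annotation] The 4.47s step above unpacks the preloaded-images tarball into the node's Docker volume by running tar inside a throwaway kicbase container, so the new node boots with its images already in place. A sketch of that exact invocation via os/exec (paths, volume name, and image digest are taken verbatim from the log; minikube's own cli_runner additionally handles retries and timing):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Arguments mirror the `docker run` logged above: mount the tarball
	// read-only, mount the node volume, and extract with lz4 decompression.
	tarball := "/home/jenkins/minikube-integration/21681-513010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4"
	volume := "multinode-439307-m02"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92"

	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
}
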
W1008 14:24:03.453098 654880 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W1008 14:24:03.453135 654880 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I1008 14:24:03.453189 654880 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1008 14:24:03.514279 654880 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-439307-m02 --name multinode-439307-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-439307-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-439307-m02 --network multinode-439307 --ip 192.168.67.3 --volume multinode-439307-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
I1008 14:24:03.806322 654880 cli_runner.go:164] Run: docker container inspect multinode-439307-m02 --format={{.State.Running}}
I1008 14:24:03.825192 654880 cli_runner.go:164] Run: docker container inspect multinode-439307-m02 --format={{.State.Status}}
I1008 14:24:03.843451 654880 cli_runner.go:164] Run: docker exec multinode-439307-m02 stat /var/lib/dpkg/alternatives/iptables
I1008 14:24:03.887312 654880 oci.go:144] the created container "multinode-439307-m02" has a running status.
I1008 14:24:03.887351 654880 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m02/id_rsa...
I1008 14:24:03.981880 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I1008 14:24:03.981940 654880 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1008 14:24:04.008560 654880 cli_runner.go:164] Run: docker container inspect multinode-439307-m02 --format={{.State.Status}}
I1008 14:24:04.028620 654880 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1008 14:24:04.028641 654880 kic_runner.go:114] Args: [docker exec --privileged multinode-439307-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
I1008 14:24:04.085475 654880 cli_runner.go:164] Run: docker container inspect multinode-439307-m02 --format={{.State.Status}}
I1008 14:24:04.104162 654880 machine.go:93] provisionDockerMachine start ...
I1008 14:24:04.104268 654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m02
I1008 14:24:04.125664 654880 main.go:141] libmachine: Using SSH client type: native
I1008 14:24:04.126030 654880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 33311 <nil> <nil>}
I1008 14:24:04.126052 654880 main.go:141] libmachine: About to run SSH command:
hostname
I1008 14:24:04.126862 654880 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34090->127.0.0.1:33311: read: connection reset by peer
I1008 14:24:07.275164 654880 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-439307-m02
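
[annotation] Because the container publishes sshd to an ephemeral host port (--publish=127.0.0.1::22 in the docker run above), every SSH dial first resolves the mapping with the `docker container inspect -f` template seen in the log (here it resolves to 33311). A minimal Go equivalent that shells out to the same template:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Resolve the host port mapped to the container's 22/tcp, exactly as the
// inspect template in the log does.
func hostSSHPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("multinode-439307-m02")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh docker@127.0.0.1 -p", port) // prints 33311 in this run
}
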
I1008 14:24:07.275197 654880 ubuntu.go:182] provisioning hostname "multinode-439307-m02"
I1008 14:24:07.275268 654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m02
I1008 14:24:07.293538 654880 main.go:141] libmachine: Using SSH client type: native
I1008 14:24:07.293764 654880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 33311 <nil> <nil>}
I1008 14:24:07.293777 654880 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-439307-m02 && echo "multinode-439307-m02" | sudo tee /etc/hostname
I1008 14:24:07.452309 654880 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-439307-m02
I1008 14:24:07.452395 654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m02
I1008 14:24:07.470682 654880 main.go:141] libmachine: Using SSH client type: native
I1008 14:24:07.470904 654880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 33311 <nil> <nil>}
I1008 14:24:07.470926 654880 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-439307-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-439307-m02/g' /etc/hosts;
else
echo '127.0.1.1 multinode-439307-m02' | sudo tee -a /etc/hosts;
fi
fi
I1008 14:24:07.619123 654880 main.go:141] libmachine: SSH cmd err, output: <nil>:
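
[annotation] The three SSH commands above set the container's hostname and pin it in /etc/hosts via the 127.0.1.1 convention. They run as the docker user with the per-machine id_rsa key against the ephemeral port resolved earlier. A sketch of that transport using golang.org/x/crypto/ssh (not minikube's actual sshutil, which also supports agent auth and retries; key path and port are copied from this log):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m02/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33311", cfg) // port from the log
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	// Same command the provisioner logged above.
	out, err := sess.CombinedOutput(`sudo hostname multinode-439307-m02 && echo "multinode-439307-m02" | sudo tee /etc/hostname`)
	fmt.Printf("out=%q err=%v\n", out, err)
}
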
I1008 14:24:07.619159 654880 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-513010/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-513010/.minikube}
I1008 14:24:07.619176 654880 ubuntu.go:190] setting up certificates
I1008 14:24:07.619189 654880 provision.go:84] configureAuth start
I1008 14:24:07.619267 654880 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-439307-m02
I1008 14:24:07.636645 654880 provision.go:143] copyHostCerts
I1008 14:24:07.636697 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem
I1008 14:24:07.636734 654880 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem, removing ...
I1008 14:24:07.636744 654880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem
I1008 14:24:07.636809 654880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-513010/.minikube/ca.pem (1078 bytes)
I1008 14:24:07.636900 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem
I1008 14:24:07.636921 654880 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem, removing ...
I1008 14:24:07.636925 654880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem
I1008 14:24:07.636953 654880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-513010/.minikube/cert.pem (1123 bytes)
I1008 14:24:07.637030 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem
I1008 14:24:07.637053 654880 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem, removing ...
I1008 14:24:07.637061 654880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem
I1008 14:24:07.637088 654880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-513010/.minikube/key.pem (1675 bytes)
I1008 14:24:07.637144 654880 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca-key.pem org=jenkins.multinode-439307-m02 san=[127.0.0.1 192.168.67.3 localhost minikube multinode-439307-m02]
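
[annotation] The server-cert step above issues a machine certificate signed by the minikube CA with the SAN list shown (loopback, the node's static IP, and its hostnames). An illustrative crypto/x509 sketch of that operation; a throwaway in-memory CA stands in for ca.pem/ca-key.pem, and the SANs and org are copied from the log line:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA (stand-in for the on-disk minikubeCA key pair).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	must(err)
	caCert, err := x509.ParseCertificate(caDER)
	must(err)

	// Server cert with the SANs logged above:
	// san=[127.0.0.1 192.168.67.3 localhost minikube multinode-439307-m02]
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-439307-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		DNSNames:     []string{"localhost", "minikube", "multinode-439307-m02"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.67.3")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
	must(err)
	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}
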
I1008 14:24:07.912616 654880 provision.go:177] copyRemoteCerts
I1008 14:24:07.912701 654880 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1008 14:24:07.912746 654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m02
I1008 14:24:07.930775 654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33311 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m02/id_rsa Username:docker}
I1008 14:24:08.036822 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1008 14:24:08.036899 654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1008 14:24:08.057016 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem -> /etc/docker/server.pem
I1008 14:24:08.057099 654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
I1008 14:24:08.075825 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1008 14:24:08.075887 654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1008 14:24:08.094562 654880 provision.go:87] duration metric: took 475.356058ms to configureAuth
I1008 14:24:08.094595 654880 ubuntu.go:206] setting minikube options for container-runtime
I1008 14:24:08.094805 654880 config.go:182] Loaded profile config "multinode-439307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1008 14:24:08.094818 654880 machine.go:96] duration metric: took 3.990634645s to provisionDockerMachine
I1008 14:24:08.094825 654880 client.go:171] duration metric: took 9.55949919s to LocalClient.Create
I1008 14:24:08.094846 654880 start.go:167] duration metric: took 9.559564892s to libmachine.API.Create "multinode-439307"
I1008 14:24:08.094856 654880 start.go:293] postStartSetup for "multinode-439307-m02" (driver="docker")
I1008 14:24:08.094864 654880 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1008 14:24:08.094910 654880 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1008 14:24:08.094953 654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m02
I1008 14:24:08.112924 654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33311 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m02/id_rsa Username:docker}
I1008 14:24:08.218693 654880 ssh_runner.go:195] Run: cat /etc/os-release
I1008 14:24:08.222553 654880 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1008 14:24:08.222590 654880 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1008 14:24:08.222601 654880 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-513010/.minikube/addons for local assets ...
I1008 14:24:08.222660 654880 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-513010/.minikube/files for local assets ...
I1008 14:24:08.222816 654880 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem -> 5167872.pem in /etc/ssl/certs
I1008 14:24:08.222833 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem -> /etc/ssl/certs/5167872.pem
I1008 14:24:08.222964 654880 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1008 14:24:08.231254 654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem --> /etc/ssl/certs/5167872.pem (1708 bytes)
I1008 14:24:08.252383 654880 start.go:296] duration metric: took 157.508647ms for postStartSetup
I1008 14:24:08.252769 654880 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-439307-m02
I1008 14:24:08.270607 654880 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/config.json ...
I1008 14:24:08.270881 654880 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1008 14:24:08.270929 654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m02
I1008 14:24:08.288967 654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33311 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m02/id_rsa Username:docker}
I1008 14:24:08.390387 654880 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1008 14:24:08.395431 654880 start.go:128] duration metric: took 9.862849739s to createHost
I1008 14:24:08.395464 654880 start.go:83] releasing machines lock for "multinode-439307-m02", held for 9.863003309s
I1008 14:24:08.395547 654880 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-439307-m02
I1008 14:24:08.415924 654880 out.go:179] * Found network options:
I1008 14:24:08.417255 654880 out.go:179] - NO_PROXY=192.168.67.2
W1008 14:24:08.418465 654880 proxy.go:120] fail to check proxy env: Error ip not in block
W1008 14:24:08.418511 654880 proxy.go:120] fail to check proxy env: Error ip not in block
I1008 14:24:08.418612 654880 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1008 14:24:08.418625 654880 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1008 14:24:08.418653 654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m02
I1008 14:24:08.418693 654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307-m02
I1008 14:24:08.439832 654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33311 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m02/id_rsa Username:docker}
I1008 14:24:08.440289 654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33311 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307-m02/id_rsa Username:docker}
W1008 14:24:08.596782 654880 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1008 14:24:08.596862 654880 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1008 14:24:08.623270 654880 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1008 14:24:08.623295 654880 start.go:495] detecting cgroup driver to use...
I1008 14:24:08.623333 654880 detect.go:190] detected "systemd" cgroup driver on host os
I1008 14:24:08.623386 654880 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1008 14:24:08.638627 654880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1008 14:24:08.651897 654880 docker.go:218] disabling cri-docker service (if available) ...
I1008 14:24:08.651966 654880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1008 14:24:08.670277 654880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1008 14:24:08.688725 654880 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1008 14:24:08.771633 654880 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1008 14:24:08.860938 654880 docker.go:234] disabling docker service ...
I1008 14:24:08.861030 654880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1008 14:24:08.880549 654880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1008 14:24:08.894395 654880 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1008 14:24:08.979782 654880 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1008 14:24:09.065757 654880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1008 14:24:09.079136 654880 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1008 14:24:09.095338 654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1008 14:24:09.107275 654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1008 14:24:09.117636 654880 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
I1008 14:24:09.117701 654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1008 14:24:09.127943 654880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1008 14:24:09.138714 654880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1008 14:24:09.148727 654880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1008 14:24:09.158882 654880 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1008 14:24:09.168295 654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1008 14:24:09.178665 654880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1008 14:24:09.188393 654880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1008 14:24:09.198424 654880 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1008 14:24:09.206454 654880 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1008 14:24:09.215144 654880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1008 14:24:09.294927 654880 ssh_runner.go:195] Run: sudo systemctl restart containerd
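
[annotation] The run of sed commands above rewires /etc/containerd/config.toml in place (pause image, SystemdCgroup=true to match the host's systemd cgroup driver, runc v2 shim, CNI conf dir, unprivileged ports) before the daemon-reload and restart. A small Go sketch of one of those edits done with regexp instead of sed, purely illustrative of the same line-oriented rewrite:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// Equivalent of the logged `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'`.
func main() {
	path := "/etc/containerd/config.toml" // path from the log
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// (?m) makes ^/$ match per line, preserving the original indentation
	// captured in group 1.
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
	if err := os.WriteFile(path, out, 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
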
I1008 14:24:09.407140 654880 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I1008 14:24:09.407220 654880 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1008 14:24:09.411681 654880 start.go:563] Will wait 60s for crictl version
I1008 14:24:09.411754 654880 ssh_runner.go:195] Run: which crictl
I1008 14:24:09.415949 654880 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1008 14:24:09.443331 654880 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.28
RuntimeApiVersion: v1
I1008 14:24:09.443406 654880 ssh_runner.go:195] Run: containerd --version
I1008 14:24:09.469419 654880 ssh_runner.go:195] Run: containerd --version
I1008 14:24:09.496238 654880 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
I1008 14:24:09.497613 654880 out.go:179] - env NO_PROXY=192.168.67.2
I1008 14:24:09.498926 654880 cli_runner.go:164] Run: docker network inspect multinode-439307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1008 14:24:09.517143 654880 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
I1008 14:24:09.521732 654880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1008 14:24:09.533155 654880 mustload.go:65] Loading cluster: multinode-439307
I1008 14:24:09.533379 654880 config.go:182] Loaded profile config "multinode-439307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1008 14:24:09.533664 654880 cli_runner.go:164] Run: docker container inspect multinode-439307 --format={{.State.Status}}
I1008 14:24:09.552397 654880 host.go:66] Checking if "multinode-439307" exists ...
I1008 14:24:09.552676 654880 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307 for IP: 192.168.67.3
I1008 14:24:09.552690 654880 certs.go:195] generating shared ca certs ...
I1008 14:24:09.552707 654880 certs.go:227] acquiring lock for ca certs: {Name:mk57aa9b2383fcc0908491da1ce926c707ff69a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1008 14:24:09.552825 654880 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.key
I1008 14:24:09.552870 654880 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.key
I1008 14:24:09.552884 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1008 14:24:09.552899 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1008 14:24:09.552911 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1008 14:24:09.552921 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1008 14:24:09.553005 654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787.pem (1338 bytes)
W1008 14:24:09.553040 654880 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787_empty.pem, impossibly tiny 0 bytes
I1008 14:24:09.553048 654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca-key.pem (1675 bytes)
I1008 14:24:09.553076 654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/ca.pem (1078 bytes)
I1008 14:24:09.553109 654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/cert.pem (1123 bytes)
I1008 14:24:09.553130 654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/key.pem (1675 bytes)
I1008 14:24:09.553168 654880 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem (1708 bytes)
I1008 14:24:09.553193 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787.pem -> /usr/share/ca-certificates/516787.pem
I1008 14:24:09.553207 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem -> /usr/share/ca-certificates/5167872.pem
I1008 14:24:09.553222 654880 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1008 14:24:09.553242 654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1008 14:24:09.573504 654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1008 14:24:09.592232 654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1008 14:24:09.610884 654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1008 14:24:09.630003 654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/certs/516787.pem --> /usr/share/ca-certificates/516787.pem (1338 bytes)
I1008 14:24:09.653800 654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/files/etc/ssl/certs/5167872.pem --> /usr/share/ca-certificates/5167872.pem (1708 bytes)
I1008 14:24:09.675803 654880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1008 14:24:09.695568 654880 ssh_runner.go:195] Run: openssl version
I1008 14:24:09.702733 654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516787.pem && ln -fs /usr/share/ca-certificates/516787.pem /etc/ssl/certs/516787.pem"
I1008 14:24:09.712131 654880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516787.pem
I1008 14:24:09.716287 654880 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 8 14:09 /usr/share/ca-certificates/516787.pem
I1008 14:24:09.716357 654880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516787.pem
I1008 14:24:09.752537 654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516787.pem /etc/ssl/certs/51391683.0"
I1008 14:24:09.762173 654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5167872.pem && ln -fs /usr/share/ca-certificates/5167872.pem /etc/ssl/certs/5167872.pem"
I1008 14:24:09.772303 654880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5167872.pem
I1008 14:24:09.776649 654880 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 8 14:09 /usr/share/ca-certificates/5167872.pem
I1008 14:24:09.776712 654880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5167872.pem
I1008 14:24:09.812619 654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5167872.pem /etc/ssl/certs/3ec20f2e.0"
I1008 14:24:09.823098 654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1008 14:24:09.832190 654880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1008 14:24:09.836566 654880 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 8 14:03 /usr/share/ca-certificates/minikubeCA.pem
I1008 14:24:09.836631 654880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1008 14:24:09.871385 654880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
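
[annotation] The openssl/ln pairs above implement the c_rehash convention: `openssl x509 -hash -noout` prints the certificate's subject-name hash (e.g. b5213941 for minikubeCA), and a symlink /etc/ssl/certs/<hash>.0 lets OpenSSL-based clients locate the CA by that hash. A Go mirror of one iteration, shelling out for the hash just as the log does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out)) // "b5213941" in this run
	link := "/etc/ssl/certs/" + hash + ".0"
	if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked", link)
}
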
I1008 14:24:09.881326 654880 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1008 14:24:09.885609 654880 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1008 14:24:09.885678 654880 kubeadm.go:934] updating node {m02 192.168.67.3 8443 v1.34.1 containerd false true} ...
I1008 14:24:09.885785 654880 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-439307-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.3
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}

I1008 14:24:09.885854 654880 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1008 14:24:09.894180 654880 binaries.go:44] Found k8s binaries, skipping transfer
I1008 14:24:09.894257 654880 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
I1008 14:24:09.902472 654880 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
I1008 14:24:09.916134 654880 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1008 14:24:09.931662 654880 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I1008 14:24:09.935628 654880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1008 14:24:09.946151 654880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1008 14:24:10.025257 654880 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1008 14:24:10.052607 654880 host.go:66] Checking if "multinode-439307" exists ...
I1008 14:24:10.052868 654880 start.go:317] joinCluster: &{Name:multinode-439307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:multinode-439307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1008 14:24:10.052965 654880 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm token create --print-join-command --ttl=0"
I1008 14:24:10.053040 654880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-439307
I1008 14:24:10.072940 654880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/21681-513010/.minikube/machines/multinode-439307/id_rsa Username:docker}
I1008 14:24:10.226647 654880 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:false Worker:true}
I1008 14:24:10.226740 654880 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4ut921.623axv37vw0z44c2 --discovery-token-ca-cert-hash sha256:29d08006b7495c78b5f27aaa9701a82f373226e18456de05d156e89bccfbd06f --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=multinode-439307-m02"
I1008 14:24:11.499926 654880 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4ut921.623axv37vw0z44c2 --discovery-token-ca-cert-hash sha256:29d08006b7495c78b5f27aaa9701a82f373226e18456de05d156e89bccfbd06f --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=multinode-439307-m02": (1.273161843s)
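
[annotation] The kubeadm join above authenticates in both directions: the bootstrap token proves the node to the cluster, and --discovery-token-ca-cert-hash lets the node verify it reached the right cluster. That hash is "sha256:" plus the hex SHA-256 of the CA certificate's DER-encoded Subject Public Key Info (an RFC 7469-style public-key pin). A short Go sketch recomputing it from the ca.crt path used earlier in this log:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the raw SubjectPublicKeyInfo, matching kubeadm's pin format.
	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
}
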
I1008 14:24:11.500025 654880 ssh_runner.go:195] Run: sudo /bin/bash -c "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"
I1008 14:24:11.684824 654880 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-439307-m02 minikube.k8s.io/updated_at=2025_10_08T14_24_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555 minikube.k8s.io/name=multinode-439307 minikube.k8s.io/primary=false
I1008 14:24:11.757264 654880 start.go:319] duration metric: took 1.704388689s to joinCluster
I1008 14:24:11.757362 654880 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:false Worker:true}
I1008 14:24:11.757686 654880 config.go:182] Loaded profile config "multinode-439307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1008 14:24:11.759939 654880 out.go:179] * Verifying Kubernetes components...
I1008 14:24:11.761383 654880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1008 14:24:11.853236 654880 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1008 14:24:11.868476 654880 kapi.go:59] client config for multinode-439307: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.key", CAFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1008 14:24:11.868890 654880 node_ready.go:35] waiting up to 6m0s for node "multinode-439307-m02" to be "Ready" ...
W1008 14:24:13.872620 654880 node_ready.go:57] node "multinode-439307-m02" has "Ready":"False" status (will retry)
W1008 14:24:16.372273 654880 node_ready.go:57] node "multinode-439307-m02" has "Ready":"False" status (will retry)
W1008 14:24:18.372477 654880 node_ready.go:57] node "multinode-439307-m02" has "Ready":"False" status (will retry)
W1008 14:24:20.372540 654880 node_ready.go:57] node "multinode-439307-m02" has "Ready":"False" status (will retry)
W1008 14:24:22.872160 654880 node_ready.go:57] node "multinode-439307-m02" has "Ready":"False" status (will retry)
I1008 14:24:24.371830 654880 node_ready.go:49] node "multinode-439307-m02" is "Ready"
I1008 14:24:24.371861 654880 node_ready.go:38] duration metric: took 12.502945701s for node "multinode-439307-m02" to be "Ready" ...
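
[annotation] The node_ready.go wait above is a simple poll-until-Ready loop against the API server, retrying every couple of seconds within a 6m budget. A client-go sketch of the same shape; the kubeconfig source is an assumption (minikube builds its rest.Config in-process, as the kapi.go dump above shows):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig source
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same budget as the log
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-439307-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second) // roughly the retry cadence visible above
	}
	fmt.Println("timed out waiting for Ready")
}
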
I1008 14:24:24.371877 654880 system_svc.go:44] waiting for kubelet service to be running ....
I1008 14:24:24.371923 654880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1008 14:24:24.385754 654880 system_svc.go:56] duration metric: took 13.866509ms WaitForService to wait for kubelet
I1008 14:24:24.385788 654880 kubeadm.go:586] duration metric: took 12.628395274s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1008 14:24:24.385819 654880 node_conditions.go:102] verifying NodePressure condition ...
I1008 14:24:24.388606 654880 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I1008 14:24:24.388634 654880 node_conditions.go:123] node cpu capacity is 8
I1008 14:24:24.388647 654880 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I1008 14:24:24.388663 654880 node_conditions.go:123] node cpu capacity is 8
I1008 14:24:24.388668 654880 node_conditions.go:105] duration metric: took 2.843574ms to run NodePressure ...
I1008 14:24:24.388679 654880 start.go:241] waiting for startup goroutines ...
I1008 14:24:24.388715 654880 start.go:255] writing updated cluster config ...
I1008 14:24:24.389017 654880 ssh_runner.go:195] Run: rm -f paused
I1008 14:24:24.393052 654880 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1008 14:24:24.393669 654880 kapi.go:59] client config for multinode-439307: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/profiles/multinode-439307/client.key", CAFile:"/home/jenkins/minikube-integration/21681-513010/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1008 14:24:24.396852 654880 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-llvkc" in "kube-system" namespace to be "Ready" or be gone ...
I1008 14:24:24.401377 654880 pod_ready.go:94] pod "coredns-66bc5c9577-llvkc" is "Ready"
I1008 14:24:24.401408 654880 pod_ready.go:86] duration metric: took 4.533488ms for pod "coredns-66bc5c9577-llvkc" in "kube-system" namespace to be "Ready" or be gone ...
I1008 14:24:24.403808 654880 pod_ready.go:83] waiting for pod "etcd-multinode-439307" in "kube-system" namespace to be "Ready" or be gone ...
I1008 14:24:24.407791 654880 pod_ready.go:94] pod "etcd-multinode-439307" is "Ready"
I1008 14:24:24.407814 654880 pod_ready.go:86] duration metric: took 3.984727ms for pod "etcd-multinode-439307" in "kube-system" namespace to be "Ready" or be gone ...
I1008 14:24:24.410014 654880 pod_ready.go:83] waiting for pod "kube-apiserver-multinode-439307" in "kube-system" namespace to be "Ready" or be gone ...
I1008 14:24:24.414225 654880 pod_ready.go:94] pod "kube-apiserver-multinode-439307" is "Ready"
I1008 14:24:24.414249 654880 pod_ready.go:86] duration metric: took 4.210762ms for pod "kube-apiserver-multinode-439307" in "kube-system" namespace to be "Ready" or be gone ...
I1008 14:24:24.416187 654880 pod_ready.go:83] waiting for pod "kube-controller-manager-multinode-439307" in "kube-system" namespace to be "Ready" or be gone ...
I1008 14:24:24.594705 654880 request.go:683] "Waited before sending request" delay="178.359169ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-439307"
I1008 14:24:24.795096 654880 request.go:683] "Waited before sending request" delay="197.360136ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/nodes/multinode-439307"
I1008 14:24:24.797827 654880 pod_ready.go:94] pod "kube-controller-manager-multinode-439307" is "Ready"
I1008 14:24:24.797865 654880 pod_ready.go:86] duration metric: took 381.656304ms for pod "kube-controller-manager-multinode-439307" in "kube-system" namespace to be "Ready" or be gone ...
I1008 14:24:24.994392 654880 request.go:683] "Waited before sending request" delay="196.347363ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
I1008 14:24:24.998079 654880 pod_ready.go:83] waiting for pod "kube-proxy-djg8q" in "kube-system" namespace to be "Ready" or be gone ...
I1008 14:24:25.194583 654880 request.go:683] "Waited before sending request" delay="196.367193ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-djg8q"
I1008 14:24:25.395013 654880 request.go:683] "Waited before sending request" delay="197.398426ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/nodes/multinode-439307-m02"
I1008 14:24:25.397572 654880 pod_ready.go:94] pod "kube-proxy-djg8q" is "Ready"
I1008 14:24:25.397604 654880 pod_ready.go:86] duration metric: took 399.496213ms for pod "kube-proxy-djg8q" in "kube-system" namespace to be "Ready" or be gone ...
I1008 14:24:25.397618 654880 pod_ready.go:83] waiting for pod "kube-proxy-sjzfx" in "kube-system" namespace to be "Ready" or be gone ...
I1008 14:24:25.595137 654880 request.go:683] "Waited before sending request" delay="197.409064ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sjzfx"
I1008 14:24:25.794319 654880 request.go:683] "Waited before sending request" delay="196.312301ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/nodes/multinode-439307"
I1008 14:24:25.797345 654880 pod_ready.go:94] pod "kube-proxy-sjzfx" is "Ready"
I1008 14:24:25.797374 654880 pod_ready.go:86] duration metric: took 399.749677ms for pod "kube-proxy-sjzfx" in "kube-system" namespace to be "Ready" or be gone ...
I1008 14:24:25.994958 654880 request.go:683] "Waited before sending request" delay="197.435121ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
I1008 14:24:25.997593 654880 pod_ready.go:83] waiting for pod "kube-scheduler-multinode-439307" in "kube-system" namespace to be "Ready" or be gone ...
I1008 14:24:26.195068 654880 request.go:683] "Waited before sending request" delay="197.36444ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-439307"
I1008 14:24:26.395200 654880 request.go:683] "Waited before sending request" delay="197.229852ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.67.2:8443/api/v1/nodes/multinode-439307"
I1008 14:24:26.397809 654880 pod_ready.go:94] pod "kube-scheduler-multinode-439307" is "Ready"
I1008 14:24:26.397834 654880 pod_ready.go:86] duration metric: took 400.216835ms for pod "kube-scheduler-multinode-439307" in "kube-system" namespace to be "Ready" or be gone ...
I1008 14:24:26.397846 654880 pod_ready.go:40] duration metric: took 2.004759901s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
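
[annotation] The extra wait above cycles through one label selector per control-plane component; the client-side-throttling "Waited before sending request" lines are the resulting GET/LIST traffic. A compact client-go sketch of one pass over those selectors; as before, the kubeconfig source is an assumption:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Report whether every kube-system pod matching a selector is Ready,
// mirroring one iteration of the pod_ready.go wait above.
func allReady(client *kubernetes.Clientset, selector string) (bool, error) {
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig source
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The six selectors from the log line above.
	for _, sel := range []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"} {
		ok, err := allReady(client, sel)
		fmt.Println(sel, "ready:", ok, "err:", err)
	}
}
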
I1008 14:24:26.444090 654880 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
I1008 14:24:26.446553 654880 out.go:179] * Done! kubectl is now configured to use "multinode-439307" cluster and "default" namespace by default
==> container status <==
CONTAINER       IMAGE           CREATED              STATE     NAME                      ATTEMPT   POD ID          POD                                        NAMESPACE
3c70355249fcd   8c811b4aec35f   13 seconds ago       Running   busybox                   0         f6bf249387eaa   busybox-7b57f96db7-n6rvn                   default
4ea1a37f26c9f   52546a367cc9e   44 seconds ago       Running   coredns                   0         d809f9cba67fd   coredns-66bc5c9577-llvkc                   kube-system
1ab8655881512   6e38f40d628db   44 seconds ago       Running   storage-provisioner       0         e2da4323cdf8d   storage-provisioner                        kube-system
eb44427aa7b68   409467f978b4a   55 seconds ago       Running   kindnet-cni               0         470cdd7a7920c   kindnet-l6pqj                              kube-system
70d5305f9c0f1   fc25172553d79   55 seconds ago       Running   kube-proxy                0         734361aeebab7   kube-proxy-sjzfx                           kube-system
c5ef7b607ae59   5f1f5298c888d   About a minute ago   Running   etcd                      0         627ec39143d66   etcd-multinode-439307                      kube-system
7bc5378271f6e   c80c8dbafe7dd   About a minute ago   Running   kube-controller-manager   0         887929a790edf   kube-controller-manager-multinode-439307   kube-system
4023d943508d7   7dd6aaa1717ab   About a minute ago   Running   kube-scheduler            0         9fb7378888c6d   kube-scheduler-multinode-439307            kube-system
a75297140a138   c3994bc696102   About a minute ago   Running   kube-apiserver            0         db9ed929a6258   kube-apiserver-multinode-439307            kube-system
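All nine containers on the primary node are Running with attempt 0, i.e. nothing has crash-looped. The table looks like crictl output; assuming crictl is present in the node image (it is in stock minikube node images), a comparable view can be pulled by hand:
$ out/minikube-linux-amd64 -p multinode-439307 ssh -- sudo crictl ps -a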
==> containerd <==
Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.109269861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-llvkc,Uid:a445b5ef-8d30-4b7c-a40f-77f2a9072e7f,Namespace:kube-system,Attempt:0,}"
Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.111413366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:e1d410c3-de2a-4e2a-88c1-93970ce8b254,Namespace:kube-system,Attempt:0,}"
Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.205307729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:e1d410c3-de2a-4e2a-88c1-93970ce8b254,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2da4323cdf8d7d7b3931ff8c336a482dc0cc57329950586094267711d1b74ae\""
Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.210861374Z" level=info msg="CreateContainer within sandbox \"e2da4323cdf8d7d7b3931ff8c336a482dc0cc57329950586094267711d1b74ae\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.212899403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-llvkc,Uid:a445b5ef-8d30-4b7c-a40f-77f2a9072e7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d809f9cba67fd85761b8285b149afbb37772e4b710c2445e0b7d5cf977684afa\""
Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.217454502Z" level=info msg="CreateContainer within sandbox \"d809f9cba67fd85761b8285b149afbb37772e4b710c2445e0b7d5cf977684afa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.224178373Z" level=info msg="CreateContainer within sandbox \"e2da4323cdf8d7d7b3931ff8c336a482dc0cc57329950586094267711d1b74ae\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"1ab8655881512f6c4b619c636eee1de03f57f734cce6fdc4604bae23d671ab17\""
Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.224770434Z" level=info msg="StartContainer for \"1ab8655881512f6c4b619c636eee1de03f57f734cce6fdc4604bae23d671ab17\""
Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.229804859Z" level=info msg="CreateContainer within sandbox \"d809f9cba67fd85761b8285b149afbb37772e4b710c2445e0b7d5cf977684afa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4ea1a37f26c9f5494351b59d206a47409262ce838a0524bce03e8da1debb8116\""
Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.230405753Z" level=info msg="StartContainer for \"4ea1a37f26c9f5494351b59d206a47409262ce838a0524bce03e8da1debb8116\""
Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.288088455Z" level=info msg="StartContainer for \"1ab8655881512f6c4b619c636eee1de03f57f734cce6fdc4604bae23d671ab17\" returns successfully"
Oct 08 14:23:58 multinode-439307 containerd[665]: time="2025-10-08T14:23:58.302363146Z" level=info msg="StartContainer for \"4ea1a37f26c9f5494351b59d206a47409262ce838a0524bce03e8da1debb8116\" returns successfully"
Oct 08 14:24:27 multinode-439307 containerd[665]: time="2025-10-08T14:24:27.431929318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-n6rvn,Uid:48d40e87-f7eb-4886-84ea-0d1c344bcef4,Namespace:default,Attempt:0,}"
Oct 08 14:24:27 multinode-439307 containerd[665]: time="2025-10-08T14:24:27.524294263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7b57f96db7-n6rvn,Uid:48d40e87-f7eb-4886-84ea-0d1c344bcef4,Namespace:default,Attempt:0,} returns sandbox id \"f6bf249387eaaf48dfa1cfac0cb2eb3646b9e2075be5c9397d97b91ceb9f7c69\""
Oct 08 14:24:27 multinode-439307 containerd[665]: time="2025-10-08T14:24:27.526837991Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.786377714Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.787080125Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=727667"
Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.788316089Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.790697278Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.791452472Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 1.264570757s"
Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.791498077Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.798301525Z" level=info msg="CreateContainer within sandbox \"f6bf249387eaaf48dfa1cfac0cb2eb3646b9e2075be5c9397d97b91ceb9f7c69\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.808156445Z" level=info msg="CreateContainer within sandbox \"f6bf249387eaaf48dfa1cfac0cb2eb3646b9e2075be5c9397d97b91ceb9f7c69\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"3c70355249fcd2e6ee6d118b75c6bc3546058b18b6aeb6dce0b1b702d096ac47\""
Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.809029634Z" level=info msg="StartContainer for \"3c70355249fcd2e6ee6d118b75c6bc3546058b18b6aeb6dce0b1b702d096ac47\""
Oct 08 14:24:28 multinode-439307 containerd[665]: time="2025-10-08T14:24:28.869302769Z" level=info msg="StartContainer for \"3c70355249fcd2e6ee6d118b75c6bc3546058b18b6aeb6dce0b1b702d096ac47\" returns successfully"
==> coredns [4ea1a37f26c9f5494351b59d206a47409262ce838a0524bce03e8da1debb8116] <==
[INFO] 10.244.1.2:46440 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125237s
[INFO] 10.244.0.3:32802 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000173381s
[INFO] 10.244.0.3:52099 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000116887s
[INFO] 10.244.0.3:55009 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158031s
[INFO] 10.244.0.3:52826 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00015101s
[INFO] 10.244.0.3:36042 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00007137s
[INFO] 10.244.0.3:51029 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001339s
[INFO] 10.244.0.3:58795 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130735s
[INFO] 10.244.0.3:47967 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075412s
[INFO] 10.244.1.2:39882 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00025259s
[INFO] 10.244.1.2:52814 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000218308s
[INFO] 10.244.1.2:37521 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000148655s
[INFO] 10.244.1.2:42486 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011547s
[INFO] 10.244.0.3:44143 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169188s
[INFO] 10.244.0.3:48380 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000235742s
[INFO] 10.244.0.3:43850 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000155536s
[INFO] 10.244.0.3:49494 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093677s
[INFO] 10.244.1.2:59241 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198155s
[INFO] 10.244.1.2:55245 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000162203s
[INFO] 10.244.1.2:33545 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000110828s
[INFO] 10.244.1.2:36918 - 5 "PTR IN 1.67.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139385s
[INFO] 10.244.0.3:59030 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160386s
[INFO] 10.244.0.3:44681 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000139946s
[INFO] 10.244.0.3:37620 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000098444s
[INFO] 10.244.0.3:59659 - 5 "PTR IN 1.67.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000066524s
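The lookups above originate from the busybox test pods (10.244.0.3 sits in the primary node's 10.244.0.0/24 PodCIDR, 10.244.1.2 in m02's 10.244.1.0/24) resolving kubernetes.default and host.minikube.internal. One can be replayed manually, e.g.:
$ kubectl --context multinode-439307 exec busybox-7b57f96db7-n6rvn -- nslookup host.minikube.internal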
==> describe nodes <==
Name: multinode-439307
Roles: control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=multinode-439307
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555
                    minikube.k8s.io/name=multinode-439307
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2025_10_08T14_23_42_0700
                    minikube.k8s.io/version=v1.37.0
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 08 Oct 2025 14:23:38 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-439307
AcquireTime: <unset>
RenewTime: Wed, 08 Oct 2025 14:24:32 +0000
Conditions:
Type             Status   LastHeartbeatTime                 LastTransitionTime                Reason                       Message
----             ------   -----------------                 ------------------                ------                       -------
MemoryPressure   False    Wed, 08 Oct 2025 14:23:57 +0000   Wed, 08 Oct 2025 14:23:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
DiskPressure     False    Wed, 08 Oct 2025 14:23:57 +0000   Wed, 08 Oct 2025 14:23:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
PIDPressure      False    Wed, 08 Oct 2025 14:23:57 +0000   Wed, 08 Oct 2025 14:23:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
Ready            True     Wed, 08 Oct 2025 14:23:57 +0000   Wed, 08 Oct 2025 14:23:57 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
InternalIP: 192.168.67.2
Hostname: multinode-439307
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32863448Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32863448Ki
pods: 110
System Info:
Machine ID: 56d3e6862fcc45b48f25bde7f561b1d7
System UUID: 3ecc1d83-e69e-4927-aebb-a9dcae9475e4
Boot ID: 5fdbec2a-e754-47ce-9745-1553567d6c31
Kernel Version: 6.8.0-1041-gcp
OS Image: Debian GNU/Linux 12 (bookworm)
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.28
Kubelet Version: v1.34.1
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (9 in total)
Namespace     Name                                       CPU Requests   CPU Limits   Memory Requests   Memory Limits   Age
---------     ----                                       ------------   ----------   ---------------   -------------   ---
default       busybox-7b57f96db7-n6rvn                   0 (0%)         0 (0%)       0 (0%)            0 (0%)          15s
kube-system   coredns-66bc5c9577-llvkc                   100m (1%)      0 (0%)       70Mi (0%)         170Mi (0%)      56s
kube-system   etcd-multinode-439307                      100m (1%)      0 (0%)       100Mi (0%)        0 (0%)          61s
kube-system   kindnet-l6pqj                              100m (1%)      100m (1%)    50Mi (0%)         50Mi (0%)       56s
kube-system   kube-apiserver-multinode-439307            250m (3%)      0 (0%)       0 (0%)            0 (0%)          61s
kube-system   kube-controller-manager-multinode-439307   200m (2%)      0 (0%)       0 (0%)            0 (0%)          61s
kube-system   kube-proxy-sjzfx                           0 (0%)         0 (0%)       0 (0%)            0 (0%)          56s
kube-system   kube-scheduler-multinode-439307            100m (1%)      0 (0%)       0 (0%)            0 (0%)          61s
kube-system   storage-provisioner                        0 (0%)         0 (0%)       0 (0%)            0 (0%)          55s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource            Requests     Limits
--------            --------     ------
cpu                 850m (10%)   100m (1%)
memory              220Mi (0%)   220Mi (0%)
ephemeral-storage   0 (0%)       0 (0%)
hugepages-1Gi       0 (0%)       0 (0%)
hugepages-2Mi       0 (0%)       0 (0%)
Events:
Type    Reason                   Age                From             Message
----    ------                   ----               ----             -------
Normal  Starting                 55s                kube-proxy
Normal  Starting                 66s                kubelet          Starting kubelet.
Normal  NodeHasSufficientMemory  66s (x8 over 66s)  kubelet          Node multinode-439307 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    66s (x8 over 66s)  kubelet          Node multinode-439307 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     66s (x7 over 66s)  kubelet          Node multinode-439307 status is now: NodeHasSufficientPID
Normal  NodeAllocatableEnforced  66s                kubelet          Updated Node Allocatable limit across pods
Normal  Starting                 61s                kubelet          Starting kubelet.
Normal  NodeAllocatableEnforced  61s                kubelet          Updated Node Allocatable limit across pods
Normal  NodeHasSufficientMemory  61s                kubelet          Node multinode-439307 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    61s                kubelet          Node multinode-439307 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     61s                kubelet          Node multinode-439307 status is now: NodeHasSufficientPID
Normal  RegisteredNode           57s                node-controller  Node multinode-439307 event: Registered Node multinode-439307 in Controller
Normal  NodeReady                45s                kubelet          Node multinode-439307 status is now: NodeReady
Name: multinode-439307-m02
Roles: <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=multinode-439307-m02
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555
                    minikube.k8s.io/name=multinode-439307
                    minikube.k8s.io/primary=false
                    minikube.k8s.io/updated_at=2025_10_08T14_24_11_0700
                    minikube.k8s.io/version=v1.37.0
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 08 Oct 2025 14:24:11 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-439307-m02
AcquireTime: <unset>
RenewTime: Wed, 08 Oct 2025 14:24:42 +0000
Conditions:
Type             Status   LastHeartbeatTime                 LastTransitionTime                Reason                       Message
----             ------   -----------------                 ------------------                ------                       -------
MemoryPressure   False    Wed, 08 Oct 2025 14:24:41 +0000   Wed, 08 Oct 2025 14:24:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
DiskPressure     False    Wed, 08 Oct 2025 14:24:41 +0000   Wed, 08 Oct 2025 14:24:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
PIDPressure      False    Wed, 08 Oct 2025 14:24:41 +0000   Wed, 08 Oct 2025 14:24:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
Ready            True     Wed, 08 Oct 2025 14:24:41 +0000   Wed, 08 Oct 2025 14:24:24 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
InternalIP: 192.168.67.3
Hostname: multinode-439307-m02
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32863448Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32863448Ki
pods: 110
System Info:
Machine ID: 8b74ec156e614a3fac7c415130ea0397
System UUID: ab0bc412-83f7-4153-b57d-32510d60dd56
Boot ID: 5fdbec2a-e754-47ce-9745-1553567d6c31
Kernel Version: 6.8.0-1041-gcp
OS Image: Debian GNU/Linux 12 (bookworm)
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.28
Kubelet Version: v1.34.1
Kube-Proxy Version:
PodCIDR: 10.244.1.0/24
PodCIDRs: 10.244.1.0/24
Non-terminated Pods: (3 in total)
Namespace     Name                       CPU Requests   CPU Limits   Memory Requests   Memory Limits   Age
---------     ----                       ------------   ----------   ---------------   -------------   ---
default       busybox-7b57f96db7-9qspn   0 (0%)         0 (0%)       0 (0%)            0 (0%)          15s
kube-system   kindnet-wch5j              100m (1%)      100m (1%)    50Mi (0%)         50Mi (0%)       31s
kube-system   kube-proxy-djg8q           0 (0%)         0 (0%)       0 (0%)            0 (0%)          31s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource            Requests    Limits
--------            --------    ------
cpu                 100m (1%)   100m (1%)
memory              50Mi (0%)   50Mi (0%)
ephemeral-storage   0 (0%)      0 (0%)
hugepages-1Gi       0 (0%)      0 (0%)
hugepages-2Mi       0 (0%)      0 (0%)
Events:
Type    Reason                   Age                From             Message
----    ------                   ----               ----             -------
Normal  Starting                 28s                kube-proxy
Normal  NodeHasSufficientMemory  31s (x3 over 31s)  kubelet          Node multinode-439307-m02 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    31s (x3 over 31s)  kubelet          Node multinode-439307-m02 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     31s (x3 over 31s)  kubelet          Node multinode-439307-m02 status is now: NodeHasSufficientPID
Normal  NodeAllocatableEnforced  31s                kubelet          Updated Node Allocatable limit across pods
Normal  RegisteredNode           27s                node-controller  Node multinode-439307-m02 event: Registered Node multinode-439307-m02 in Controller
Normal  NodeReady                18s                kubelet          Node multinode-439307-m02 status is now: NodeReady
Name: multinode-439307-m03
Roles: <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=multinode-439307-m03
                    kubernetes.io/os=linux
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 08 Oct 2025 14:24:41 +0000
Taints: node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease: Failed to get lease: leases.coordination.k8s.io "multinode-439307-m03" not found
Conditions:
Type             Status   LastHeartbeatTime                 LastTransitionTime                Reason                       Message
----             ------   -----------------                 ------------------                ------                       -------
MemoryPressure   False    Wed, 08 Oct 2025 14:24:41 +0000   Wed, 08 Oct 2025 14:24:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
DiskPressure     False    Wed, 08 Oct 2025 14:24:41 +0000   Wed, 08 Oct 2025 14:24:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
PIDPressure      False    Wed, 08 Oct 2025 14:24:41 +0000   Wed, 08 Oct 2025 14:24:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
Ready            False    Wed, 08 Oct 2025 14:24:41 +0000   Wed, 08 Oct 2025 14:24:41 +0000   KubeletNotReady              [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, CSINode is not yet initialized]
Addresses:
InternalIP: 192.168.67.4
Hostname: multinode-439307-m03
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32863448Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32863448Ki
pods: 110
System Info:
Machine ID: cb5019628fa5415a9a6de65b61b0aa10
System UUID: 4c1a693e-f511-45ca-9c03-2a547007f3cb
Boot ID: 5fdbec2a-e754-47ce-9745-1553567d6c31
Kernel Version: 6.8.0-1041-gcp
OS Image: Debian GNU/Linux 12 (bookworm)
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.28
Kubelet Version: v1.34.1
Kube-Proxy Version:
PodCIDR: 10.244.2.0/24
PodCIDRs: 10.244.2.0/24
Non-terminated Pods: (2 in total)
Namespace     Name               CPU Requests   CPU Limits   Memory Requests   Memory Limits   Age
---------     ----               ------------   ----------   ---------------   -------------   ---
kube-system   kindnet-58vm5      100m (1%)      100m (1%)    50Mi (0%)         50Mi (0%)       1s
kube-system   kube-proxy-fs89g   0 (0%)         0 (0%)       0 (0%)            0 (0%)          1s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource            Requests    Limits
--------            --------    ------
cpu                 100m (1%)   100m (1%)
memory              50Mi (0%)   50Mi (0%)
ephemeral-storage   0 (0%)      0 (0%)
hugepages-1Gi       0 (0%)      0 (0%)
hugepages-2Mi       0 (0%)      0 (0%)
Events:
Type    Reason                   Age              From     Message
----    ------                   ----             ----     -------
Normal  NodeHasSufficientMemory  1s (x3 over 1s)  kubelet  Node multinode-439307-m03 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    1s (x3 over 1s)  kubelet  Node multinode-439307-m03 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     1s (x3 over 1s)  kubelet  Node multinode-439307-m03 status is now: NodeHasSufficientPID
Normal  NodeAllocatableEnforced  1s               kubelet  Updated Node Allocatable limit across pods
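m03 has only just registered: it still carries the node.kubernetes.io/not-ready:NoSchedule taint, its kubelet reports KubeletNotReady because the CNI plugin is not yet initialized, and its lease does not exist yet. That state normally clears within seconds once kindnet starts on the node; it can be watched with:
$ kubectl --context multinode-439307 get nodes -w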
==> dmesg <==
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 26 b3 37 bf 19 08 06
[ +0.000410] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 1c 28 4b 91 c9 08 06
[Oct 8 13:59] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 16 3f fe bd b6 08 06
[ +0.044604] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff ea 40 7d d0 6d a6 08 06
[ +10.339808] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff f2 86 26 6c 97 dc 08 06
[ +2.975774] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff 2a 61 e9 d6 10 e3 08 06
[ +0.101555] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff ea fa 29 51 08 ac 08 06
[ +30.965246] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 37 46 57 22 c1 08 06
[ +0.000341] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff ea 40 7d d0 6d a6 08 06
[Oct 8 14:00] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 9c 9c 72 fb 11 08 06
[ +0.000628] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff ea fa 29 51 08 ac 08 06
[ +2.730130] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 a4 4e 39 b9 db 08 06
[ +0.000456] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 86 26 6c 97 dc 08 06
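The recurring "martian source" lines are the kernel flagging packets whose source address looks wrong for eth0; with pod CIDRs (10.244.x.y) riding over a Docker bridge this is typically benign noise rather than a fault. Whether martian logging is enabled on the node can be checked with:
$ out/minikube-linux-amd64 -p multinode-439307 ssh -- sudo sysctl net.ipv4.conf.all.log_martians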
==> etcd [c5ef7b607ae59f8f6aeebf4ab11b5560d14e184780133f6a6973d2dc59d69c2c] <==
{"level":"warn","ts":"2025-10-08T14:23:38.147870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55198","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-08T14:23:38.154263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55206","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-08T14:23:38.163022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55216","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-08T14:23:38.169346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55260","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-08T14:23:38.175903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55274","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-08T14:23:38.182506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55280","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-08T14:23:38.188820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55292","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-08T14:23:38.195857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55314","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-08T14:23:38.202025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55332","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-08T14:23:38.208239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55358","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-08T14:23:38.221089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55382","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-08T14:23:38.228181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55400","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-08T14:23:38.235952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55416","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-08T14:23:38.249350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55446","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-08T14:23:38.257305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55462","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-08T14:23:38.263748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55474","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-08T14:23:38.269837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55488","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-08T14:23:38.276382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55500","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-08T14:23:38.282566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55518","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-08T14:23:38.288832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55528","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-08T14:23:38.302605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55540","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-08T14:23:38.308859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55566","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-08T14:23:38.315043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55586","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-08T14:23:38.361302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55594","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-08T14:24:35.554339Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.090033ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2289968003519192253 > lease_revoke:<id:1fc799c434c59c06>","response":"size:29"}
==> kernel <==
14:24:42 up 2:07, 0 user, load average: 1.10, 1.50, 1.84
Linux multinode-439307 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
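These three lines pair host uptime/load with the node's kernel and OS image; roughly the same data, gathered by hand (the command grouping is illustrative):
$ out/minikube-linux-amd64 -p multinode-439307 ssh -- 'uptime; uname -a; grep PRETTY_NAME /etc/os-release'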
==> kindnet [eb44427aa7b68d0cb5246a5d10b69e69a310ad7dbe803f32fbfe929362b00e9b] <==
time="2025-10-08T14:23:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
I1008 14:23:47.690096 1 controller.go:377] "Starting controller" name="kube-network-policies"
I1008 14:23:47.690125 1 controller.go:381] "Waiting for informer caches to sync"
I1008 14:23:47.690138 1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
I1008 14:23:47.690279 1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
I1008 14:23:48.090594 1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
I1008 14:23:48.090626 1 metrics.go:72] Registering metrics
I1008 14:23:48.090682 1 controller.go:711] "Syncing nftables rules"
I1008 14:23:57.691180 1 main.go:297] Handling node with IPs: map[192.168.67.2:{}]
I1008 14:23:57.691253 1 main.go:301] handling current node
I1008 14:24:07.697046 1 main.go:297] Handling node with IPs: map[192.168.67.2:{}]
I1008 14:24:07.697089 1 main.go:301] handling current node
I1008 14:24:17.690952 1 main.go:297] Handling node with IPs: map[192.168.67.3:{}]
I1008 14:24:17.691012 1 main.go:324] Node multinode-439307-m02 has CIDR [10.244.1.0/24]
I1008 14:24:17.691311 1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.67.3 Flags: [] Table: 0 Realm: 0}
I1008 14:24:17.691488 1 main.go:297] Handling node with IPs: map[192.168.67.2:{}]
I1008 14:24:17.691506 1 main.go:301] handling current node
I1008 14:24:27.690151 1 main.go:297] Handling node with IPs: map[192.168.67.2:{}]
I1008 14:24:27.690209 1 main.go:301] handling current node
I1008 14:24:27.690224 1 main.go:297] Handling node with IPs: map[192.168.67.3:{}]
I1008 14:24:27.690228 1 main.go:324] Node multinode-439307-m02 has CIDR [10.244.1.0/24]
I1008 14:24:37.696064 1 main.go:297] Handling node with IPs: map[192.168.67.2:{}]
I1008 14:24:37.696102 1 main.go:301] handling current node
I1008 14:24:37.696118 1 main.go:297] Handling node with IPs: map[192.168.67.3:{}]
I1008 14:24:37.696123 1 main.go:324] Node multinode-439307-m02 has CIDR [10.244.1.0/24]
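The kindnet log shows what makes cross-node pod traffic work: for every remote node it programs a route sending that node's PodCIDR via the node IP (here 10.244.1.0/24 via 192.168.67.3 for m02). The installed routes can be verified on the node:
$ out/minikube-linux-amd64 -p multinode-439307 ssh -- ip route show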
==> kube-apiserver [a75297140a13849f0bbb8691fcb7ec90b635a193300494f88d6ee8bb6961ae9a] <==
I1008 14:23:39.722440 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I1008 14:23:40.200145 1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1008 14:23:40.237907 1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1008 14:23:40.325514 1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
W1008 14:23:40.331916 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
I1008 14:23:40.333100 1 controller.go:667] quota admission added evaluator for: endpoints
I1008 14:23:40.337576 1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1008 14:23:40.736405 1 controller.go:667] quota admission added evaluator for: serviceaccounts
I1008 14:23:41.412400 1 controller.go:667] quota admission added evaluator for: deployments.apps
I1008 14:23:41.423756 1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
I1008 14:23:41.431519 1 controller.go:667] quota admission added evaluator for: daemonsets.apps
I1008 14:23:46.190246 1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
I1008 14:23:46.194096 1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
I1008 14:23:46.390973 1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
I1008 14:23:46.839639 1 controller.go:667] quota admission added evaluator for: replicasets.apps
E1008 14:24:29.836739 1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60732: use of closed network connection
E1008 14:24:30.001569 1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60754: use of closed network connection
E1008 14:24:30.207099 1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60770: use of closed network connection
E1008 14:24:30.374520 1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60784: use of closed network connection
E1008 14:24:30.535911 1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60810: use of closed network connection
E1008 14:24:30.697115 1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60834: use of closed network connection
E1008 14:24:30.974454 1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60862: use of closed network connection
E1008 14:24:31.133503 1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60876: use of closed network connection
E1008 14:24:31.290639 1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60900: use of closed network connection
E1008 14:24:31.447827 1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:60924: use of closed network connection
==> kube-controller-manager [7bc5378271f6ec3084def02b6c09453b95f33b6c40f004a8ecd7ddaca4ee2e23] <==
I1008 14:23:45.735304 1 shared_informer.go:356] "Caches are synced" controller="PV protection"
I1008 14:23:45.736140 1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
I1008 14:23:45.736180 1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
I1008 14:23:45.736201 1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
I1008 14:23:45.736257 1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
I1008 14:23:45.736247 1 shared_informer.go:356] "Caches are synced" controller="endpoint"
I1008 14:23:45.736431 1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
I1008 14:23:45.736317 1 shared_informer.go:356] "Caches are synced" controller="cronjob"
I1008 14:23:45.736499 1 shared_informer.go:356] "Caches are synced" controller="attach detach"
I1008 14:23:45.736731 1 shared_informer.go:356] "Caches are synced" controller="job"
I1008 14:23:45.740043 1 shared_informer.go:356] "Caches are synced" controller="namespace"
I1008 14:23:45.740066 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
I1008 14:23:45.742483 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1008 14:23:45.745803 1 shared_informer.go:356] "Caches are synced" controller="GC"
I1008 14:23:45.752135 1 shared_informer.go:356] "Caches are synced" controller="deployment"
I1008 14:23:45.757502 1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
I1008 14:23:45.762890 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1008 14:24:00.736958 1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
I1008 14:24:11.255158 1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-439307-m02\" does not exist"
I1008 14:24:11.267044 1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-439307-m02" podCIDRs=["10.244.1.0/24"]
I1008 14:24:15.739150 1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-439307-m02"
I1008 14:24:24.240565 1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-439307-m02"
I1008 14:24:41.773017 1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-439307-m02"
I1008 14:24:41.773446 1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-439307-m03\" does not exist"
I1008 14:24:41.785959 1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-439307-m03" podCIDRs=["10.244.2.0/24"]
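The closing lines are the node-ipam-controller allocating PodCIDRs as each worker registers: 10.244.1.0/24 for m02 and 10.244.2.0/24 for m03 (the "does not exist" messages appear to be the usual race before the new Node object is cached, not a failure). The assignments can be listed directly:
$ kubectl --context multinode-439307 get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR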
==> kube-proxy [70d5305f9c0f1e614d86457efd99bfbb2a639a470f299474edd5bdee53d17425] <==
I1008 14:23:46.942146 1 server_linux.go:53] "Using iptables proxy"
I1008 14:23:47.054382 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1008 14:23:47.154807 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1008 14:23:47.154859 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.67.2"]
E1008 14:23:47.154951 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1008 14:23:47.180008 1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I1008 14:23:47.180073 1 server_linux.go:132] "Using iptables Proxier"
I1008 14:23:47.186411 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1008 14:23:47.187151 1 server.go:527] "Version info" version="v1.34.1"
I1008 14:23:47.187189 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1008 14:23:47.189577 1 config.go:200] "Starting service config controller"
I1008 14:23:47.189598 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1008 14:23:47.189627 1 config.go:106] "Starting endpoint slice config controller"
I1008 14:23:47.189632 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1008 14:23:47.189645 1 config.go:403] "Starting serviceCIDR config controller"
I1008 14:23:47.189650 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1008 14:23:47.189881 1 config.go:309] "Starting node config controller"
I1008 14:23:47.189888 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1008 14:23:47.189894 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1008 14:23:47.290449 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1008 14:23:47.290467 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1008 14:23:47.290469 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
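kube-proxy came up in iptables mode on the primary node and synced all four of its config controllers. In iptables mode, per-service rules hang off the KUBE-SERVICES chain in the nat table; a spot check (chain name per upstream kube-proxy conventions):
$ out/minikube-linux-amd64 -p multinode-439307 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n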
==> kube-scheduler [4023d943508d78a5c887a79feaa82148d136b6c293acc44418506ac640d4c238] <==
E1008 14:23:38.760223 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1008 14:23:38.760331 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1008 14:23:38.760371 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1008 14:23:38.760376 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1008 14:23:38.760448 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1008 14:23:38.760458 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1008 14:23:38.760452 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1008 14:23:38.760534 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1008 14:23:38.760564 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1008 14:23:38.760602 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1008 14:23:38.760684 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1008 14:23:38.760689 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1008 14:23:38.760727 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1008 14:23:38.760774 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1008 14:23:38.760787 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1008 14:23:39.582212 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1008 14:23:39.649725 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1008 14:23:39.743595 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1008 14:23:39.755043 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1008 14:23:39.833591 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1008 14:23:39.896154 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1008 14:23:39.927234 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1008 14:23:39.948345 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1008 14:23:39.962957 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
I1008 14:23:41.659308 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
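The burst of "Failed to watch ... forbidden" errors is the scheduler racing the API server's RBAC bootstrap; the errors stop by 14:23:41, by which point the caches have synced (last line). After startup, the scheduler's effective permissions can be probed with:
$ kubectl --context multinode-439307 auth can-i list pods --as=system:kube-scheduler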
==> kubelet <==
Oct 08 14:23:42 multinode-439307 kubelet[1486]: I1008 14:23:42.315174 1486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-439307" podStartSLOduration=1.3151361129999999 podStartE2EDuration="1.315136113s" podCreationTimestamp="2025-10-08 14:23:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:23:42.304935753 +0000 UTC m=+1.126741170" watchObservedRunningTime="2025-10-08 14:23:42.315136113 +0000 UTC m=+1.136941525"
Oct 08 14:23:42 multinode-439307 kubelet[1486]: I1008 14:23:42.326049 1486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-439307" podStartSLOduration=1.3260291149999999 podStartE2EDuration="1.326029115s" podCreationTimestamp="2025-10-08 14:23:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:23:42.315323385 +0000 UTC m=+1.137128872" watchObservedRunningTime="2025-10-08 14:23:42.326029115 +0000 UTC m=+1.147834531"
Oct 08 14:23:42 multinode-439307 kubelet[1486]: I1008 14:23:42.326174 1486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-multinode-439307" podStartSLOduration=1.326165456 podStartE2EDuration="1.326165456s" podCreationTimestamp="2025-10-08 14:23:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:23:42.325917199 +0000 UTC m=+1.147722617" watchObservedRunningTime="2025-10-08 14:23:42.326165456 +0000 UTC m=+1.147970871"
Oct 08 14:23:42 multinode-439307 kubelet[1486]: I1008 14:23:42.352482 1486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-multinode-439307" podStartSLOduration=1.352459294 podStartE2EDuration="1.352459294s" podCreationTimestamp="2025-10-08 14:23:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:23:42.33893649 +0000 UTC m=+1.160741907" watchObservedRunningTime="2025-10-08 14:23:42.352459294 +0000 UTC m=+1.174264711"
Oct 08 14:23:45 multinode-439307 kubelet[1486]: I1008 14:23:45.703138 1486 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Oct 08 14:23:45 multinode-439307 kubelet[1486]: I1008 14:23:45.703898 1486 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Oct 08 14:23:46 multinode-439307 kubelet[1486]: I1008 14:23:46.481639 1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fea0f284-17d4-438c-91a6-14831ce6ce5c-lib-modules\") pod \"kindnet-l6pqj\" (UID: \"fea0f284-17d4-438c-91a6-14831ce6ce5c\") " pod="kube-system/kindnet-l6pqj"
Oct 08 14:23:46 multinode-439307 kubelet[1486]: I1008 14:23:46.481688 1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jstr\" (UniqueName: \"kubernetes.io/projected/1211872c-1472-435c-a117-2656ba2fca8e-kube-api-access-6jstr\") pod \"kube-proxy-sjzfx\" (UID: \"1211872c-1472-435c-a117-2656ba2fca8e\") " pod="kube-system/kube-proxy-sjzfx"
Oct 08 14:23:46 multinode-439307 kubelet[1486]: I1008 14:23:46.481713 1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/fea0f284-17d4-438c-91a6-14831ce6ce5c-cni-cfg\") pod \"kindnet-l6pqj\" (UID: \"fea0f284-17d4-438c-91a6-14831ce6ce5c\") " pod="kube-system/kindnet-l6pqj"
Oct 08 14:23:46 multinode-439307 kubelet[1486]: I1008 14:23:46.481727 1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1211872c-1472-435c-a117-2656ba2fca8e-xtables-lock\") pod \"kube-proxy-sjzfx\" (UID: \"1211872c-1472-435c-a117-2656ba2fca8e\") " pod="kube-system/kube-proxy-sjzfx"
Oct 08 14:23:46 multinode-439307 kubelet[1486]: I1008 14:23:46.481745 1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fea0f284-17d4-438c-91a6-14831ce6ce5c-xtables-lock\") pod \"kindnet-l6pqj\" (UID: \"fea0f284-17d4-438c-91a6-14831ce6ce5c\") " pod="kube-system/kindnet-l6pqj"
Oct 08 14:23:46 multinode-439307 kubelet[1486]: I1008 14:23:46.481763 1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1211872c-1472-435c-a117-2656ba2fca8e-lib-modules\") pod \"kube-proxy-sjzfx\" (UID: \"1211872c-1472-435c-a117-2656ba2fca8e\") " pod="kube-system/kube-proxy-sjzfx"
Oct 08 14:23:46 multinode-439307 kubelet[1486]: I1008 14:23:46.481786 1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb5rk\" (UniqueName: \"kubernetes.io/projected/fea0f284-17d4-438c-91a6-14831ce6ce5c-kube-api-access-nb5rk\") pod \"kindnet-l6pqj\" (UID: \"fea0f284-17d4-438c-91a6-14831ce6ce5c\") " pod="kube-system/kindnet-l6pqj"
Oct 08 14:23:46 multinode-439307 kubelet[1486]: I1008 14:23:46.481806 1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1211872c-1472-435c-a117-2656ba2fca8e-kube-proxy\") pod \"kube-proxy-sjzfx\" (UID: \"1211872c-1472-435c-a117-2656ba2fca8e\") " pod="kube-system/kube-proxy-sjzfx"
Oct 08 14:23:47 multinode-439307 kubelet[1486]: I1008 14:23:47.299755 1486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sjzfx" podStartSLOduration=1.299713567 podStartE2EDuration="1.299713567s" podCreationTimestamp="2025-10-08 14:23:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:23:47.299676523 +0000 UTC m=+6.121481941" watchObservedRunningTime="2025-10-08 14:23:47.299713567 +0000 UTC m=+6.121518985"
Oct 08 14:23:48 multinode-439307 kubelet[1486]: I1008 14:23:48.313744 1486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-l6pqj" podStartSLOduration=2.313719899 podStartE2EDuration="2.313719899s" podCreationTimestamp="2025-10-08 14:23:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:23:48.313549742 +0000 UTC m=+7.135355171" watchObservedRunningTime="2025-10-08 14:23:48.313719899 +0000 UTC m=+7.135525315"
Oct 08 14:23:57 multinode-439307 kubelet[1486]: I1008 14:23:57.772604 1486 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Oct 08 14:23:57 multinode-439307 kubelet[1486]: I1008 14:23:57.853219 1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw6pb\" (UniqueName: \"kubernetes.io/projected/a445b5ef-8d30-4b7c-a40f-77f2a9072e7f-kube-api-access-rw6pb\") pod \"coredns-66bc5c9577-llvkc\" (UID: \"a445b5ef-8d30-4b7c-a40f-77f2a9072e7f\") " pod="kube-system/coredns-66bc5c9577-llvkc"
Oct 08 14:23:57 multinode-439307 kubelet[1486]: I1008 14:23:57.853273 1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e1d410c3-de2a-4e2a-88c1-93970ce8b254-tmp\") pod \"storage-provisioner\" (UID: \"e1d410c3-de2a-4e2a-88c1-93970ce8b254\") " pod="kube-system/storage-provisioner"
Oct 08 14:23:57 multinode-439307 kubelet[1486]: I1008 14:23:57.853308 1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlb24\" (UniqueName: \"kubernetes.io/projected/e1d410c3-de2a-4e2a-88c1-93970ce8b254-kube-api-access-nlb24\") pod \"storage-provisioner\" (UID: \"e1d410c3-de2a-4e2a-88c1-93970ce8b254\") " pod="kube-system/storage-provisioner"
Oct 08 14:23:57 multinode-439307 kubelet[1486]: I1008 14:23:57.853418 1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a445b5ef-8d30-4b7c-a40f-77f2a9072e7f-config-volume\") pod \"coredns-66bc5c9577-llvkc\" (UID: \"a445b5ef-8d30-4b7c-a40f-77f2a9072e7f\") " pod="kube-system/coredns-66bc5c9577-llvkc"
Oct 08 14:23:58 multinode-439307 kubelet[1486]: I1008 14:23:58.331131 1486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.331114913 podStartE2EDuration="11.331114913s" podCreationTimestamp="2025-10-08 14:23:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:23:58.330824691 +0000 UTC m=+17.152630110" watchObservedRunningTime="2025-10-08 14:23:58.331114913 +0000 UTC m=+17.152920351"
Oct 08 14:23:58 multinode-439307 kubelet[1486]: I1008 14:23:58.344349 1486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-llvkc" podStartSLOduration=12.344324469 podStartE2EDuration="12.344324469s" podCreationTimestamp="2025-10-08 14:23:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 14:23:58.34404488 +0000 UTC m=+17.165850298" watchObservedRunningTime="2025-10-08 14:23:58.344324469 +0000 UTC m=+17.166129896"
Oct 08 14:24:27 multinode-439307 kubelet[1486]: I1008 14:24:27.247091 1486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g9nr\" (UniqueName: \"kubernetes.io/projected/48d40e87-f7eb-4886-84ea-0d1c344bcef4-kube-api-access-9g9nr\") pod \"busybox-7b57f96db7-n6rvn\" (UID: \"48d40e87-f7eb-4886-84ea-0d1c344bcef4\") " pod="default/busybox-7b57f96db7-n6rvn"
Oct 08 14:24:29 multinode-439307 kubelet[1486]: I1008 14:24:29.399108 1486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7b57f96db7-n6rvn" podStartSLOduration=1.132795141 podStartE2EDuration="2.399085602s" podCreationTimestamp="2025-10-08 14:24:27 +0000 UTC" firstStartedPulling="2025-10-08 14:24:27.526216854 +0000 UTC m=+46.348022263" lastFinishedPulling="2025-10-08 14:24:28.792507312 +0000 UTC m=+47.614312724" observedRunningTime="2025-10-08 14:24:29.398743884 +0000 UTC m=+48.220549303" watchObservedRunningTime="2025-10-08 14:24:29.399085602 +0000 UTC m=+48.220891019"
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-439307 -n multinode-439307
helpers_test.go:269: (dbg) Run: kubectl --context multinode-439307 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: kindnet-58vm5 kube-proxy-fs89g
helpers_test.go:282: ======> post-mortem[TestMultiNode/serial/AddNode]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context multinode-439307 describe pod kindnet-58vm5 kube-proxy-fs89g
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context multinode-439307 describe pod kindnet-58vm5 kube-proxy-fs89g: exit status 1 (62.0269ms)
** stderr **
Error from server (NotFound): pods "kindnet-58vm5" not found
Error from server (NotFound): pods "kube-proxy-fs89g" not found
** /stderr **
helpers_test.go:287: kubectl --context multinode-439307 describe pod kindnet-58vm5 kube-proxy-fs89g: exit status 1
--- FAIL: TestMultiNode/serial/AddNode (12.21s)