=== RUN TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run: out/minikube-linux-amd64 pause -p default-k8s-diff-port-948988 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-948988 --alsologtostderr -v=1: (1.393949759s)
start_stop_delete_test.go:309: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-948988 -n default-k8s-diff-port-948988
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-948988 -n default-k8s-diff-port-948988: exit status 2 (15.779342086s)
-- stdout --
Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-948988 -n default-k8s-diff-port-948988
E1018 12:26:33.118930 9909 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/gvisor-073301/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-948988 -n default-k8s-diff-port-948988: exit status 2 (15.830711408s)
-- stdout --
Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run: out/minikube-linux-amd64 unpause -p default-k8s-diff-port-948988 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-948988 -n default-k8s-diff-port-948988
start_stop_delete_test.go:309: (dbg) Run: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-948988 -n default-k8s-diff-port-948988
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-948988 -n default-k8s-diff-port-948988
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p default-k8s-diff-port-948988 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-948988 logs -n 25: (2.340900628s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs:
-- stdout --
==> Audit <==
┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────
────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────
────────┤
│ stop │ -p default-k8s-diff-port-948988 --alsologtostderr -v=3 │ default-k8s-diff-port-948988 │ jenkins │ v1.37.0 │ 18 Oct 25 12:24 UTC │ 18 Oct 25 12:25 UTC │
│ addons │ enable metrics-server -p embed-certs-270191 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ embed-certs-270191 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
│ addons │ enable metrics-server -p newest-cni-661287 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ newest-cni-661287 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
│ stop │ -p embed-certs-270191 --alsologtostderr -v=3 │ embed-certs-270191 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
│ stop │ -p newest-cni-661287 --alsologtostderr -v=3 │ newest-cni-661287 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
│ addons │ enable dashboard -p default-k8s-diff-port-948988 --images=MetricsScraper=registry.k8s.io/echoserver:1.4 │ default-k8s-diff-port-948988 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
│ start │ -p default-k8s-diff-port-948988 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2 --auto-update-drivers=false --kubernetes-version=v1.34.1 │ default-k8s-diff-port-948988 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
│ start │ -p embed-certs-270191 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2 --auto-update-drivers=false --kubernetes-version=v1.34.1 │ embed-certs-270191 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:26 UTC │
│ addons │ enable dashboard -p newest-cni-661287 --images=MetricsScraper=registry.k8s.io/echoserver:1.4 │ newest-cni-661287 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
│ start │ -p newest-cni-661287 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2 --auto-update-drivers=false --kubernetes-version=v1.34.1 │ newest-cni-661287 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ │
│ image │ no-preload-839073 image list --format=json │ no-preload-839073 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
│ pause │ -p no-preload-839073 --alsologtostderr -v=1 │ no-preload-839073 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
│ unpause │ -p no-preload-839073 --alsologtostderr -v=1 │ no-preload-839073 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
│ delete │ -p no-preload-839073 │ no-preload-839073 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
│ delete │ -p no-preload-839073 │ no-preload-839073 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
│ start │ -p auto-720125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 --auto-update-drivers=false │ auto-720125 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ │
│ image │ default-k8s-diff-port-948988 image list --format=json │ default-k8s-diff-port-948988 │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
│ pause │ -p default-k8s-diff-port-948988 --alsologtostderr -v=1 │ default-k8s-diff-port-948988 │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
│ image │ embed-certs-270191 image list --format=json │ embed-certs-270191 │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
│ pause │ -p embed-certs-270191 --alsologtostderr -v=1 │ embed-certs-270191 │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
│ unpause │ -p embed-certs-270191 --alsologtostderr -v=1 │ embed-certs-270191 │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
│ delete │ -p embed-certs-270191 │ embed-certs-270191 │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
│ delete │ -p embed-certs-270191 │ embed-certs-270191 │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
│ start │ -p kindnet-720125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 --auto-update-drivers=false │ kindnet-720125 │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ │
│ unpause │ -p default-k8s-diff-port-948988 --alsologtostderr -v=1 │ default-k8s-diff-port-948988 │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────
────────┘
==> Last Start <==
Log file created at: 2025/10/18 12:26:39
Running on machine: ubuntu-20-agent-7
Binary: Built with gc go1.24.6 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1018 12:26:39.638929 54024 out.go:360] Setting OutFile to fd 1 ...
I1018 12:26:39.639215 54024 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:26:39.639226 54024 out.go:374] Setting ErrFile to fd 2...
I1018 12:26:39.639232 54024 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:26:39.639463 54024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6010/.minikube/bin
I1018 12:26:39.639986 54024 out.go:368] Setting JSON to false
I1018 12:26:39.640948 54024 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4147,"bootTime":1760786253,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1018 12:26:39.641036 54024 start.go:141] virtualization: kvm guest
I1018 12:26:39.642912 54024 out.go:179] * [kindnet-720125] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1018 12:26:39.644319 54024 notify.go:220] Checking for updates...
I1018 12:26:39.644359 54024 out.go:179] - MINIKUBE_LOCATION=21647
I1018 12:26:39.645575 54024 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1018 12:26:39.646808 54024 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21647-6010/kubeconfig
I1018 12:26:39.647991 54024 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6010/.minikube
I1018 12:26:39.649134 54024 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1018 12:26:39.650480 54024 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1018 12:26:39.652192 54024 config.go:182] Loaded profile config "auto-720125": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 12:26:39.652340 54024 config.go:182] Loaded profile config "default-k8s-diff-port-948988": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 12:26:39.652479 54024 config.go:182] Loaded profile config "newest-cni-661287": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 12:26:39.652597 54024 driver.go:421] Setting default libvirt URI to qemu:///system
I1018 12:26:39.691700 54024 out.go:179] * Using the kvm2 driver based on user configuration
I1018 12:26:39.692905 54024 start.go:305] selected driver: kvm2
I1018 12:26:39.692920 54024 start.go:925] validating driver "kvm2" against <nil>
I1018 12:26:39.692931 54024 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1018 12:26:39.693690 54024 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1018 12:26:39.693776 54024 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21647-6010/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1018 12:26:39.709001 54024 install.go:163] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.37.0
I1018 12:26:39.709030 54024 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21647-6010/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1018 12:26:39.724060 54024 install.go:163] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.37.0
I1018 12:26:39.724111 54024 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1018 12:26:39.724397 54024 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1018 12:26:39.724424 54024 cni.go:84] Creating CNI manager for "kindnet"
I1018 12:26:39.724429 54024 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I1018 12:26:39.724476 54024 start.go:349] cluster config:
{Name:kindnet-720125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-720125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1018 12:26:39.724562 54024 iso.go:125] acquiring lock: {Name:mk7b9977f44c882a06d0a932f05bd4c8e4cea871 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1018 12:26:39.726635 54024 out.go:179] * Starting "kindnet-720125" primary control-plane node in "kindnet-720125" cluster
I1018 12:26:39.727995 54024 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
I1018 12:26:39.728049 54024 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-6010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
I1018 12:26:39.728060 54024 cache.go:58] Caching tarball of preloaded images
I1018 12:26:39.728181 54024 preload.go:233] Found /home/jenkins/minikube-integration/21647-6010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1018 12:26:39.728194 54024 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
I1018 12:26:39.728350 54024 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/kindnet-720125/config.json ...
I1018 12:26:39.728376 54024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/kindnet-720125/config.json: {Name:mkf1b74ab9b12d679411e2c6e2e2149cae3e0078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1018 12:26:39.728580 54024 start.go:360] acquireMachinesLock for kindnet-720125: {Name:mk547bbf69b426adc37163c0f135f5803e3e7ae0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1018 12:26:39.728617 54024 start.go:364] duration metric: took 19.75µs to acquireMachinesLock for "kindnet-720125"
I1018 12:26:39.728642 54024 start.go:93] Provisioning new machine with config: &{Name:kindnet-720125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-720125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I1018 12:26:39.728718 54024 start.go:125] createHost starting for "" (driver="kvm2")
I1018 12:26:35.461906 52813 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.481663654s)
I1018 12:26:35.461943 52813 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1018 12:26:35.505542 52813 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I1018 12:26:35.519942 52813 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2632 bytes)
I1018 12:26:35.544751 52813 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
I1018 12:26:35.561575 52813 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1018 12:26:35.715918 52813 ssh_runner.go:195] Run: sudo systemctl restart docker
I1018 12:26:38.056356 52813 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.34040401s)
I1018 12:26:38.056485 52813 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1018 12:26:38.085796 52813 docker.go:691] Got preloaded images: -- stdout --
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/pause:3.10.1
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1018 12:26:38.085832 52813 cache_images.go:85] Images are preloaded, skipping loading
I1018 12:26:38.085846 52813 kubeadm.go:934] updating node { 192.168.72.13 8443 v1.34.1 docker true true} ...
I1018 12:26:38.085985 52813 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-720125 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.13
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:auto-720125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1018 12:26:38.086071 52813 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1018 12:26:38.149565 52813 cni.go:84] Creating CNI manager for ""
I1018 12:26:38.149605 52813 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1018 12:26:38.149622 52813 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1018 12:26:38.149639 52813 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.13 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-720125 NodeName:auto-720125 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1018 12:26:38.149863 52813 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.72.13
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "auto-720125"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.72.13"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.72.13"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1018 12:26:38.149950 52813 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1018 12:26:38.167666 52813 binaries.go:44] Found k8s binaries, skipping transfer
I1018 12:26:38.167750 52813 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1018 12:26:38.182469 52813 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
I1018 12:26:38.210498 52813 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1018 12:26:38.235674 52813 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
I1018 12:26:38.272656 52813 ssh_runner.go:195] Run: grep 192.168.72.13 control-plane.minikube.internal$ /etc/hosts
I1018 12:26:38.278428 52813 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.13 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1018 12:26:38.295186 52813 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1018 12:26:38.477493 52813 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1018 12:26:38.516693 52813 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125 for IP: 192.168.72.13
I1018 12:26:38.516721 52813 certs.go:195] generating shared ca certs ...
I1018 12:26:38.516742 52813 certs.go:227] acquiring lock for ca certs: {Name:mk4e9b668d7f4a08d373c26a5a5beadd4b363eae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1018 12:26:38.516897 52813 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-6010/.minikube/ca.key
I1018 12:26:38.516956 52813 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-6010/.minikube/proxy-client-ca.key
I1018 12:26:38.516971 52813 certs.go:257] generating profile certs ...
I1018 12:26:38.517059 52813 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/client.key
I1018 12:26:38.517080 52813 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/client.crt with IP's: []
I1018 12:26:38.795006 52813 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/client.crt ...
I1018 12:26:38.795041 52813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/client.crt: {Name:mke50b87cc8afab1bea24439b2b8f8b4fce785c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1018 12:26:38.795221 52813 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/client.key ...
I1018 12:26:38.795236 52813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/client.key: {Name:mk73a13799ed8cba8c6cf5586dd849d9aa3376fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1018 12:26:38.795369 52813 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.key.5f192319
I1018 12:26:38.795387 52813 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.crt.5f192319 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.13]
I1018 12:26:39.015985 52813 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.crt.5f192319 ...
I1018 12:26:39.016017 52813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.crt.5f192319: {Name:mk48dc89d0bc936861c01af4faa11afa9b99fc7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1018 12:26:39.016173 52813 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.key.5f192319 ...
I1018 12:26:39.016187 52813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.key.5f192319: {Name:mk06903a8537a759ab5885d9e1ce94cdbffcbf0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1018 12:26:39.016265 52813 certs.go:382] copying /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.crt.5f192319 -> /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.crt
I1018 12:26:39.016371 52813 certs.go:386] copying /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.key.5f192319 -> /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.key
I1018 12:26:39.016432 52813 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.key
I1018 12:26:39.016447 52813 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.crt with IP's: []
I1018 12:26:39.194387 52813 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.crt ...
I1018 12:26:39.194419 52813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.crt: {Name:mk9243a20439ab9292d13a3cab98b56367a296c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1018 12:26:39.194631 52813 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.key ...
I1018 12:26:39.194649 52813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.key: {Name:mk548ef445e4b58857c8694e04881f9da155116e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1018 12:26:39.194883 52813 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6010/.minikube/certs/9909.pem (1338 bytes)
W1018 12:26:39.194965 52813 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-6010/.minikube/certs/9909_empty.pem, impossibly tiny 0 bytes
I1018 12:26:39.194982 52813 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6010/.minikube/certs/ca-key.pem (1679 bytes)
I1018 12:26:39.195016 52813 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6010/.minikube/certs/ca.pem (1082 bytes)
I1018 12:26:39.195051 52813 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6010/.minikube/certs/cert.pem (1123 bytes)
I1018 12:26:39.195083 52813 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6010/.minikube/certs/key.pem (1679 bytes)
I1018 12:26:39.195138 52813 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6010/.minikube/files/etc/ssl/certs/99092.pem (1708 bytes)
I1018 12:26:39.195753 52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1018 12:26:39.237771 52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1018 12:26:39.273475 52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1018 12:26:39.304754 52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1018 12:26:39.340590 52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
I1018 12:26:39.375528 52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1018 12:26:39.408845 52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1018 12:26:39.442920 52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1018 12:26:39.481085 52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1018 12:26:39.516586 52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/certs/9909.pem --> /usr/share/ca-certificates/9909.pem (1338 bytes)
I1018 12:26:39.554538 52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/files/etc/ssl/certs/99092.pem --> /usr/share/ca-certificates/99092.pem (1708 bytes)
I1018 12:26:39.594522 52813 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1018 12:26:39.619184 52813 ssh_runner.go:195] Run: openssl version
I1018 12:26:39.626356 52813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1018 12:26:39.640801 52813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1018 12:26:39.646535 52813 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 11:29 /usr/share/ca-certificates/minikubeCA.pem
I1018 12:26:39.646588 52813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1018 12:26:39.654893 52813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1018 12:26:39.669539 52813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9909.pem && ln -fs /usr/share/ca-certificates/9909.pem /etc/ssl/certs/9909.pem"
I1018 12:26:39.684162 52813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9909.pem
I1018 12:26:39.689731 52813 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 11:35 /usr/share/ca-certificates/9909.pem
I1018 12:26:39.689790 52813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9909.pem
I1018 12:26:39.697600 52813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9909.pem /etc/ssl/certs/51391683.0"
I1018 12:26:39.714166 52813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99092.pem && ln -fs /usr/share/ca-certificates/99092.pem /etc/ssl/certs/99092.pem"
I1018 12:26:39.729837 52813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99092.pem
I1018 12:26:39.735419 52813 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 11:35 /usr/share/ca-certificates/99092.pem
I1018 12:26:39.735488 52813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99092.pem
I1018 12:26:39.743203 52813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99092.pem /etc/ssl/certs/3ec20f2e.0"
I1018 12:26:39.758932 52813 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1018 12:26:39.765101 52813 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1018 12:26:39.765169 52813 kubeadm.go:400] StartCluster: {Name:auto-720125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-720125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.13 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1018 12:26:39.765332 52813 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1018 12:26:39.785247 52813 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1018 12:26:39.798374 52813 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1018 12:26:39.810946 52813 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1018 12:26:39.825029 52813 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1018 12:26:39.825056 52813 kubeadm.go:157] found existing configuration files:
I1018 12:26:39.825096 52813 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1018 12:26:39.836919 52813 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1018 12:26:39.836997 52813 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1018 12:26:39.849872 52813 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1018 12:26:39.861692 52813 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1018 12:26:39.861767 52813 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1018 12:26:39.877485 52813 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1018 12:26:39.890697 52813 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1018 12:26:39.890777 52813 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1018 12:26:39.906568 52813 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1018 12:26:39.920626 52813 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1018 12:26:39.920740 52813 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1018 12:26:39.936398 52813 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1018 12:26:39.998219 52813 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
I1018 12:26:39.998340 52813 kubeadm.go:318] [preflight] Running pre-flight checks
I1018 12:26:40.111469 52813 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
I1018 12:26:40.111618 52813 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1018 12:26:40.111795 52813 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1018 12:26:40.128525 52813 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1018 12:26:40.130607 52813 out.go:252] - Generating certificates and keys ...
I1018 12:26:40.130710 52813 kubeadm.go:318] [certs] Using existing ca certificate authority
I1018 12:26:40.130803 52813 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
I1018 12:26:40.350726 52813 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
I1018 12:26:40.455768 52813 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
I1018 12:26:40.598243 52813 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
I1018 12:26:41.011504 52813 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
I1018 12:26:41.091757 52813 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
I1018 12:26:41.092141 52813 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-720125 localhost] and IPs [192.168.72.13 127.0.0.1 ::1]
I1018 12:26:41.376370 52813 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
I1018 12:26:41.376756 52813 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-720125 localhost] and IPs [192.168.72.13 127.0.0.1 ::1]
I1018 12:26:41.679155 52813 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
I1018 12:26:41.832796 52813 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
I1018 12:26:42.091476 52813 kubeadm.go:318] [certs] Generating "sa" key and public key
I1018 12:26:42.091617 52813 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1018 12:26:42.555206 52813 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
I1018 12:26:42.822944 52813 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1018 12:26:43.272107 52813 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1018 12:26:43.527688 52813 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1018 12:26:43.769537 52813 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1018 12:26:43.770332 52813 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1018 12:26:43.773363 52813 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1018 12:26:39.521607 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": read tcp 192.168.39.1:35984->192.168.39.140:8443: read: connection reset by peer
I1018 12:26:39.521660 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:39.522161 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:39.940469 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:39.941178 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:40.440329 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:40.441012 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:40.940495 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:40.941051 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:41.440547 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:41.441243 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:41.939828 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:41.940532 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:42.440175 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:42.440815 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:42.940483 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:42.941097 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:43.439852 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:43.440639 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:43.940431 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:43.941130 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:39.730484 54024 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
I1018 12:26:39.730631 54024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1018 12:26:39.730675 54024 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 12:26:39.746220 54024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38367
I1018 12:26:39.746691 54024 main.go:141] libmachine: () Calling .GetVersion
I1018 12:26:39.747252 54024 main.go:141] libmachine: Using API Version 1
I1018 12:26:39.747278 54024 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 12:26:39.747712 54024 main.go:141] libmachine: () Calling .GetMachineName
I1018 12:26:39.747910 54024 main.go:141] libmachine: (kindnet-720125) Calling .GetMachineName
I1018 12:26:39.748157 54024 main.go:141] libmachine: (kindnet-720125) Calling .DriverName
I1018 12:26:39.748327 54024 start.go:159] libmachine.API.Create for "kindnet-720125" (driver="kvm2")
I1018 12:26:39.748358 54024 client.go:168] LocalClient.Create starting
I1018 12:26:39.748391 54024 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-6010/.minikube/certs/ca.pem
I1018 12:26:39.748425 54024 main.go:141] libmachine: Decoding PEM data...
I1018 12:26:39.748441 54024 main.go:141] libmachine: Parsing certificate...
I1018 12:26:39.748493 54024 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-6010/.minikube/certs/cert.pem
I1018 12:26:39.748514 54024 main.go:141] libmachine: Decoding PEM data...
I1018 12:26:39.748527 54024 main.go:141] libmachine: Parsing certificate...
I1018 12:26:39.748542 54024 main.go:141] libmachine: Running pre-create checks...
I1018 12:26:39.748555 54024 main.go:141] libmachine: (kindnet-720125) Calling .PreCreateCheck
I1018 12:26:39.748883 54024 main.go:141] libmachine: (kindnet-720125) Calling .GetConfigRaw
I1018 12:26:39.749274 54024 main.go:141] libmachine: Creating machine...
I1018 12:26:39.749304 54024 main.go:141] libmachine: (kindnet-720125) Calling .Create
I1018 12:26:39.749445 54024 main.go:141] libmachine: (kindnet-720125) creating domain...
I1018 12:26:39.749466 54024 main.go:141] libmachine: (kindnet-720125) creating network...
I1018 12:26:39.750975 54024 main.go:141] libmachine: (kindnet-720125) DBG | found existing default network
I1018 12:26:39.751279 54024 main.go:141] libmachine: (kindnet-720125) DBG | <network connections='3'>
I1018 12:26:39.751320 54024 main.go:141] libmachine: (kindnet-720125) DBG | <name>default</name>
I1018 12:26:39.751345 54024 main.go:141] libmachine: (kindnet-720125) DBG | <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
I1018 12:26:39.751362 54024 main.go:141] libmachine: (kindnet-720125) DBG | <forward mode='nat'>
I1018 12:26:39.751384 54024 main.go:141] libmachine: (kindnet-720125) DBG | <nat>
I1018 12:26:39.751398 54024 main.go:141] libmachine: (kindnet-720125) DBG | <port start='1024' end='65535'/>
I1018 12:26:39.751406 54024 main.go:141] libmachine: (kindnet-720125) DBG | </nat>
I1018 12:26:39.751412 54024 main.go:141] libmachine: (kindnet-720125) DBG | </forward>
I1018 12:26:39.751448 54024 main.go:141] libmachine: (kindnet-720125) DBG | <bridge name='virbr0' stp='on' delay='0'/>
I1018 12:26:39.751488 54024 main.go:141] libmachine: (kindnet-720125) DBG | <mac address='52:54:00:10:a2:1d'/>
I1018 12:26:39.751506 54024 main.go:141] libmachine: (kindnet-720125) DBG | <ip address='192.168.122.1' netmask='255.255.255.0'>
I1018 12:26:39.751517 54024 main.go:141] libmachine: (kindnet-720125) DBG | <dhcp>
I1018 12:26:39.751527 54024 main.go:141] libmachine: (kindnet-720125) DBG | <range start='192.168.122.2' end='192.168.122.254'/>
I1018 12:26:39.751535 54024 main.go:141] libmachine: (kindnet-720125) DBG | </dhcp>
I1018 12:26:39.751543 54024 main.go:141] libmachine: (kindnet-720125) DBG | </ip>
I1018 12:26:39.751557 54024 main.go:141] libmachine: (kindnet-720125) DBG | </network>
I1018 12:26:39.751576 54024 main.go:141] libmachine: (kindnet-720125) DBG |
I1018 12:26:39.752366 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:39.752168 54053 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:82:24:f4} reservation:<nil>}
I1018 12:26:39.753108 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:39.753033 54053 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000260370}
I1018 12:26:39.753127 54024 main.go:141] libmachine: (kindnet-720125) DBG | defining private network:
I1018 12:26:39.753137 54024 main.go:141] libmachine: (kindnet-720125) DBG |
I1018 12:26:39.753143 54024 main.go:141] libmachine: (kindnet-720125) DBG | <network>
I1018 12:26:39.753152 54024 main.go:141] libmachine: (kindnet-720125) DBG | <name>mk-kindnet-720125</name>
I1018 12:26:39.753159 54024 main.go:141] libmachine: (kindnet-720125) DBG | <dns enable='no'/>
I1018 12:26:39.753168 54024 main.go:141] libmachine: (kindnet-720125) DBG | <ip address='192.168.50.1' netmask='255.255.255.0'>
I1018 12:26:39.753175 54024 main.go:141] libmachine: (kindnet-720125) DBG | <dhcp>
I1018 12:26:39.753184 54024 main.go:141] libmachine: (kindnet-720125) DBG | <range start='192.168.50.2' end='192.168.50.253'/>
I1018 12:26:39.753190 54024 main.go:141] libmachine: (kindnet-720125) DBG | </dhcp>
I1018 12:26:39.753213 54024 main.go:141] libmachine: (kindnet-720125) DBG | </ip>
I1018 12:26:39.753246 54024 main.go:141] libmachine: (kindnet-720125) DBG | </network>
I1018 12:26:39.753262 54024 main.go:141] libmachine: (kindnet-720125) DBG |
I1018 12:26:39.759190 54024 main.go:141] libmachine: (kindnet-720125) DBG | creating private network mk-kindnet-720125 192.168.50.0/24...
I1018 12:26:39.842530 54024 main.go:141] libmachine: (kindnet-720125) DBG | private network mk-kindnet-720125 192.168.50.0/24 created
I1018 12:26:39.842829 54024 main.go:141] libmachine: (kindnet-720125) DBG | <network>
I1018 12:26:39.842844 54024 main.go:141] libmachine: (kindnet-720125) DBG | <name>mk-kindnet-720125</name>
I1018 12:26:39.842855 54024 main.go:141] libmachine: (kindnet-720125) DBG | <uuid>57af09bd-510d-4d07-b5da-0d64b9c8c775</uuid>
I1018 12:26:39.842865 54024 main.go:141] libmachine: (kindnet-720125) setting up store path in /home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125 ...
I1018 12:26:39.842873 54024 main.go:141] libmachine: (kindnet-720125) DBG | <bridge name='virbr2' stp='on' delay='0'/>
I1018 12:26:39.842883 54024 main.go:141] libmachine: (kindnet-720125) DBG | <mac address='52:54:00:4a:b8:f3'/>
I1018 12:26:39.842890 54024 main.go:141] libmachine: (kindnet-720125) DBG | <dns enable='no'/>
I1018 12:26:39.842900 54024 main.go:141] libmachine: (kindnet-720125) DBG | <ip address='192.168.50.1' netmask='255.255.255.0'>
I1018 12:26:39.842912 54024 main.go:141] libmachine: (kindnet-720125) DBG | <dhcp>
I1018 12:26:39.842920 54024 main.go:141] libmachine: (kindnet-720125) DBG | <range start='192.168.50.2' end='192.168.50.253'/>
I1018 12:26:39.842926 54024 main.go:141] libmachine: (kindnet-720125) DBG | </dhcp>
I1018 12:26:39.842937 54024 main.go:141] libmachine: (kindnet-720125) building disk image from file:///home/jenkins/minikube-integration/21647-6010/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
I1018 12:26:39.842947 54024 main.go:141] libmachine: (kindnet-720125) DBG | </ip>
I1018 12:26:39.842958 54024 main.go:141] libmachine: (kindnet-720125) DBG | </network>
I1018 12:26:39.842975 54024 main.go:141] libmachine: (kindnet-720125) Downloading /home/jenkins/minikube-integration/21647-6010/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21647-6010/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
I1018 12:26:39.842995 54024 main.go:141] libmachine: (kindnet-720125) DBG |
I1018 12:26:39.843018 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:39.842834 54053 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21647-6010/.minikube
I1018 12:26:40.099390 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:40.099247 54053 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125/id_rsa...
I1018 12:26:40.381985 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:40.381830 54053 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125/kindnet-720125.rawdisk...
I1018 12:26:40.382025 54024 main.go:141] libmachine: (kindnet-720125) DBG | Writing magic tar header
I1018 12:26:40.382039 54024 main.go:141] libmachine: (kindnet-720125) DBG | Writing SSH key tar header
I1018 12:26:40.382049 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:40.381994 54053 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125 ...
I1018 12:26:40.382145 54024 main.go:141] libmachine: (kindnet-720125) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125
I1018 12:26:40.382185 54024 main.go:141] libmachine: (kindnet-720125) setting executable bit set on /home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125 (perms=drwx------)
I1018 12:26:40.382204 54024 main.go:141] libmachine: (kindnet-720125) setting executable bit set on /home/jenkins/minikube-integration/21647-6010/.minikube/machines (perms=drwxr-xr-x)
I1018 12:26:40.382225 54024 main.go:141] libmachine: (kindnet-720125) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21647-6010/.minikube/machines
I1018 12:26:40.382245 54024 main.go:141] libmachine: (kindnet-720125) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21647-6010/.minikube
I1018 12:26:40.382257 54024 main.go:141] libmachine: (kindnet-720125) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21647-6010
I1018 12:26:40.382268 54024 main.go:141] libmachine: (kindnet-720125) DBG | checking permissions on dir: /home/jenkins/minikube-integration
I1018 12:26:40.382278 54024 main.go:141] libmachine: (kindnet-720125) DBG | checking permissions on dir: /home/jenkins
I1018 12:26:40.382302 54024 main.go:141] libmachine: (kindnet-720125) setting executable bit set on /home/jenkins/minikube-integration/21647-6010/.minikube (perms=drwxr-xr-x)
I1018 12:26:40.382314 54024 main.go:141] libmachine: (kindnet-720125) setting executable bit set on /home/jenkins/minikube-integration/21647-6010 (perms=drwxrwxr-x)
I1018 12:26:40.382334 54024 main.go:141] libmachine: (kindnet-720125) DBG | checking permissions on dir: /home
I1018 12:26:40.382345 54024 main.go:141] libmachine: (kindnet-720125) DBG | skipping /home - not owner
I1018 12:26:40.382356 54024 main.go:141] libmachine: (kindnet-720125) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1018 12:26:40.382367 54024 main.go:141] libmachine: (kindnet-720125) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1018 12:26:40.382376 54024 main.go:141] libmachine: (kindnet-720125) defining domain...
I1018 12:26:40.383798 54024 main.go:141] libmachine: (kindnet-720125) defining domain using XML:
I1018 12:26:40.383831 54024 main.go:141] libmachine: (kindnet-720125) <domain type='kvm'>
I1018 12:26:40.383842 54024 main.go:141] libmachine: (kindnet-720125) <name>kindnet-720125</name>
I1018 12:26:40.383853 54024 main.go:141] libmachine: (kindnet-720125) <memory unit='MiB'>3072</memory>
I1018 12:26:40.383858 54024 main.go:141] libmachine: (kindnet-720125) <vcpu>2</vcpu>
I1018 12:26:40.383862 54024 main.go:141] libmachine: (kindnet-720125) <features>
I1018 12:26:40.383867 54024 main.go:141] libmachine: (kindnet-720125) <acpi/>
I1018 12:26:40.383875 54024 main.go:141] libmachine: (kindnet-720125) <apic/>
I1018 12:26:40.383882 54024 main.go:141] libmachine: (kindnet-720125) <pae/>
I1018 12:26:40.383886 54024 main.go:141] libmachine: (kindnet-720125) </features>
I1018 12:26:40.383891 54024 main.go:141] libmachine: (kindnet-720125) <cpu mode='host-passthrough'>
I1018 12:26:40.383898 54024 main.go:141] libmachine: (kindnet-720125) </cpu>
I1018 12:26:40.383905 54024 main.go:141] libmachine: (kindnet-720125) <os>
I1018 12:26:40.383916 54024 main.go:141] libmachine: (kindnet-720125) <type>hvm</type>
I1018 12:26:40.383924 54024 main.go:141] libmachine: (kindnet-720125) <boot dev='cdrom'/>
I1018 12:26:40.383934 54024 main.go:141] libmachine: (kindnet-720125) <boot dev='hd'/>
I1018 12:26:40.383944 54024 main.go:141] libmachine: (kindnet-720125) <bootmenu enable='no'/>
I1018 12:26:40.383948 54024 main.go:141] libmachine: (kindnet-720125) </os>
I1018 12:26:40.383953 54024 main.go:141] libmachine: (kindnet-720125) <devices>
I1018 12:26:40.383957 54024 main.go:141] libmachine: (kindnet-720125) <disk type='file' device='cdrom'>
I1018 12:26:40.383997 54024 main.go:141] libmachine: (kindnet-720125) <source file='/home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125/boot2docker.iso'/>
I1018 12:26:40.384023 54024 main.go:141] libmachine: (kindnet-720125) <target dev='hdc' bus='scsi'/>
I1018 12:26:40.384037 54024 main.go:141] libmachine: (kindnet-720125) <readonly/>
I1018 12:26:40.384051 54024 main.go:141] libmachine: (kindnet-720125) </disk>
I1018 12:26:40.384065 54024 main.go:141] libmachine: (kindnet-720125) <disk type='file' device='disk'>
I1018 12:26:40.384079 54024 main.go:141] libmachine: (kindnet-720125) <driver name='qemu' type='raw' cache='default' io='threads' />
I1018 12:26:40.384096 54024 main.go:141] libmachine: (kindnet-720125) <source file='/home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125/kindnet-720125.rawdisk'/>
I1018 12:26:40.384108 54024 main.go:141] libmachine: (kindnet-720125) <target dev='hda' bus='virtio'/>
I1018 12:26:40.384119 54024 main.go:141] libmachine: (kindnet-720125) </disk>
I1018 12:26:40.384133 54024 main.go:141] libmachine: (kindnet-720125) <interface type='network'>
I1018 12:26:40.384146 54024 main.go:141] libmachine: (kindnet-720125) <source network='mk-kindnet-720125'/>
I1018 12:26:40.384157 54024 main.go:141] libmachine: (kindnet-720125) <model type='virtio'/>
I1018 12:26:40.384168 54024 main.go:141] libmachine: (kindnet-720125) </interface>
I1018 12:26:40.384179 54024 main.go:141] libmachine: (kindnet-720125) <interface type='network'>
I1018 12:26:40.384192 54024 main.go:141] libmachine: (kindnet-720125) <source network='default'/>
I1018 12:26:40.384202 54024 main.go:141] libmachine: (kindnet-720125) <model type='virtio'/>
I1018 12:26:40.384216 54024 main.go:141] libmachine: (kindnet-720125) </interface>
I1018 12:26:40.384230 54024 main.go:141] libmachine: (kindnet-720125) <serial type='pty'>
I1018 12:26:40.384236 54024 main.go:141] libmachine: (kindnet-720125) <target port='0'/>
I1018 12:26:40.384245 54024 main.go:141] libmachine: (kindnet-720125) </serial>
I1018 12:26:40.384254 54024 main.go:141] libmachine: (kindnet-720125) <console type='pty'>
I1018 12:26:40.384266 54024 main.go:141] libmachine: (kindnet-720125) <target type='serial' port='0'/>
I1018 12:26:40.384277 54024 main.go:141] libmachine: (kindnet-720125) </console>
I1018 12:26:40.384304 54024 main.go:141] libmachine: (kindnet-720125) <rng model='virtio'>
I1018 12:26:40.384323 54024 main.go:141] libmachine: (kindnet-720125) <backend model='random'>/dev/random</backend>
I1018 12:26:40.384332 54024 main.go:141] libmachine: (kindnet-720125) </rng>
I1018 12:26:40.384340 54024 main.go:141] libmachine: (kindnet-720125) </devices>
I1018 12:26:40.384354 54024 main.go:141] libmachine: (kindnet-720125) </domain>
I1018 12:26:40.384364 54024 main.go:141] libmachine: (kindnet-720125)
I1018 12:26:40.388970 54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:3f:a0:78 in network default
I1018 12:26:40.389652 54024 main.go:141] libmachine: (kindnet-720125) starting domain...
I1018 12:26:40.389680 54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
I1018 12:26:40.389688 54024 main.go:141] libmachine: (kindnet-720125) ensuring networks are active...
I1018 12:26:40.390420 54024 main.go:141] libmachine: (kindnet-720125) Ensuring network default is active
I1018 12:26:40.390825 54024 main.go:141] libmachine: (kindnet-720125) Ensuring network mk-kindnet-720125 is active
I1018 12:26:40.391737 54024 main.go:141] libmachine: (kindnet-720125) getting domain XML...
I1018 12:26:40.393514 54024 main.go:141] libmachine: (kindnet-720125) DBG | starting domain XML:
I1018 12:26:40.393530 54024 main.go:141] libmachine: (kindnet-720125) DBG | <domain type='kvm'>
I1018 12:26:40.393539 54024 main.go:141] libmachine: (kindnet-720125) DBG | <name>kindnet-720125</name>
I1018 12:26:40.393548 54024 main.go:141] libmachine: (kindnet-720125) DBG | <uuid>d3c666c7-5967-40a8-9b36-6cfb4dcc1fb1</uuid>
I1018 12:26:40.393556 54024 main.go:141] libmachine: (kindnet-720125) DBG | <memory unit='KiB'>3145728</memory>
I1018 12:26:40.393564 54024 main.go:141] libmachine: (kindnet-720125) DBG | <currentMemory unit='KiB'>3145728</currentMemory>
I1018 12:26:40.393573 54024 main.go:141] libmachine: (kindnet-720125) DBG | <vcpu placement='static'>2</vcpu>
I1018 12:26:40.393580 54024 main.go:141] libmachine: (kindnet-720125) DBG | <os>
I1018 12:26:40.393593 54024 main.go:141] libmachine: (kindnet-720125) DBG | <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
I1018 12:26:40.393629 54024 main.go:141] libmachine: (kindnet-720125) DBG | <boot dev='cdrom'/>
I1018 12:26:40.393654 54024 main.go:141] libmachine: (kindnet-720125) DBG | <boot dev='hd'/>
I1018 12:26:40.393666 54024 main.go:141] libmachine: (kindnet-720125) DBG | <bootmenu enable='no'/>
I1018 12:26:40.393675 54024 main.go:141] libmachine: (kindnet-720125) DBG | </os>
I1018 12:26:40.393682 54024 main.go:141] libmachine: (kindnet-720125) DBG | <features>
I1018 12:26:40.393690 54024 main.go:141] libmachine: (kindnet-720125) DBG | <acpi/>
I1018 12:26:40.393698 54024 main.go:141] libmachine: (kindnet-720125) DBG | <apic/>
I1018 12:26:40.393707 54024 main.go:141] libmachine: (kindnet-720125) DBG | <pae/>
I1018 12:26:40.393717 54024 main.go:141] libmachine: (kindnet-720125) DBG | </features>
I1018 12:26:40.393726 54024 main.go:141] libmachine: (kindnet-720125) DBG | <cpu mode='host-passthrough' check='none' migratable='on'/>
I1018 12:26:40.393736 54024 main.go:141] libmachine: (kindnet-720125) DBG | <clock offset='utc'/>
I1018 12:26:40.393745 54024 main.go:141] libmachine: (kindnet-720125) DBG | <on_poweroff>destroy</on_poweroff>
I1018 12:26:40.393755 54024 main.go:141] libmachine: (kindnet-720125) DBG | <on_reboot>restart</on_reboot>
I1018 12:26:40.393764 54024 main.go:141] libmachine: (kindnet-720125) DBG | <on_crash>destroy</on_crash>
I1018 12:26:40.393774 54024 main.go:141] libmachine: (kindnet-720125) DBG | <devices>
I1018 12:26:40.393805 54024 main.go:141] libmachine: (kindnet-720125) DBG | <emulator>/usr/bin/qemu-system-x86_64</emulator>
I1018 12:26:40.393828 54024 main.go:141] libmachine: (kindnet-720125) DBG | <disk type='file' device='cdrom'>
I1018 12:26:40.393841 54024 main.go:141] libmachine: (kindnet-720125) DBG | <driver name='qemu' type='raw'/>
I1018 12:26:40.393857 54024 main.go:141] libmachine: (kindnet-720125) DBG | <source file='/home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125/boot2docker.iso'/>
I1018 12:26:40.393871 54024 main.go:141] libmachine: (kindnet-720125) DBG | <target dev='hdc' bus='scsi'/>
I1018 12:26:40.393896 54024 main.go:141] libmachine: (kindnet-720125) DBG | <readonly/>
I1018 12:26:40.393912 54024 main.go:141] libmachine: (kindnet-720125) DBG | <address type='drive' controller='0' bus='0' target='0' unit='2'/>
I1018 12:26:40.393927 54024 main.go:141] libmachine: (kindnet-720125) DBG | </disk>
I1018 12:26:40.393940 54024 main.go:141] libmachine: (kindnet-720125) DBG | <disk type='file' device='disk'>
I1018 12:26:40.393952 54024 main.go:141] libmachine: (kindnet-720125) DBG | <driver name='qemu' type='raw' io='threads'/>
I1018 12:26:40.393965 54024 main.go:141] libmachine: (kindnet-720125) DBG | <source file='/home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125/kindnet-720125.rawdisk'/>
I1018 12:26:40.393971 54024 main.go:141] libmachine: (kindnet-720125) DBG | <target dev='hda' bus='virtio'/>
I1018 12:26:40.393982 54024 main.go:141] libmachine: (kindnet-720125) DBG | <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
I1018 12:26:40.393987 54024 main.go:141] libmachine: (kindnet-720125) DBG | </disk>
I1018 12:26:40.393996 54024 main.go:141] libmachine: (kindnet-720125) DBG | <controller type='usb' index='0' model='piix3-uhci'>
I1018 12:26:40.394012 54024 main.go:141] libmachine: (kindnet-720125) DBG | <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
I1018 12:26:40.394022 54024 main.go:141] libmachine: (kindnet-720125) DBG | </controller>
I1018 12:26:40.394034 54024 main.go:141] libmachine: (kindnet-720125) DBG | <controller type='pci' index='0' model='pci-root'/>
I1018 12:26:40.394049 54024 main.go:141] libmachine: (kindnet-720125) DBG | <controller type='scsi' index='0' model='lsilogic'>
I1018 12:26:40.394062 54024 main.go:141] libmachine: (kindnet-720125) DBG | <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
I1018 12:26:40.394074 54024 main.go:141] libmachine: (kindnet-720125) DBG | </controller>
I1018 12:26:40.394090 54024 main.go:141] libmachine: (kindnet-720125) DBG | <interface type='network'>
I1018 12:26:40.394101 54024 main.go:141] libmachine: (kindnet-720125) DBG | <mac address='52:54:00:0e:b7:f4'/>
I1018 12:26:40.394112 54024 main.go:141] libmachine: (kindnet-720125) DBG | <source network='mk-kindnet-720125'/>
I1018 12:26:40.394129 54024 main.go:141] libmachine: (kindnet-720125) DBG | <model type='virtio'/>
I1018 12:26:40.394144 54024 main.go:141] libmachine: (kindnet-720125) DBG | <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
I1018 12:26:40.394159 54024 main.go:141] libmachine: (kindnet-720125) DBG | </interface>
I1018 12:26:40.394175 54024 main.go:141] libmachine: (kindnet-720125) DBG | <interface type='network'>
I1018 12:26:40.394193 54024 main.go:141] libmachine: (kindnet-720125) DBG | <mac address='52:54:00:3f:a0:78'/>
I1018 12:26:40.394204 54024 main.go:141] libmachine: (kindnet-720125) DBG | <source network='default'/>
I1018 12:26:40.394215 54024 main.go:141] libmachine: (kindnet-720125) DBG | <model type='virtio'/>
I1018 12:26:40.394226 54024 main.go:141] libmachine: (kindnet-720125) DBG | <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
I1018 12:26:40.394235 54024 main.go:141] libmachine: (kindnet-720125) DBG | </interface>
I1018 12:26:40.394244 54024 main.go:141] libmachine: (kindnet-720125) DBG | <serial type='pty'>
I1018 12:26:40.394254 54024 main.go:141] libmachine: (kindnet-720125) DBG | <target type='isa-serial' port='0'>
I1018 12:26:40.394281 54024 main.go:141] libmachine: (kindnet-720125) DBG | <model name='isa-serial'/>
I1018 12:26:40.394319 54024 main.go:141] libmachine: (kindnet-720125) DBG | </target>
I1018 12:26:40.394338 54024 main.go:141] libmachine: (kindnet-720125) DBG | </serial>
I1018 12:26:40.394356 54024 main.go:141] libmachine: (kindnet-720125) DBG | <console type='pty'>
I1018 12:26:40.394370 54024 main.go:141] libmachine: (kindnet-720125) DBG | <target type='serial' port='0'/>
I1018 12:26:40.394380 54024 main.go:141] libmachine: (kindnet-720125) DBG | </console>
I1018 12:26:40.394393 54024 main.go:141] libmachine: (kindnet-720125) DBG | <input type='mouse' bus='ps2'/>
I1018 12:26:40.394402 54024 main.go:141] libmachine: (kindnet-720125) DBG | <input type='keyboard' bus='ps2'/>
I1018 12:26:40.394415 54024 main.go:141] libmachine: (kindnet-720125) DBG | <audio id='1' type='none'/>
I1018 12:26:40.394423 54024 main.go:141] libmachine: (kindnet-720125) DBG | <memballoon model='virtio'>
I1018 12:26:40.394443 54024 main.go:141] libmachine: (kindnet-720125) DBG | <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
I1018 12:26:40.394459 54024 main.go:141] libmachine: (kindnet-720125) DBG | </memballoon>
I1018 12:26:40.394470 54024 main.go:141] libmachine: (kindnet-720125) DBG | <rng model='virtio'>
I1018 12:26:40.394482 54024 main.go:141] libmachine: (kindnet-720125) DBG | <backend model='random'>/dev/random</backend>
I1018 12:26:40.394496 54024 main.go:141] libmachine: (kindnet-720125) DBG | <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
I1018 12:26:40.394505 54024 main.go:141] libmachine: (kindnet-720125) DBG | </rng>
I1018 12:26:40.394513 54024 main.go:141] libmachine: (kindnet-720125) DBG | </devices>
I1018 12:26:40.394522 54024 main.go:141] libmachine: (kindnet-720125) DBG | </domain>
I1018 12:26:40.394542 54024 main.go:141] libmachine: (kindnet-720125) DBG |
I1018 12:26:41.782659 54024 main.go:141] libmachine: (kindnet-720125) waiting for domain to start...
I1018 12:26:41.784057 54024 main.go:141] libmachine: (kindnet-720125) domain is now running
I1018 12:26:41.784080 54024 main.go:141] libmachine: (kindnet-720125) waiting for IP...
I1018 12:26:41.784831 54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
I1018 12:26:41.785431 54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
I1018 12:26:41.785459 54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
I1018 12:26:41.785812 54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
I1018 12:26:41.785887 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:41.785810 54053 retry.go:31] will retry after 204.388807ms: waiting for domain to come up
I1018 12:26:41.992592 54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
I1018 12:26:41.993377 54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
I1018 12:26:41.993404 54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
I1018 12:26:41.993817 54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
I1018 12:26:41.993887 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:41.993817 54053 retry.go:31] will retry after 374.842513ms: waiting for domain to come up
I1018 12:26:42.370189 54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
I1018 12:26:42.370750 54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
I1018 12:26:42.370778 54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
I1018 12:26:42.371199 54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
I1018 12:26:42.371231 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:42.371171 54053 retry.go:31] will retry after 382.206082ms: waiting for domain to come up
I1018 12:26:42.755732 54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
I1018 12:26:42.756456 54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
I1018 12:26:42.756481 54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
I1018 12:26:42.756848 54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
I1018 12:26:42.756877 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:42.756832 54053 retry.go:31] will retry after 434.513358ms: waiting for domain to come up
I1018 12:26:43.192495 54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
I1018 12:26:43.193112 54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
I1018 12:26:43.193137 54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
I1018 12:26:43.193557 54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
I1018 12:26:43.193584 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:43.193492 54053 retry.go:31] will retry after 622.396959ms: waiting for domain to come up
I1018 12:26:43.818233 54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
I1018 12:26:43.819067 54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
I1018 12:26:43.819104 54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
I1018 12:26:43.819584 54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
I1018 12:26:43.819616 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:43.819536 54053 retry.go:31] will retry after 815.894877ms: waiting for domain to come up
I1018 12:26:44.636575 54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
I1018 12:26:44.637323 54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
I1018 12:26:44.637353 54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
I1018 12:26:44.637721 54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
I1018 12:26:44.637759 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:44.637705 54053 retry.go:31] will retry after 1.067259778s: waiting for domain to come up
I1018 12:26:43.775588 52813 out.go:252] - Booting up control plane ...
I1018 12:26:43.775698 52813 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1018 12:26:43.775800 52813 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1018 12:26:43.777341 52813 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1018 12:26:43.800502 52813 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1018 12:26:43.800688 52813 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1018 12:26:43.808677 52813 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1018 12:26:43.808867 52813 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1018 12:26:43.809016 52813 kubeadm.go:318] [kubelet-start] Starting the kubelet
I1018 12:26:43.996155 52813 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1018 12:26:43.996352 52813 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1018 12:26:44.997230 52813 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001669295s
I1018 12:26:45.000531 52813 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1018 12:26:45.000667 52813 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.72.13:8443/livez
I1018 12:26:45.000814 52813 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1018 12:26:45.000947 52813 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1018 12:26:44.439803 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:44.440530 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:44.940153 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:44.940832 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:45.439761 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:45.440519 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:45.940122 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:45.940844 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:46.439543 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:46.440225 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:46.939926 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:46.940690 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:47.440072 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:47.440765 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:47.940122 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:47.940902 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:48.440476 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:48.441175 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:48.940453 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:48.941104 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
==> Docker <==
Oct 18 12:25:51 default-k8s-diff-port-948988 cri-dockerd[1540]: time="2025-10-18T12:25:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2bf7782642e4711b15be6d3ec08d29a271276dc02c8b8205befe59a7505897ae/resolv.conf as [nameserver 192.168.122.1]"
Oct 18 12:25:53 default-k8s-diff-port-948988 cri-dockerd[1540]: time="2025-10-18T12:25:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/307c8c80145ed27dca61950ef5cf63b804994215fc5f4759617dd3e150ef2cfa/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
Oct 18 12:25:53 default-k8s-diff-port-948988 cri-dockerd[1540]: time="2025-10-18T12:25:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/22320121e1a756b48dc7f5c15a1a3cb7252ccd513e0ab07d47c606f58c53f0f0/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
Oct 18 12:25:54 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:25:54.120729117Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Oct 18 12:25:54 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:25:54.212112555Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
Oct 18 12:25:54 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:25:54.212342190Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
Oct 18 12:25:54 default-k8s-diff-port-948988 cri-dockerd[1540]: time="2025-10-18T12:25:54Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
Oct 18 12:25:54 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:25:54.421865126Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Oct 18 12:26:02 default-k8s-diff-port-948988 cri-dockerd[1540]: time="2025-10-18T12:26:02Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Oct 18 12:26:06 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:06.830994794Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Oct 18 12:26:06 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:06.904996286Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
Oct 18 12:26:06 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:06.905088942Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
Oct 18 12:26:06 default-k8s-diff-port-948988 cri-dockerd[1540]: time="2025-10-18T12:26:06Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
Oct 18 12:26:06 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:06.919653355Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Oct 18 12:26:06 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:06.919692389Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Oct 18 12:26:06 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:06.923070969Z" level=error msg="unexpected HTTP error handling" error="<nil>"
Oct 18 12:26:06 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:06.924597650Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Oct 18 12:26:14 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:14.766195371Z" level=info msg="ignoring event" container=28ffefdfcaefaa0dcc5a6077bf470cdb9475d6e21b7a7d96be86de74a8777734 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 18 12:26:48 default-k8s-diff-port-948988 cri-dockerd[1540]: time="2025-10-18T12:26:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-jc7tz_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"50ccc6bf5c1dc8dbc44839aac4aaf80b91e88cfa36a35e71c99ecbc99a5d2efb\""
Oct 18 12:26:48 default-k8s-diff-port-948988 cri-dockerd[1540]: time="2025-10-18T12:26:48Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
Oct 18 12:26:49 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:49.579823134Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Oct 18 12:26:49 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:49.579851904Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Oct 18 12:26:49 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:49.584080633Z" level=error msg="unexpected HTTP error handling" error="<nil>"
Oct 18 12:26:49 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:49.584132115Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Oct 18 12:26:49 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:49.670933568Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
3a2c1a468e77b kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 48 seconds ago Running kubernetes-dashboard 0 22320121e1a75 kubernetes-dashboard-855c9754f9-8frzf
14a606bd02ea2 52546a367cc9e 58 seconds ago Running coredns 1 2bf7782642e47 coredns-66bc5c9577-s7znr
3181063a95749 56cc512116c8f 58 seconds ago Running busybox 1 f01a1904eab6f busybox
28ffefdfcaefa 6e38f40d628db About a minute ago Exited storage-provisioner 1 002d263a57e06 storage-provisioner
e74b601e6b20b fc25172553d79 About a minute ago Running kube-proxy 1 5916362f7151c kube-proxy-hmf6q
aa45133c5292e 7dd6aaa1717ab About a minute ago Running kube-scheduler 1 c386eff006256 kube-scheduler-default-k8s-diff-port-948988
0d33563cfd415 5f1f5298c888d About a minute ago Running etcd 1 aa5a738a016e1 etcd-default-k8s-diff-port-948988
482f645840fbd c3994bc696102 About a minute ago Running kube-apiserver 1 6d80f3bf62181 kube-apiserver-default-k8s-diff-port-948988
cbcb65b91df5f c80c8dbafe7dd About a minute ago Running kube-controller-manager 1 9b74e777c1d81 kube-controller-manager-default-k8s-diff-port-948988
06b0d6a0fe73a gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e About a minute ago Exited busybox 0 02768f34f11ea busybox
bf61d222c7e61 52546a367cc9e 2 minutes ago Exited coredns 0 4a9e23fe5352b coredns-66bc5c9577-s7znr
72d0dd1b3e6d1 fc25172553d79 2 minutes ago Exited kube-proxy 0 3b1b31ff39772 kube-proxy-hmf6q
ac171ed99aa7b 7dd6aaa1717ab 2 minutes ago Exited kube-scheduler 0 27f94a06346ec kube-scheduler-default-k8s-diff-port-948988
07dc691cd2b41 c80c8dbafe7dd 2 minutes ago Exited kube-controller-manager 0 7c2c9ab301ac9 kube-controller-manager-default-k8s-diff-port-948988
5a3d271b1a7a4 5f1f5298c888d 2 minutes ago Exited etcd 0 7776a7d62b3b1 etcd-default-k8s-diff-port-948988
5dfc625534d2e c3994bc696102 2 minutes ago Exited kube-apiserver 0 20ac876b72a06 kube-apiserver-default-k8s-diff-port-948988
==> coredns [14a606bd02ea] <==
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 8b8641eae0af5337389aa76a78f71d2e2a7bd54cc199277be5abe199aebbfd3c9e156259680c91eb397a4c282437fd35af249d42857043b32bf3beb690ad2f54
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:47328 - 15007 "HINFO IN 5766678739025722613.5866360335637854453. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.103273346s
==> coredns [bf61d222c7e6] <==
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 8b8641eae0af5337389aa76a78f71d2e2a7bd54cc199277be5abe199aebbfd3c9e156259680c91eb397a4c282437fd35af249d42857043b32bf3beb690ad2f54
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:48576 - 64076 "HINFO IN 6932009071857870960.7176900972779109838. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.13763s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> describe nodes <==
Name: default-k8s-diff-port-948988
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=default-k8s-diff-port-948988
kubernetes.io/os=linux
minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
minikube.k8s.io/name=default-k8s-diff-port-948988
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_10_18T12_24_33_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 18 Oct 2025 12:24:29 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: default-k8s-diff-port-948988
AcquireTime: <unset>
RenewTime: Sat, 18 Oct 2025 12:26:48 +0000
Conditions:
Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
----             ------  -----------------                 ------------------                ------                      -------
MemoryPressure   False   Sat, 18 Oct 2025 12:26:48 +0000   Sat, 18 Oct 2025 12:24:26 +0000   KubeletHasSufficientMemory  kubelet has sufficient memory available
DiskPressure     False   Sat, 18 Oct 2025 12:26:48 +0000   Sat, 18 Oct 2025 12:24:26 +0000   KubeletHasNoDiskPressure    kubelet has no disk pressure
PIDPressure      False   Sat, 18 Oct 2025 12:26:48 +0000   Sat, 18 Oct 2025 12:24:26 +0000   KubeletHasSufficientPID     kubelet has sufficient PID available
Ready            True    Sat, 18 Oct 2025 12:26:48 +0000   Sat, 18 Oct 2025 12:25:53 +0000   KubeletReady                kubelet is posting ready status
Addresses:
InternalIP: 192.168.61.154
Hostname: default-k8s-diff-port-948988
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3042712Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3042712Ki
pods: 110
System Info:
Machine ID: d7b095482f0f4bd294376564492aae84
System UUID: d7b09548-2f0f-4bd2-9437-6564492aae84
Boot ID: 5dbb338e-d666-4176-8009-ddf389982046
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://28.5.1
Kubelet Version: v1.34.1
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (11 in total)
Namespace              Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
---------              ----                                                   ------------  ----------  ---------------  -------------  ---
default                busybox                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
kube-system            coredns-66bc5c9577-s7znr                               100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     2m11s
kube-system            etcd-default-k8s-diff-port-948988                      100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m19s
kube-system            kube-apiserver-default-k8s-diff-port-948988            250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m19s
kube-system            kube-controller-manager-default-k8s-diff-port-948988   200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m19s
kube-system            kube-proxy-hmf6q                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
kube-system            kube-scheduler-default-k8s-diff-port-948988            100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m20s
kube-system            metrics-server-746fcd58dc-7788d                        100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         112s
kube-system            storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
kubernetes-dashboard   dashboard-metrics-scraper-6ffb444bf9-gxs6s             0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
kubernetes-dashboard   kubernetes-dashboard-855c9754f9-8frzf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests     Limits
--------           --------     ------
cpu                850m (42%)   0 (0%)
memory             370Mi (12%)  170Mi (5%)
ephemeral-storage  0 (0%)       0 (0%)
hugepages-2Mi      0 (0%)       0 (0%)
Events:
Type     Reason                   Age                    From             Message
----     ------                   ----                   ----             -------
Normal   Starting                 2m9s                   kube-proxy
Normal   Starting                 64s                    kube-proxy
Normal   Starting                 2m27s                  kubelet          Starting kubelet.
Normal   NodeHasSufficientMemory  2m26s (x8 over 2m26s)  kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasSufficientMemory
Normal   NodeHasNoDiskPressure    2m26s (x8 over 2m26s)  kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasNoDiskPressure
Normal   NodeHasSufficientPID     2m26s (x7 over 2m26s)  kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasSufficientPID
Normal   NodeAllocatableEnforced  2m26s                  kubelet          Updated Node Allocatable limit across pods
Normal   Starting                 2m19s                  kubelet          Starting kubelet.
Normal   NodeAllocatableEnforced  2m19s                  kubelet          Updated Node Allocatable limit across pods
Normal   NodeHasSufficientMemory  2m19s                  kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasSufficientMemory
Normal   NodeHasNoDiskPressure    2m19s                  kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasNoDiskPressure
Normal   NodeHasSufficientPID     2m19s                  kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasSufficientPID
Normal   NodeReady                2m15s                  kubelet          Node default-k8s-diff-port-948988 status is now: NodeReady
Normal   RegisteredNode           2m14s                  node-controller  Node default-k8s-diff-port-948988 event: Registered Node default-k8s-diff-port-948988 in Controller
Normal   Starting                 73s                    kubelet          Starting kubelet.
Normal   NodeHasSufficientMemory  73s (x8 over 73s)      kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasSufficientMemory
Normal   NodeHasNoDiskPressure    73s (x8 over 73s)      kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasNoDiskPressure
Normal   NodeHasSufficientPID     73s (x7 over 73s)      kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasSufficientPID
Normal   NodeAllocatableEnforced  73s                    kubelet          Updated Node Allocatable limit across pods
Warning  Rebooted                 69s                    kubelet          Node default-k8s-diff-port-948988 has been rebooted, boot id: 5dbb338e-d666-4176-8009-ddf389982046
Normal   RegisteredNode           65s                    node-controller  Node default-k8s-diff-port-948988 event: Registered Node default-k8s-diff-port-948988 in Controller
Normal   Starting                 3s                     kubelet          Starting kubelet.
Normal   NodeAllocatableEnforced  3s                     kubelet          Updated Node Allocatable limit across pods
Normal   NodeHasSufficientMemory  3s                     kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasSufficientMemory
Normal   NodeHasNoDiskPressure    3s                     kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasNoDiskPressure
Normal   NodeHasSufficientPID     3s                     kubelet          Node default-k8s-diff-port-948988 status is now: NodeHasSufficientPID
==> dmesg <==
[Oct18 12:25] Booted with the nomodeset parameter. Only the system framebuffer will be available
[ +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
[ +0.001590] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +0.004075] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
[ +0.931702] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
[ +0.130272] kauditd_printk_skb: 1 callbacks suppressed
[ +0.102368] kauditd_printk_skb: 449 callbacks suppressed
[ +5.669077] kauditd_printk_skb: 165 callbacks suppressed
[ +5.952206] kauditd_printk_skb: 134 callbacks suppressed
[ +2.969146] kauditd_printk_skb: 264 callbacks suppressed
[Oct18 12:26] kauditd_printk_skb: 11 callbacks suppressed
[ +0.224441] kauditd_printk_skb: 35 callbacks suppressed
==> etcd [0d33563cfd41] <==
{"level":"info","ts":"2025-10-18T12:26:50.186827Z","caller":"traceutil/trace.go:172","msg":"trace[1372174769] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:752; }","duration":"399.841982ms","start":"2025-10-18T12:26:49.786974Z","end":"2025-10-18T12:26:50.186816Z","steps":["trace[1372174769] 'range keys from in-memory index tree' (duration: 399.699339ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-18T12:26:50.186874Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:26:49.786955Z","time spent":"399.895498ms","remote":"127.0.0.1:58530","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
{"level":"info","ts":"2025-10-18T12:26:50.333810Z","caller":"traceutil/trace.go:172","msg":"trace[111824645] linearizableReadLoop","detail":"{readStateIndex:805; appliedIndex:805; }","duration":"469.70081ms","start":"2025-10-18T12:26:49.864083Z","end":"2025-10-18T12:26:50.333784Z","steps":["trace[111824645] 'read index received' (duration: 469.662848ms)","trace[111824645] 'applied index is now lower than readState.Index' (duration: 36.562µs)"],"step_count":2}
{"level":"warn","ts":"2025-10-18T12:26:50.333966Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"469.888536ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-10-18T12:26:50.334000Z","caller":"traceutil/trace.go:172","msg":"trace[512175939] range","detail":"{range_begin:/registry/flowschemas; range_end:; response_count:0; response_revision:752; }","duration":"469.93891ms","start":"2025-10-18T12:26:49.864053Z","end":"2025-10-18T12:26:50.333992Z","steps":["trace[512175939] 'agreement among raft nodes before linearized reading' (duration: 469.85272ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-18T12:26:50.334133Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:26:49.864029Z","time spent":"469.995ms","remote":"127.0.0.1:59436","response type":"/etcdserverpb.KV/Range","request count":0,"request size":25,"response count":0,"response size":27,"request content":"key:\"/registry/flowschemas\" limit:1 "}
{"level":"info","ts":"2025-10-18T12:26:50.334869Z","caller":"traceutil/trace.go:172","msg":"trace[1055338688] transaction","detail":"{read_only:false; number_of_response:0; response_revision:752; }","duration":"495.901712ms","start":"2025-10-18T12:26:49.838955Z","end":"2025-10-18T12:26:50.334857Z","steps":["trace[1055338688] 'process raft request' (duration: 495.716875ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-18T12:26:50.335648Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:26:49.838929Z","time spent":"495.989792ms","remote":"127.0.0.1:58854","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":27,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-948988\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-948988\" value_size:3336 >> failure:<>"}
{"level":"info","ts":"2025-10-18T12:26:50.443549Z","caller":"traceutil/trace.go:172","msg":"trace[381001447] linearizableReadLoop","detail":"{readStateIndex:806; appliedIndex:806; }","duration":"109.522762ms","start":"2025-10-18T12:26:50.333879Z","end":"2025-10-18T12:26:50.443401Z","steps":["trace[381001447] 'read index received' (duration: 109.304835ms)","trace[381001447] 'applied index is now lower than readState.Index' (duration: 216.349µs)"],"step_count":2}
{"level":"warn","ts":"2025-10-18T12:26:50.443898Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"254.661283ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-10-18T12:26:50.444087Z","caller":"traceutil/trace.go:172","msg":"trace[269629089] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:752; }","duration":"254.861648ms","start":"2025-10-18T12:26:50.189213Z","end":"2025-10-18T12:26:50.444075Z","steps":["trace[269629089] 'agreement among raft nodes before linearized reading' (duration: 254.569015ms)"],"step_count":1}
{"level":"info","ts":"2025-10-18T12:26:50.444986Z","caller":"traceutil/trace.go:172","msg":"trace[1424081342] transaction","detail":"{read_only:false; number_of_response:0; response_revision:752; }","duration":"604.238859ms","start":"2025-10-18T12:26:49.840736Z","end":"2025-10-18T12:26:50.444975Z","steps":["trace[1424081342] 'process raft request' (duration: 603.242308ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-18T12:26:50.445058Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"481.542092ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"warn","ts":"2025-10-18T12:26:50.445075Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:26:49.840723Z","time spent":"604.304586ms","remote":"127.0.0.1:58854","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":27,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-948988\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-948988\" value_size:5080 >> failure:<>"}
{"level":"info","ts":"2025-10-18T12:26:50.445122Z","caller":"traceutil/trace.go:172","msg":"trace[399968637] range","detail":"{range_begin:/registry/certificatesigningrequests; range_end:; response_count:0; response_revision:752; }","duration":"481.574042ms","start":"2025-10-18T12:26:49.963502Z","end":"2025-10-18T12:26:50.445076Z","steps":["trace[399968637] 'agreement among raft nodes before linearized reading' (duration: 481.324719ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-18T12:26:50.445200Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:26:49.963483Z","time spent":"481.704642ms","remote":"127.0.0.1:58990","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":0,"response size":27,"request content":"key:\"/registry/certificatesigningrequests\" limit:1 "}
{"level":"info","ts":"2025-10-18T12:26:50.446712Z","caller":"traceutil/trace.go:172","msg":"trace[824860143] transaction","detail":"{read_only:false; number_of_response:0; response_revision:752; }","duration":"606.054697ms","start":"2025-10-18T12:26:49.840601Z","end":"2025-10-18T12:26:50.446656Z","steps":["trace[824860143] 'process raft request' (duration: 603.007187ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-18T12:26:50.446779Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:26:49.840584Z","time spent":"606.160126ms","remote":"127.0.0.1:58854","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":27,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-948988\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-948988\" value_size:5531 >> failure:<>"}
{"level":"info","ts":"2025-10-18T12:26:50.446897Z","caller":"traceutil/trace.go:172","msg":"trace[1942397087] transaction","detail":"{read_only:false; number_of_response:0; response_revision:752; }","duration":"606.190325ms","start":"2025-10-18T12:26:49.840699Z","end":"2025-10-18T12:26:50.446890Z","steps":["trace[1942397087] 'process raft request' (duration: 603.239357ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-18T12:26:50.446935Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:26:49.840694Z","time spent":"606.222506ms","remote":"127.0.0.1:58854","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":27,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-948988\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-948988\" value_size:4413 >> failure:<>"}
{"level":"warn","ts":"2025-10-18T12:26:50.446998Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.548699ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-948988\" limit:1 ","response":"range_response_count:1 size:4976"}
{"level":"info","ts":"2025-10-18T12:26:50.447420Z","caller":"traceutil/trace.go:172","msg":"trace[673088281] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-948988; range_end:; response_count:1; response_revision:753; }","duration":"106.587183ms","start":"2025-10-18T12:26:50.340430Z","end":"2025-10-18T12:26:50.447017Z","steps":["trace[673088281] 'agreement among raft nodes before linearized reading' (duration: 106.46749ms)"],"step_count":1}
{"level":"info","ts":"2025-10-18T12:26:50.448436Z","caller":"traceutil/trace.go:172","msg":"trace[1596410668] transaction","detail":"{read_only:false; response_revision:753; number_of_response:1; }","duration":"250.464751ms","start":"2025-10-18T12:26:50.197959Z","end":"2025-10-18T12:26:50.448424Z","steps":["trace[1596410668] 'process raft request' (duration: 246.217803ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-18T12:26:50.448558Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.631999ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-10-18T12:26:50.448589Z","caller":"traceutil/trace.go:172","msg":"trace[1722869229] range","detail":"{range_begin:/registry/runtimeclasses; range_end:; response_count:0; response_revision:753; }","duration":"100.661173ms","start":"2025-10-18T12:26:50.347914Z","end":"2025-10-18T12:26:50.448575Z","steps":["trace[1722869229] 'agreement among raft nodes before linearized reading' (duration: 100.605021ms)"],"step_count":1}
==> etcd [5a3d271b1a7a] <==
{"level":"info","ts":"2025-10-18T12:24:40.137898Z","caller":"traceutil/trace.go:172","msg":"trace[1031995627] transaction","detail":"{read_only:false; response_revision:365; number_of_response:1; }","duration":"153.504515ms","start":"2025-10-18T12:24:39.984387Z","end":"2025-10-18T12:24:40.137891Z","steps":["trace[1031995627] 'process raft request' (duration: 106.790781ms)","trace[1031995627] 'compare' (duration: 46.286033ms)"],"step_count":2}
{"level":"info","ts":"2025-10-18T12:24:40.138807Z","caller":"traceutil/trace.go:172","msg":"trace[2073145057] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"154.722362ms","start":"2025-10-18T12:24:39.984073Z","end":"2025-10-18T12:24:40.138795Z","steps":["trace[2073145057] 'process raft request' (duration: 153.550593ms)"],"step_count":1}
{"level":"info","ts":"2025-10-18T12:24:40.138990Z","caller":"traceutil/trace.go:172","msg":"trace[460852249] transaction","detail":"{read_only:false; response_revision:367; number_of_response:1; }","duration":"147.204006ms","start":"2025-10-18T12:24:39.991724Z","end":"2025-10-18T12:24:40.138928Z","steps":["trace[460852249] 'process raft request' (duration: 145.946011ms)"],"step_count":1}
{"level":"info","ts":"2025-10-18T12:24:40.139208Z","caller":"traceutil/trace.go:172","msg":"trace[1691503075] transaction","detail":"{read_only:false; response_revision:368; number_of_response:1; }","duration":"130.816492ms","start":"2025-10-18T12:24:40.008382Z","end":"2025-10-18T12:24:40.139199Z","steps":["trace[1691503075] 'process raft request' (duration: 129.325269ms)"],"step_count":1}
{"level":"info","ts":"2025-10-18T12:24:40.144497Z","caller":"traceutil/trace.go:172","msg":"trace[842550493] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"135.72185ms","start":"2025-10-18T12:24:40.008758Z","end":"2025-10-18T12:24:40.144480Z","steps":["trace[842550493] 'process raft request' (duration: 128.981035ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-18T12:24:40.144822Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.354219ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" limit:1 ","response":"range_response_count:1 size:207"}
{"level":"info","ts":"2025-10-18T12:24:40.144866Z","caller":"traceutil/trace.go:172","msg":"trace[397740631] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:370; }","duration":"122.41407ms","start":"2025-10-18T12:24:40.022443Z","end":"2025-10-18T12:24:40.144857Z","steps":["trace[397740631] 'agreement among raft nodes before linearized reading' (duration: 122.2939ms)"],"step_count":1}
{"level":"info","ts":"2025-10-18T12:25:00.231361Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2025-10-18T12:25:00.231451Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"default-k8s-diff-port-948988","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.154:2380"],"advertise-client-urls":["https://192.168.61.154:2379"]}
{"level":"error","ts":"2025-10-18T12:25:00.231556Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-10-18T12:25:07.245321Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-10-18T12:25:07.249128Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-10-18T12:25:07.249192Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3cb84593c3b1392d","current-leader-member-id":"3cb84593c3b1392d"}
{"level":"info","ts":"2025-10-18T12:25:07.249489Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
{"level":"info","ts":"2025-10-18T12:25:07.249534Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
{"level":"warn","ts":"2025-10-18T12:25:07.252745Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2025-10-18T12:25:07.252848Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"error","ts":"2025-10-18T12:25:07.252863Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"warn","ts":"2025-10-18T12:25:07.253498Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.154:2379: use of closed network connection"}
{"level":"warn","ts":"2025-10-18T12:25:07.253553Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.154:2379: use of closed network connection"}
{"level":"error","ts":"2025-10-18T12:25:07.253569Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.61.154:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-10-18T12:25:07.256384Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.61.154:2380"}
{"level":"error","ts":"2025-10-18T12:25:07.256475Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.61.154:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-10-18T12:25:07.256703Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.61.154:2380"}
{"level":"info","ts":"2025-10-18T12:25:07.256718Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"default-k8s-diff-port-948988","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.154:2380"],"advertise-client-urls":["https://192.168.61.154:2379"]}
==> kernel <==
12:26:51 up 1 min, 0 users, load average: 2.58, 0.76, 0.27
Linux default-k8s-diff-port-948988 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [482f645840fb] <==
E1018 12:25:43.880029 1 controller.go:102] "Unhandled Error" err=<
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I1018 12:25:43.880149 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I1018 12:25:43.881283 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I1018 12:25:44.600365 1 controller.go:667] quota admission added evaluator for: deployments.apps
I1018 12:25:44.665650 1 controller.go:667] quota admission added evaluator for: daemonsets.apps
I1018 12:25:44.707914 1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1018 12:25:44.717555 1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1018 12:25:46.458993 1 controller.go:667] quota admission added evaluator for: endpoints
I1018 12:25:46.554520 1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1018 12:25:46.699128 1 controller.go:667] quota admission added evaluator for: replicasets.apps
I1018 12:25:47.509491 1 controller.go:667] quota admission added evaluator for: namespaces
I1018 12:25:47.794476 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.100.186"}
I1018 12:25:47.820795 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.78.66"}
W1018 12:26:47.665841 1 handler_proxy.go:99] no RequestInfo found in the context
E1018 12:26:47.666026 1 controller.go:102] "Unhandled Error" err=<
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I1018 12:26:47.666042 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W1018 12:26:47.681677 1 handler_proxy.go:99] no RequestInfo found in the context
E1018 12:26:47.681971 1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
I1018 12:26:47.682341 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
==> kube-apiserver [5dfc625534d2] <==
W1018 12:25:09.464721 1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:09.517443 1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:09.620363 1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:09.693884 1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:09.721047 1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:09.726611 1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:09.759371 1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:09.795061 1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:09.819207 1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:09.841071 1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:09.864445 1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:09.896679 1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:09.930411 1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:09.971423 1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:10.017882 1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:10.045148 1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:10.067233 1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:10.127112 1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:10.133877 1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:10.157359 1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:10.165740 1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:10.173381 1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:10.191257 1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:10.254823 1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:10.300085 1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
==> kube-controller-manager [07dc691cd2b4] <==
I1018 12:24:37.212816 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
I1018 12:24:37.213552 1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
I1018 12:24:37.214863 1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
I1018 12:24:37.215195 1 shared_informer.go:356] "Caches are synced" controller="service account"
I1018 12:24:37.215506 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
I1018 12:24:37.215712 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
I1018 12:24:37.215992 1 shared_informer.go:356] "Caches are synced" controller="PV protection"
I1018 12:24:37.216210 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
I1018 12:24:37.216297 1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
I1018 12:24:37.220772 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1018 12:24:37.221277 1 shared_informer.go:356] "Caches are synced" controller="disruption"
I1018 12:24:37.229865 1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-948988" podCIDRs=["10.244.0.0/24"]
I1018 12:24:37.230483 1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
I1018 12:24:37.235336 1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
I1018 12:24:37.236208 1 shared_informer.go:356] "Caches are synced" controller="deployment"
I1018 12:24:37.243773 1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
I1018 12:24:37.261496 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
I1018 12:24:37.262756 1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
I1018 12:24:37.263515 1 shared_informer.go:356] "Caches are synced" controller="stateful set"
I1018 12:24:37.263680 1 shared_informer.go:356] "Caches are synced" controller="attach detach"
I1018 12:24:37.332884 1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
I1018 12:24:37.408817 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1018 12:24:37.409172 1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
I1018 12:24:37.409412 1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
I1018 12:24:37.433850 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
==> kube-controller-manager [cbcb65b91df5] <==
I1018 12:25:46.326514 1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
I1018 12:25:46.330568 1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
I1018 12:25:46.338200 1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
I1018 12:25:46.354827 1 shared_informer.go:356] "Caches are synced" controller="disruption"
I1018 12:25:46.354933 1 shared_informer.go:356] "Caches are synced" controller="attach detach"
I1018 12:25:46.358135 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1018 12:25:46.358166 1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
I1018 12:25:46.358174 1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
I1018 12:25:46.361699 1 shared_informer.go:356] "Caches are synced" controller="taint"
I1018 12:25:46.362331 1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
I1018 12:25:46.362518 1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-948988"
I1018 12:25:46.362582 1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
I1018 12:25:46.362715 1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
I1018 12:25:46.364998 1 shared_informer.go:356] "Caches are synced" controller="GC"
I1018 12:25:46.397419 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
E1018 12:25:47.622164 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1018 12:25:47.637442 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1018 12:25:47.640602 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1018 12:25:47.654283 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1018 12:25:47.654837 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1018 12:25:47.670862 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1018 12:25:47.673502 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
I1018 12:25:56.364778 1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
E1018 12:26:47.748771 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I1018 12:26:47.764048 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
==> kube-proxy [72d0dd1b3e6d] <==
I1018 12:24:41.564008 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1018 12:24:41.664708 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1018 12:24:41.664884 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.61.154"]
E1018 12:24:41.665067 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1018 12:24:41.766806 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1018 12:24:41.766902 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1018 12:24:41.767037 1 server_linux.go:132] "Using iptables Proxier"
I1018 12:24:41.808707 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1018 12:24:41.810126 1 server.go:527] "Version info" version="v1.34.1"
I1018 12:24:41.810170 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1018 12:24:41.819567 1 config.go:200] "Starting service config controller"
I1018 12:24:41.819614 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1018 12:24:41.819656 1 config.go:106] "Starting endpoint slice config controller"
I1018 12:24:41.819662 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1018 12:24:41.819679 1 config.go:403] "Starting serviceCIDR config controller"
I1018 12:24:41.819685 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1018 12:24:41.834904 1 config.go:309] "Starting node config controller"
I1018 12:24:41.835028 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1018 12:24:41.835056 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1018 12:24:41.927064 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1018 12:24:41.927258 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1018 12:24:41.927530 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-proxy [e74b601e6b20] <==
I1018 12:25:45.811654 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1018 12:25:45.913019 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1018 12:25:45.913130 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.61.154"]
E1018 12:25:45.913538 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1018 12:25:46.627631 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1018 12:25:46.627729 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1018 12:25:46.627769 1 server_linux.go:132] "Using iptables Proxier"
I1018 12:25:46.729383 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1018 12:25:46.742257 1 server.go:527] "Version info" version="v1.34.1"
I1018 12:25:46.742299 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1018 12:25:46.769189 1 config.go:309] "Starting node config controller"
I1018 12:25:46.769207 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1018 12:25:46.769215 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1018 12:25:46.772876 1 config.go:403] "Starting serviceCIDR config controller"
I1018 12:25:46.772985 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1018 12:25:46.773282 1 config.go:200] "Starting service config controller"
I1018 12:25:46.773361 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1018 12:25:46.773393 1 config.go:106] "Starting endpoint slice config controller"
I1018 12:25:46.773398 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1018 12:25:46.874997 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1018 12:25:46.875472 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1018 12:25:46.875491 1 shared_informer.go:356] "Caches are synced" controller="service config"
==> kube-scheduler [aa45133c5292] <==
I1018 12:25:40.892121 1 serving.go:386] Generated self-signed cert in-memory
W1018 12:25:42.779818 1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1018 12:25:42.779913 1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1018 12:25:42.779937 1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
W1018 12:25:42.779952 1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1018 12:25:42.837530 1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
I1018 12:25:42.837672 1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1018 12:25:42.850332 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1018 12:25:42.850953 1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
I1018 12:25:42.851127 1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1018 12:25:42.851921 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I1018 12:25:42.953076 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kube-scheduler [ac171ed99aa7] <==
E1018 12:24:29.521551 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1018 12:24:29.521602 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1018 12:24:29.521714 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1018 12:24:29.521771 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1018 12:24:29.521820 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1018 12:24:30.388364 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1018 12:24:30.423548 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1018 12:24:30.458398 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1018 12:24:30.471430 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1018 12:24:30.482651 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1018 12:24:30.502659 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1018 12:24:30.602254 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1018 12:24:30.613712 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1018 12:24:30.623631 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1018 12:24:30.752533 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1018 12:24:30.774425 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1018 12:24:30.882034 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1018 12:24:30.922203 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
I1018 12:24:32.510730 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1018 12:25:00.227081 1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
I1018 12:25:00.227204 1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1018 12:25:00.227889 1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
I1018 12:25:00.228116 1 server.go:263] "[graceful-termination] secure server has stopped listening"
I1018 12:25:00.228207 1 server.go:265] "[graceful-termination] secure server is exiting"
E1018 12:25:00.228229 1 run.go:72] "command failed" err="finished without leader elect"
==> kubelet <==
Oct 18 12:26:48 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:48.808146 4182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-default-k8s-diff-port-948988"
Oct 18 12:26:48 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:48.818965 4182 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-948988\" already exists" pod="kube-system/etcd-default-k8s-diff-port-948988"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.224325 4182 apiserver.go:52] "Watching apiserver"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.299725 4182 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.334900 4182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a2da4bd7-fb36-44bc-9e08-4ccbe934a19a-tmp\") pod \"storage-provisioner\" (UID: \"a2da4bd7-fb36-44bc-9e08-4ccbe934a19a\") " pod="kube-system/storage-provisioner"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.335035 4182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6dd74255-86cf-46b6-a050-2d1ec343837e-xtables-lock\") pod \"kube-proxy-hmf6q\" (UID: \"6dd74255-86cf-46b6-a050-2d1ec343837e\") " pod="kube-system/kube-proxy-hmf6q"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.335064 4182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6dd74255-86cf-46b6-a050-2d1ec343837e-lib-modules\") pod \"kube-proxy-hmf6q\" (UID: \"6dd74255-86cf-46b6-a050-2d1ec343837e\") " pod="kube-system/kube-proxy-hmf6q"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.559117 4182 scope.go:117] "RemoveContainer" containerID="28ffefdfcaefaa0dcc5a6077bf470cdb9475d6e21b7a7d96be86de74a8777734"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:49.584832 4182 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:49.584904 4182 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:49.585150 4182 kuberuntime_manager.go:1449] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-746fcd58dc-7788d_kube-system(482bf974-0dde-4e8e-abde-4c6a50f08c8d): ErrImagePull: Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" logger="UnhandledError"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:49.585190 4182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-7788d" podUID="482bf974-0dde-4e8e-abde-4c6a50f08c8d"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.834067 4182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-default-k8s-diff-port-948988"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.834883 4182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-default-k8s-diff-port-948988"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.835048 4182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-default-k8s-diff-port-948988"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.835180 4182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-default-k8s-diff-port-948988"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.835659 4182 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26d8ce1671b6d868f5c427741052d8ba6bc2581e713fc06671798cbeaa0e2467"
Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.457040 4182 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-default-k8s-diff-port-948988\" already exists" pod="kube-system/kube-scheduler-default-k8s-diff-port-948988"
Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.473284 4182 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-default-k8s-diff-port-948988\" already exists" pod="kube-system/kube-apiserver-default-k8s-diff-port-948988"
Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.474210 4182 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-default-k8s-diff-port-948988\" already exists" pod="kube-system/kube-controller-manager-default-k8s-diff-port-948988"
Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.475377 4182 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-948988\" already exists" pod="kube-system/etcd-default-k8s-diff-port-948988"
Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.587059 4182 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.587186 4182 kuberuntime_image.go:43] "Failed to pull image" err="Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.587563 4182 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-6ffb444bf9-gxs6s_kubernetes-dashboard(d9f0a621-1105-44d9-97ff-6ab18a09af31): ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" logger="UnhandledError"
Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.587744 4182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gxs6s" podUID="d9f0a621-1105-44d9-97ff-6ab18a09af31"
==> kubernetes-dashboard [3a2c1a468e77] <==
2025/10/18 12:26:02 Starting overwatch
2025/10/18 12:26:02 Using namespace: kubernetes-dashboard
2025/10/18 12:26:02 Using in-cluster config to connect to apiserver
2025/10/18 12:26:02 Using secret token for csrf signing
2025/10/18 12:26:02 Initializing csrf token from kubernetes-dashboard-csrf secret
2025/10/18 12:26:02 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2025/10/18 12:26:02 Successful initial request to the apiserver, version: v1.34.1
2025/10/18 12:26:02 Generating JWE encryption key
2025/10/18 12:26:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2025/10/18 12:26:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2025/10/18 12:26:02 Initializing JWE encryption key from synchronized object
2025/10/18 12:26:02 Creating in-cluster Sidecar client
2025/10/18 12:26:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/10/18 12:26:02 Serving insecurely on HTTP port: 9090
2025/10/18 12:26:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
==> storage-provisioner [28ffefdfcaef] <==
I1018 12:25:44.727571 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F1018 12:26:14.742942 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-948988 -n default-k8s-diff-port-948988
helpers_test.go:269: (dbg) Run: kubectl --context default-k8s-diff-port-948988 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-7788d dashboard-metrics-scraper-6ffb444bf9-gxs6s
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context default-k8s-diff-port-948988 describe pod metrics-server-746fcd58dc-7788d dashboard-metrics-scraper-6ffb444bf9-gxs6s
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-948988 describe pod metrics-server-746fcd58dc-7788d dashboard-metrics-scraper-6ffb444bf9-gxs6s: exit status 1 (88.809101ms)
** stderr **
Error from server (NotFound): pods "metrics-server-746fcd58dc-7788d" not found
Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-gxs6s" not found
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-948988 describe pod metrics-server-746fcd58dc-7788d dashboard-metrics-scraper-6ffb444bf9-gxs6s: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-948988 -n default-k8s-diff-port-948988
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p default-k8s-diff-port-948988 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-948988 logs -n 25: (1.453478465s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs:
-- stdout --
==> Audit <==
┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ stop │ -p default-k8s-diff-port-948988 --alsologtostderr -v=3 │ default-k8s-diff-port-948988 │ jenkins │ v1.37.0 │ 18 Oct 25 12:24 UTC │ 18 Oct 25 12:25 UTC │
│ addons │ enable metrics-server -p embed-certs-270191 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ embed-certs-270191 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
│ addons │ enable metrics-server -p newest-cni-661287 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ newest-cni-661287 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
│ stop │ -p embed-certs-270191 --alsologtostderr -v=3 │ embed-certs-270191 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
│ stop │ -p newest-cni-661287 --alsologtostderr -v=3 │ newest-cni-661287 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
│ addons │ enable dashboard -p default-k8s-diff-port-948988 --images=MetricsScraper=registry.k8s.io/echoserver:1.4 │ default-k8s-diff-port-948988 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
│ start │ -p default-k8s-diff-port-948988 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2 --auto-update-drivers=false --kubernetes-version=v1.34.1 │ default-k8s-diff-port-948988 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
│ start │ -p embed-certs-270191 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2 --auto-update-drivers=false --kubernetes-version=v1.34.1 │ embed-certs-270191 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:26 UTC │
│ addons │ enable dashboard -p newest-cni-661287 --images=MetricsScraper=registry.k8s.io/echoserver:1.4 │ newest-cni-661287 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
│ start │ -p newest-cni-661287 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2 --auto-update-drivers=false --kubernetes-version=v1.34.1 │ newest-cni-661287 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ │
│ image │ no-preload-839073 image list --format=json │ no-preload-839073 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
│ pause │ -p no-preload-839073 --alsologtostderr -v=1 │ no-preload-839073 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
│ unpause │ -p no-preload-839073 --alsologtostderr -v=1 │ no-preload-839073 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
│ delete │ -p no-preload-839073 │ no-preload-839073 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
│ delete │ -p no-preload-839073 │ no-preload-839073 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ 18 Oct 25 12:25 UTC │
│ start │ -p auto-720125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 --auto-update-drivers=false │ auto-720125 │ jenkins │ v1.37.0 │ 18 Oct 25 12:25 UTC │ │
│ image │ default-k8s-diff-port-948988 image list --format=json │ default-k8s-diff-port-948988 │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
│ pause │ -p default-k8s-diff-port-948988 --alsologtostderr -v=1 │ default-k8s-diff-port-948988 │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
│ image │ embed-certs-270191 image list --format=json │ embed-certs-270191 │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
│ pause │ -p embed-certs-270191 --alsologtostderr -v=1 │ embed-certs-270191 │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
│ unpause │ -p embed-certs-270191 --alsologtostderr -v=1 │ embed-certs-270191 │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
│ delete │ -p embed-certs-270191 │ embed-certs-270191 │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
│ delete │ -p embed-certs-270191 │ embed-certs-270191 │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
│ start │ -p kindnet-720125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 --auto-update-drivers=false │ kindnet-720125 │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ │
│ unpause │ -p default-k8s-diff-port-948988 --alsologtostderr -v=1 │ default-k8s-diff-port-948988 │ jenkins │ v1.37.0 │ 18 Oct 25 12:26 UTC │ 18 Oct 25 12:26 UTC │
└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────
────────┘
==> Last Start <==
Log file created at: 2025/10/18 12:26:39
Running on machine: ubuntu-20-agent-7
Binary: Built with gc go1.24.6 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1018 12:26:39.638929 54024 out.go:360] Setting OutFile to fd 1 ...
I1018 12:26:39.639215 54024 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:26:39.639226 54024 out.go:374] Setting ErrFile to fd 2...
I1018 12:26:39.639232 54024 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:26:39.639463 54024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-6010/.minikube/bin
I1018 12:26:39.639986 54024 out.go:368] Setting JSON to false
I1018 12:26:39.640948 54024 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4147,"bootTime":1760786253,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1018 12:26:39.641036 54024 start.go:141] virtualization: kvm guest
I1018 12:26:39.642912 54024 out.go:179] * [kindnet-720125] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1018 12:26:39.644319 54024 notify.go:220] Checking for updates...
I1018 12:26:39.644359 54024 out.go:179] - MINIKUBE_LOCATION=21647
I1018 12:26:39.645575 54024 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1018 12:26:39.646808 54024 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21647-6010/kubeconfig
I1018 12:26:39.647991 54024 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-6010/.minikube
I1018 12:26:39.649134 54024 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1018 12:26:39.650480 54024 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1018 12:26:39.652192 54024 config.go:182] Loaded profile config "auto-720125": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 12:26:39.652340 54024 config.go:182] Loaded profile config "default-k8s-diff-port-948988": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 12:26:39.652479 54024 config.go:182] Loaded profile config "newest-cni-661287": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1018 12:26:39.652597 54024 driver.go:421] Setting default libvirt URI to qemu:///system
I1018 12:26:39.691700 54024 out.go:179] * Using the kvm2 driver based on user configuration
I1018 12:26:39.692905 54024 start.go:305] selected driver: kvm2
I1018 12:26:39.692920 54024 start.go:925] validating driver "kvm2" against <nil>
I1018 12:26:39.692931 54024 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1018 12:26:39.693690 54024 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1018 12:26:39.693776 54024 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21647-6010/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1018 12:26:39.709001 54024 install.go:163] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.37.0
I1018 12:26:39.709030 54024 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21647-6010/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1018 12:26:39.724060 54024 install.go:163] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.37.0
I1018 12:26:39.724111 54024 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1018 12:26:39.724397 54024 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1018 12:26:39.724424 54024 cni.go:84] Creating CNI manager for "kindnet"
I1018 12:26:39.724429 54024 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I1018 12:26:39.724476 54024 start.go:349] cluster config:
{Name:kindnet-720125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-720125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1018 12:26:39.724562 54024 iso.go:125] acquiring lock: {Name:mk7b9977f44c882a06d0a932f05bd4c8e4cea871 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1018 12:26:39.726635 54024 out.go:179] * Starting "kindnet-720125" primary control-plane node in "kindnet-720125" cluster
I1018 12:26:39.727995 54024 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
I1018 12:26:39.728049 54024 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-6010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
I1018 12:26:39.728060 54024 cache.go:58] Caching tarball of preloaded images
I1018 12:26:39.728181 54024 preload.go:233] Found /home/jenkins/minikube-integration/21647-6010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1018 12:26:39.728194 54024 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
I1018 12:26:39.728350 54024 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/kindnet-720125/config.json ...
I1018 12:26:39.728376 54024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/kindnet-720125/config.json: {Name:mkf1b74ab9b12d679411e2c6e2e2149cae3e0078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1018 12:26:39.728580 54024 start.go:360] acquireMachinesLock for kindnet-720125: {Name:mk547bbf69b426adc37163c0f135f5803e3e7ae0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1018 12:26:39.728617 54024 start.go:364] duration metric: took 19.75µs to acquireMachinesLock for "kindnet-720125"
I1018 12:26:39.728642 54024 start.go:93] Provisioning new machine with config: &{Name:kindnet-720125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-720125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I1018 12:26:39.728718 54024 start.go:125] createHost starting for "" (driver="kvm2")
I1018 12:26:35.461906 52813 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.481663654s)
I1018 12:26:35.461943 52813 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1018 12:26:35.505542 52813 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I1018 12:26:35.519942 52813 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2632 bytes)
I1018 12:26:35.544751 52813 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
I1018 12:26:35.561575 52813 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1018 12:26:35.715918 52813 ssh_runner.go:195] Run: sudo systemctl restart docker
I1018 12:26:38.056356 52813 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.34040401s)
I1018 12:26:38.056485 52813 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1018 12:26:38.085796 52813 docker.go:691] Got preloaded images: -- stdout --
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/pause:3.10.1
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1018 12:26:38.085832 52813 cache_images.go:85] Images are preloaded, skipping loading
I1018 12:26:38.085846 52813 kubeadm.go:934] updating node { 192.168.72.13 8443 v1.34.1 docker true true} ...
I1018 12:26:38.085985 52813 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-720125 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.13
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:auto-720125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1018 12:26:38.086071 52813 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1018 12:26:38.149565 52813 cni.go:84] Creating CNI manager for ""
I1018 12:26:38.149605 52813 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1018 12:26:38.149622 52813 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1018 12:26:38.149639 52813 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.13 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-720125 NodeName:auto-720125 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1018 12:26:38.149863 52813 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.72.13
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "auto-720125"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.72.13"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.72.13"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1018 12:26:38.149950 52813 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1018 12:26:38.167666 52813 binaries.go:44] Found k8s binaries, skipping transfer
I1018 12:26:38.167750 52813 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1018 12:26:38.182469 52813 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
I1018 12:26:38.210498 52813 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1018 12:26:38.235674 52813 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
I1018 12:26:38.272656 52813 ssh_runner.go:195] Run: grep 192.168.72.13 control-plane.minikube.internal$ /etc/hosts
I1018 12:26:38.278428 52813 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.13 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1018 12:26:38.295186 52813 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1018 12:26:38.477493 52813 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1018 12:26:38.516693 52813 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125 for IP: 192.168.72.13
I1018 12:26:38.516721 52813 certs.go:195] generating shared ca certs ...
I1018 12:26:38.516742 52813 certs.go:227] acquiring lock for ca certs: {Name:mk4e9b668d7f4a08d373c26a5a5beadd4b363eae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1018 12:26:38.516897 52813 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-6010/.minikube/ca.key
I1018 12:26:38.516956 52813 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-6010/.minikube/proxy-client-ca.key
I1018 12:26:38.516971 52813 certs.go:257] generating profile certs ...
I1018 12:26:38.517059 52813 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/client.key
I1018 12:26:38.517080 52813 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/client.crt with IP's: []
I1018 12:26:38.795006 52813 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/client.crt ...
I1018 12:26:38.795041 52813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/client.crt: {Name:mke50b87cc8afab1bea24439b2b8f8b4fce785c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1018 12:26:38.795221 52813 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/client.key ...
I1018 12:26:38.795236 52813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/client.key: {Name:mk73a13799ed8cba8c6cf5586dd849d9aa3376fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1018 12:26:38.795369 52813 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.key.5f192319
I1018 12:26:38.795387 52813 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.crt.5f192319 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.13]
I1018 12:26:39.015985 52813 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.crt.5f192319 ...
I1018 12:26:39.016017 52813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.crt.5f192319: {Name:mk48dc89d0bc936861c01af4faa11afa9b99fc7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1018 12:26:39.016173 52813 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.key.5f192319 ...
I1018 12:26:39.016187 52813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.key.5f192319: {Name:mk06903a8537a759ab5885d9e1ce94cdbffcbf0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1018 12:26:39.016265 52813 certs.go:382] copying /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.crt.5f192319 -> /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.crt
I1018 12:26:39.016371 52813 certs.go:386] copying /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.key.5f192319 -> /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.key
I1018 12:26:39.016432 52813 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.key
I1018 12:26:39.016447 52813 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.crt with IP's: []
I1018 12:26:39.194387 52813 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.crt ...
I1018 12:26:39.194419 52813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.crt: {Name:mk9243a20439ab9292d13a3cab98b56367a296c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1018 12:26:39.194631 52813 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.key ...
I1018 12:26:39.194649 52813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.key: {Name:mk548ef445e4b58857c8694e04881f9da155116e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1018 12:26:39.194883 52813 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6010/.minikube/certs/9909.pem (1338 bytes)
W1018 12:26:39.194965 52813 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-6010/.minikube/certs/9909_empty.pem, impossibly tiny 0 bytes
I1018 12:26:39.194982 52813 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6010/.minikube/certs/ca-key.pem (1679 bytes)
I1018 12:26:39.195016 52813 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6010/.minikube/certs/ca.pem (1082 bytes)
I1018 12:26:39.195051 52813 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6010/.minikube/certs/cert.pem (1123 bytes)
I1018 12:26:39.195083 52813 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6010/.minikube/certs/key.pem (1679 bytes)
I1018 12:26:39.195138 52813 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-6010/.minikube/files/etc/ssl/certs/99092.pem (1708 bytes)
I1018 12:26:39.195753 52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1018 12:26:39.237771 52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1018 12:26:39.273475 52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1018 12:26:39.304754 52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1018 12:26:39.340590 52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
I1018 12:26:39.375528 52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1018 12:26:39.408845 52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1018 12:26:39.442920 52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/profiles/auto-720125/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1018 12:26:39.481085 52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1018 12:26:39.516586 52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/certs/9909.pem --> /usr/share/ca-certificates/9909.pem (1338 bytes)
I1018 12:26:39.554538 52813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-6010/.minikube/files/etc/ssl/certs/99092.pem --> /usr/share/ca-certificates/99092.pem (1708 bytes)
I1018 12:26:39.594522 52813 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1018 12:26:39.619184 52813 ssh_runner.go:195] Run: openssl version
I1018 12:26:39.626356 52813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1018 12:26:39.640801 52813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1018 12:26:39.646535 52813 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 11:29 /usr/share/ca-certificates/minikubeCA.pem
I1018 12:26:39.646588 52813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1018 12:26:39.654893 52813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1018 12:26:39.669539 52813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9909.pem && ln -fs /usr/share/ca-certificates/9909.pem /etc/ssl/certs/9909.pem"
I1018 12:26:39.684162 52813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9909.pem
I1018 12:26:39.689731 52813 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 11:35 /usr/share/ca-certificates/9909.pem
I1018 12:26:39.689790 52813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9909.pem
I1018 12:26:39.697600 52813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9909.pem /etc/ssl/certs/51391683.0"
I1018 12:26:39.714166 52813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99092.pem && ln -fs /usr/share/ca-certificates/99092.pem /etc/ssl/certs/99092.pem"
I1018 12:26:39.729837 52813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99092.pem
I1018 12:26:39.735419 52813 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 11:35 /usr/share/ca-certificates/99092.pem
I1018 12:26:39.735488 52813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99092.pem
I1018 12:26:39.743203 52813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99092.pem /etc/ssl/certs/3ec20f2e.0"
I1018 12:26:39.758932 52813 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1018 12:26:39.765101 52813 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1018 12:26:39.765169 52813 kubeadm.go:400] StartCluster: {Name:auto-720125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-720125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.13 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1018 12:26:39.765332 52813 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1018 12:26:39.785247 52813 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1018 12:26:39.798374 52813 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1018 12:26:39.810946 52813 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1018 12:26:39.825029 52813 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1018 12:26:39.825056 52813 kubeadm.go:157] found existing configuration files:
I1018 12:26:39.825096 52813 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1018 12:26:39.836919 52813 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1018 12:26:39.836997 52813 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1018 12:26:39.849872 52813 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1018 12:26:39.861692 52813 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1018 12:26:39.861767 52813 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1018 12:26:39.877485 52813 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1018 12:26:39.890697 52813 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1018 12:26:39.890777 52813 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1018 12:26:39.906568 52813 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1018 12:26:39.920626 52813 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1018 12:26:39.920740 52813 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1018 12:26:39.936398 52813 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1018 12:26:39.998219 52813 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
I1018 12:26:39.998340 52813 kubeadm.go:318] [preflight] Running pre-flight checks
I1018 12:26:40.111469 52813 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
I1018 12:26:40.111618 52813 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1018 12:26:40.111795 52813 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1018 12:26:40.128525 52813 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1018 12:26:40.130607 52813 out.go:252] - Generating certificates and keys ...
I1018 12:26:40.130710 52813 kubeadm.go:318] [certs] Using existing ca certificate authority
I1018 12:26:40.130803 52813 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
I1018 12:26:40.350726 52813 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
I1018 12:26:40.455768 52813 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
I1018 12:26:40.598243 52813 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
I1018 12:26:41.011504 52813 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
I1018 12:26:41.091757 52813 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
I1018 12:26:41.092141 52813 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-720125 localhost] and IPs [192.168.72.13 127.0.0.1 ::1]
I1018 12:26:41.376370 52813 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
I1018 12:26:41.376756 52813 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-720125 localhost] and IPs [192.168.72.13 127.0.0.1 ::1]
I1018 12:26:41.679155 52813 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
I1018 12:26:41.832796 52813 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
I1018 12:26:42.091476 52813 kubeadm.go:318] [certs] Generating "sa" key and public key
I1018 12:26:42.091617 52813 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1018 12:26:42.555206 52813 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
I1018 12:26:42.822944 52813 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1018 12:26:43.272107 52813 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1018 12:26:43.527688 52813 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1018 12:26:43.769537 52813 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1018 12:26:43.770332 52813 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1018 12:26:43.773363 52813 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1018 12:26:39.521607 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": read tcp 192.168.39.1:35984->192.168.39.140:8443: read: connection reset by peer
I1018 12:26:39.521660 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:39.522161 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:39.940469 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:39.941178 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:40.440329 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:40.441012 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:40.940495 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:40.941051 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:41.440547 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:41.441243 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:41.939828 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:41.940532 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:42.440175 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:42.440815 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:42.940483 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:42.941097 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:43.439852 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:43.440639 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:43.940431 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:43.941130 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:39.730484 54024 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
I1018 12:26:39.730631 54024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1018 12:26:39.730675 54024 main.go:141] libmachine: Launching plugin server for driver kvm2
I1018 12:26:39.746220 54024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38367
I1018 12:26:39.746691 54024 main.go:141] libmachine: () Calling .GetVersion
I1018 12:26:39.747252 54024 main.go:141] libmachine: Using API Version 1
I1018 12:26:39.747278 54024 main.go:141] libmachine: () Calling .SetConfigRaw
I1018 12:26:39.747712 54024 main.go:141] libmachine: () Calling .GetMachineName
I1018 12:26:39.747910 54024 main.go:141] libmachine: (kindnet-720125) Calling .GetMachineName
I1018 12:26:39.748157 54024 main.go:141] libmachine: (kindnet-720125) Calling .DriverName
I1018 12:26:39.748327 54024 start.go:159] libmachine.API.Create for "kindnet-720125" (driver="kvm2")
I1018 12:26:39.748358 54024 client.go:168] LocalClient.Create starting
I1018 12:26:39.748391 54024 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-6010/.minikube/certs/ca.pem
I1018 12:26:39.748425 54024 main.go:141] libmachine: Decoding PEM data...
I1018 12:26:39.748441 54024 main.go:141] libmachine: Parsing certificate...
I1018 12:26:39.748493 54024 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-6010/.minikube/certs/cert.pem
I1018 12:26:39.748514 54024 main.go:141] libmachine: Decoding PEM data...
I1018 12:26:39.748527 54024 main.go:141] libmachine: Parsing certificate...
I1018 12:26:39.748542 54024 main.go:141] libmachine: Running pre-create checks...
I1018 12:26:39.748555 54024 main.go:141] libmachine: (kindnet-720125) Calling .PreCreateCheck
I1018 12:26:39.748883 54024 main.go:141] libmachine: (kindnet-720125) Calling .GetConfigRaw
I1018 12:26:39.749274 54024 main.go:141] libmachine: Creating machine...
I1018 12:26:39.749304 54024 main.go:141] libmachine: (kindnet-720125) Calling .Create
I1018 12:26:39.749445 54024 main.go:141] libmachine: (kindnet-720125) creating domain...
I1018 12:26:39.749466 54024 main.go:141] libmachine: (kindnet-720125) creating network...
I1018 12:26:39.750975 54024 main.go:141] libmachine: (kindnet-720125) DBG | found existing default network
I1018 12:26:39.751279 54024 main.go:141] libmachine: (kindnet-720125) DBG | <network connections='3'>
I1018 12:26:39.751320 54024 main.go:141] libmachine: (kindnet-720125) DBG | <name>default</name>
I1018 12:26:39.751345 54024 main.go:141] libmachine: (kindnet-720125) DBG | <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
I1018 12:26:39.751362 54024 main.go:141] libmachine: (kindnet-720125) DBG | <forward mode='nat'>
I1018 12:26:39.751384 54024 main.go:141] libmachine: (kindnet-720125) DBG | <nat>
I1018 12:26:39.751398 54024 main.go:141] libmachine: (kindnet-720125) DBG | <port start='1024' end='65535'/>
I1018 12:26:39.751406 54024 main.go:141] libmachine: (kindnet-720125) DBG | </nat>
I1018 12:26:39.751412 54024 main.go:141] libmachine: (kindnet-720125) DBG | </forward>
I1018 12:26:39.751448 54024 main.go:141] libmachine: (kindnet-720125) DBG | <bridge name='virbr0' stp='on' delay='0'/>
I1018 12:26:39.751488 54024 main.go:141] libmachine: (kindnet-720125) DBG | <mac address='52:54:00:10:a2:1d'/>
I1018 12:26:39.751506 54024 main.go:141] libmachine: (kindnet-720125) DBG | <ip address='192.168.122.1' netmask='255.255.255.0'>
I1018 12:26:39.751517 54024 main.go:141] libmachine: (kindnet-720125) DBG | <dhcp>
I1018 12:26:39.751527 54024 main.go:141] libmachine: (kindnet-720125) DBG | <range start='192.168.122.2' end='192.168.122.254'/>
I1018 12:26:39.751535 54024 main.go:141] libmachine: (kindnet-720125) DBG | </dhcp>
I1018 12:26:39.751543 54024 main.go:141] libmachine: (kindnet-720125) DBG | </ip>
I1018 12:26:39.751557 54024 main.go:141] libmachine: (kindnet-720125) DBG | </network>
I1018 12:26:39.751576 54024 main.go:141] libmachine: (kindnet-720125) DBG |
I1018 12:26:39.752366 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:39.752168 54053 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:82:24:f4} reservation:<nil>}
I1018 12:26:39.753108 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:39.753033 54053 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000260370}
I1018 12:26:39.753127 54024 main.go:141] libmachine: (kindnet-720125) DBG | defining private network:
I1018 12:26:39.753137 54024 main.go:141] libmachine: (kindnet-720125) DBG |
I1018 12:26:39.753143 54024 main.go:141] libmachine: (kindnet-720125) DBG | <network>
I1018 12:26:39.753152 54024 main.go:141] libmachine: (kindnet-720125) DBG | <name>mk-kindnet-720125</name>
I1018 12:26:39.753159 54024 main.go:141] libmachine: (kindnet-720125) DBG | <dns enable='no'/>
I1018 12:26:39.753168 54024 main.go:141] libmachine: (kindnet-720125) DBG | <ip address='192.168.50.1' netmask='255.255.255.0'>
I1018 12:26:39.753175 54024 main.go:141] libmachine: (kindnet-720125) DBG | <dhcp>
I1018 12:26:39.753184 54024 main.go:141] libmachine: (kindnet-720125) DBG | <range start='192.168.50.2' end='192.168.50.253'/>
I1018 12:26:39.753190 54024 main.go:141] libmachine: (kindnet-720125) DBG | </dhcp>
I1018 12:26:39.753213 54024 main.go:141] libmachine: (kindnet-720125) DBG | </ip>
I1018 12:26:39.753246 54024 main.go:141] libmachine: (kindnet-720125) DBG | </network>
I1018 12:26:39.753262 54024 main.go:141] libmachine: (kindnet-720125) DBG |
I1018 12:26:39.759190 54024 main.go:141] libmachine: (kindnet-720125) DBG | creating private network mk-kindnet-720125 192.168.50.0/24...
I1018 12:26:39.842530 54024 main.go:141] libmachine: (kindnet-720125) DBG | private network mk-kindnet-720125 192.168.50.0/24 created
I1018 12:26:39.842829 54024 main.go:141] libmachine: (kindnet-720125) DBG | <network>
I1018 12:26:39.842844 54024 main.go:141] libmachine: (kindnet-720125) DBG | <name>mk-kindnet-720125</name>
I1018 12:26:39.842855 54024 main.go:141] libmachine: (kindnet-720125) DBG | <uuid>57af09bd-510d-4d07-b5da-0d64b9c8c775</uuid>
I1018 12:26:39.842865 54024 main.go:141] libmachine: (kindnet-720125) setting up store path in /home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125 ...
I1018 12:26:39.842873 54024 main.go:141] libmachine: (kindnet-720125) DBG | <bridge name='virbr2' stp='on' delay='0'/>
I1018 12:26:39.842883 54024 main.go:141] libmachine: (kindnet-720125) DBG | <mac address='52:54:00:4a:b8:f3'/>
I1018 12:26:39.842890 54024 main.go:141] libmachine: (kindnet-720125) DBG | <dns enable='no'/>
I1018 12:26:39.842900 54024 main.go:141] libmachine: (kindnet-720125) DBG | <ip address='192.168.50.1' netmask='255.255.255.0'>
I1018 12:26:39.842912 54024 main.go:141] libmachine: (kindnet-720125) DBG | <dhcp>
I1018 12:26:39.842920 54024 main.go:141] libmachine: (kindnet-720125) DBG | <range start='192.168.50.2' end='192.168.50.253'/>
I1018 12:26:39.842926 54024 main.go:141] libmachine: (kindnet-720125) DBG | </dhcp>
I1018 12:26:39.842937 54024 main.go:141] libmachine: (kindnet-720125) building disk image from file:///home/jenkins/minikube-integration/21647-6010/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
I1018 12:26:39.842947 54024 main.go:141] libmachine: (kindnet-720125) DBG | </ip>
I1018 12:26:39.842958 54024 main.go:141] libmachine: (kindnet-720125) DBG | </network>
I1018 12:26:39.842975 54024 main.go:141] libmachine: (kindnet-720125) Downloading /home/jenkins/minikube-integration/21647-6010/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21647-6010/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
I1018 12:26:39.842995 54024 main.go:141] libmachine: (kindnet-720125) DBG |
I1018 12:26:39.843018 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:39.842834 54053 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21647-6010/.minikube
I1018 12:26:40.099390 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:40.099247 54053 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125/id_rsa...
I1018 12:26:40.381985 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:40.381830 54053 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125/kindnet-720125.rawdisk...
I1018 12:26:40.382025 54024 main.go:141] libmachine: (kindnet-720125) DBG | Writing magic tar header
I1018 12:26:40.382039 54024 main.go:141] libmachine: (kindnet-720125) DBG | Writing SSH key tar header
I1018 12:26:40.382049 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:40.381994 54053 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125 ...
I1018 12:26:40.382145 54024 main.go:141] libmachine: (kindnet-720125) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125
I1018 12:26:40.382185 54024 main.go:141] libmachine: (kindnet-720125) setting executable bit set on /home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125 (perms=drwx------)
I1018 12:26:40.382204 54024 main.go:141] libmachine: (kindnet-720125) setting executable bit set on /home/jenkins/minikube-integration/21647-6010/.minikube/machines (perms=drwxr-xr-x)
I1018 12:26:40.382225 54024 main.go:141] libmachine: (kindnet-720125) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21647-6010/.minikube/machines
I1018 12:26:40.382245 54024 main.go:141] libmachine: (kindnet-720125) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21647-6010/.minikube
I1018 12:26:40.382257 54024 main.go:141] libmachine: (kindnet-720125) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21647-6010
I1018 12:26:40.382268 54024 main.go:141] libmachine: (kindnet-720125) DBG | checking permissions on dir: /home/jenkins/minikube-integration
I1018 12:26:40.382278 54024 main.go:141] libmachine: (kindnet-720125) DBG | checking permissions on dir: /home/jenkins
I1018 12:26:40.382302 54024 main.go:141] libmachine: (kindnet-720125) setting executable bit set on /home/jenkins/minikube-integration/21647-6010/.minikube (perms=drwxr-xr-x)
I1018 12:26:40.382314 54024 main.go:141] libmachine: (kindnet-720125) setting executable bit set on /home/jenkins/minikube-integration/21647-6010 (perms=drwxrwxr-x)
I1018 12:26:40.382334 54024 main.go:141] libmachine: (kindnet-720125) DBG | checking permissions on dir: /home
I1018 12:26:40.382345 54024 main.go:141] libmachine: (kindnet-720125) DBG | skipping /home - not owner
I1018 12:26:40.382356 54024 main.go:141] libmachine: (kindnet-720125) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1018 12:26:40.382367 54024 main.go:141] libmachine: (kindnet-720125) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
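The "Creating ssh key" / "Fixing permissions" steps above prepare the machine directory before the VM is defined. A minimal, hypothetical sketch of the keypair part follows (it is not minikube's actual common.go code; the golang.org/x/crypto/ssh package and the temp directory are assumptions for illustration):

// Hypothetical sketch: generate a machine SSH keypair (id_rsa / id_rsa.pub)
// the way a driver's "Creating ssh key" step might; not minikube's actual code.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"path/filepath"

	"golang.org/x/crypto/ssh"
)

func writeSSHKey(machineDir string) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	// Private key as PEM (PKCS#1), mode 0600 like a normal id_rsa.
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile(filepath.Join(machineDir, "id_rsa"), privPEM, 0o600); err != nil {
		return err
	}
	// Public half in authorized_keys format, to be injected into the guest.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(machineDir, "id_rsa.pub"), ssh.MarshalAuthorizedKey(pub), 0o644)
}

func main() {
	dir, err := os.MkdirTemp("", "machine")
	if err != nil {
		panic(err)
	}
	if err := writeSSHKey(dir); err != nil {
		panic(err)
	}
	fmt.Println("wrote keypair under", dir)
}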
I1018 12:26:40.382376 54024 main.go:141] libmachine: (kindnet-720125) defining domain...
I1018 12:26:40.383798 54024 main.go:141] libmachine: (kindnet-720125) defining domain using XML:
I1018 12:26:40.383831 54024 main.go:141] libmachine: (kindnet-720125) <domain type='kvm'>
I1018 12:26:40.383842 54024 main.go:141] libmachine: (kindnet-720125) <name>kindnet-720125</name>
I1018 12:26:40.383853 54024 main.go:141] libmachine: (kindnet-720125) <memory unit='MiB'>3072</memory>
I1018 12:26:40.383858 54024 main.go:141] libmachine: (kindnet-720125) <vcpu>2</vcpu>
I1018 12:26:40.383862 54024 main.go:141] libmachine: (kindnet-720125) <features>
I1018 12:26:40.383867 54024 main.go:141] libmachine: (kindnet-720125) <acpi/>
I1018 12:26:40.383875 54024 main.go:141] libmachine: (kindnet-720125) <apic/>
I1018 12:26:40.383882 54024 main.go:141] libmachine: (kindnet-720125) <pae/>
I1018 12:26:40.383886 54024 main.go:141] libmachine: (kindnet-720125) </features>
I1018 12:26:40.383891 54024 main.go:141] libmachine: (kindnet-720125) <cpu mode='host-passthrough'>
I1018 12:26:40.383898 54024 main.go:141] libmachine: (kindnet-720125) </cpu>
I1018 12:26:40.383905 54024 main.go:141] libmachine: (kindnet-720125) <os>
I1018 12:26:40.383916 54024 main.go:141] libmachine: (kindnet-720125) <type>hvm</type>
I1018 12:26:40.383924 54024 main.go:141] libmachine: (kindnet-720125) <boot dev='cdrom'/>
I1018 12:26:40.383934 54024 main.go:141] libmachine: (kindnet-720125) <boot dev='hd'/>
I1018 12:26:40.383944 54024 main.go:141] libmachine: (kindnet-720125) <bootmenu enable='no'/>
I1018 12:26:40.383948 54024 main.go:141] libmachine: (kindnet-720125) </os>
I1018 12:26:40.383953 54024 main.go:141] libmachine: (kindnet-720125) <devices>
I1018 12:26:40.383957 54024 main.go:141] libmachine: (kindnet-720125) <disk type='file' device='cdrom'>
I1018 12:26:40.383997 54024 main.go:141] libmachine: (kindnet-720125) <source file='/home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125/boot2docker.iso'/>
I1018 12:26:40.384023 54024 main.go:141] libmachine: (kindnet-720125) <target dev='hdc' bus='scsi'/>
I1018 12:26:40.384037 54024 main.go:141] libmachine: (kindnet-720125) <readonly/>
I1018 12:26:40.384051 54024 main.go:141] libmachine: (kindnet-720125) </disk>
I1018 12:26:40.384065 54024 main.go:141] libmachine: (kindnet-720125) <disk type='file' device='disk'>
I1018 12:26:40.384079 54024 main.go:141] libmachine: (kindnet-720125) <driver name='qemu' type='raw' cache='default' io='threads' />
I1018 12:26:40.384096 54024 main.go:141] libmachine: (kindnet-720125) <source file='/home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125/kindnet-720125.rawdisk'/>
I1018 12:26:40.384108 54024 main.go:141] libmachine: (kindnet-720125) <target dev='hda' bus='virtio'/>
I1018 12:26:40.384119 54024 main.go:141] libmachine: (kindnet-720125) </disk>
I1018 12:26:40.384133 54024 main.go:141] libmachine: (kindnet-720125) <interface type='network'>
I1018 12:26:40.384146 54024 main.go:141] libmachine: (kindnet-720125) <source network='mk-kindnet-720125'/>
I1018 12:26:40.384157 54024 main.go:141] libmachine: (kindnet-720125) <model type='virtio'/>
I1018 12:26:40.384168 54024 main.go:141] libmachine: (kindnet-720125) </interface>
I1018 12:26:40.384179 54024 main.go:141] libmachine: (kindnet-720125) <interface type='network'>
I1018 12:26:40.384192 54024 main.go:141] libmachine: (kindnet-720125) <source network='default'/>
I1018 12:26:40.384202 54024 main.go:141] libmachine: (kindnet-720125) <model type='virtio'/>
I1018 12:26:40.384216 54024 main.go:141] libmachine: (kindnet-720125) </interface>
I1018 12:26:40.384230 54024 main.go:141] libmachine: (kindnet-720125) <serial type='pty'>
I1018 12:26:40.384236 54024 main.go:141] libmachine: (kindnet-720125) <target port='0'/>
I1018 12:26:40.384245 54024 main.go:141] libmachine: (kindnet-720125) </serial>
I1018 12:26:40.384254 54024 main.go:141] libmachine: (kindnet-720125) <console type='pty'>
I1018 12:26:40.384266 54024 main.go:141] libmachine: (kindnet-720125) <target type='serial' port='0'/>
I1018 12:26:40.384277 54024 main.go:141] libmachine: (kindnet-720125) </console>
I1018 12:26:40.384304 54024 main.go:141] libmachine: (kindnet-720125) <rng model='virtio'>
I1018 12:26:40.384323 54024 main.go:141] libmachine: (kindnet-720125) <backend model='random'>/dev/random</backend>
I1018 12:26:40.384332 54024 main.go:141] libmachine: (kindnet-720125) </rng>
I1018 12:26:40.384340 54024 main.go:141] libmachine: (kindnet-720125) </devices>
I1018 12:26:40.384354 54024 main.go:141] libmachine: (kindnet-720125) </domain>
I1018 12:26:40.384364 54024 main.go:141] libmachine: (kindnet-720125)
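The <domain> document printed above is handed to libvirt to register the VM. A minimal sketch of that step, assuming the libvirt Go bindings (import path libvirt.org/go/libvirt in current releases) and an XML file on disk; this is not the minikube driver code itself:

// Hypothetical sketch: define a KVM domain from an XML document like the one
// logged above. DefineXML registers the domain persistently but does not start it.
package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	xml, err := os.ReadFile("kindnet-720125.xml") // the <domain> document from the log
	if err != nil {
		panic(err)
	}
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	name, _ := dom.GetName()
	fmt.Println("defined domain:", name)
}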
I1018 12:26:40.388970 54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:3f:a0:78 in network default
I1018 12:26:40.389652 54024 main.go:141] libmachine: (kindnet-720125) starting domain...
I1018 12:26:40.389680 54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
I1018 12:26:40.389688 54024 main.go:141] libmachine: (kindnet-720125) ensuring networks are active...
I1018 12:26:40.390420 54024 main.go:141] libmachine: (kindnet-720125) Ensuring network default is active
I1018 12:26:40.390825 54024 main.go:141] libmachine: (kindnet-720125) Ensuring network mk-kindnet-720125 is active
I1018 12:26:40.391737 54024 main.go:141] libmachine: (kindnet-720125) getting domain XML...
I1018 12:26:40.393514 54024 main.go:141] libmachine: (kindnet-720125) DBG | starting domain XML:
I1018 12:26:40.393530 54024 main.go:141] libmachine: (kindnet-720125) DBG | <domain type='kvm'>
I1018 12:26:40.393539 54024 main.go:141] libmachine: (kindnet-720125) DBG | <name>kindnet-720125</name>
I1018 12:26:40.393548 54024 main.go:141] libmachine: (kindnet-720125) DBG | <uuid>d3c666c7-5967-40a8-9b36-6cfb4dcc1fb1</uuid>
I1018 12:26:40.393556 54024 main.go:141] libmachine: (kindnet-720125) DBG | <memory unit='KiB'>3145728</memory>
I1018 12:26:40.393564 54024 main.go:141] libmachine: (kindnet-720125) DBG | <currentMemory unit='KiB'>3145728</currentMemory>
I1018 12:26:40.393573 54024 main.go:141] libmachine: (kindnet-720125) DBG | <vcpu placement='static'>2</vcpu>
I1018 12:26:40.393580 54024 main.go:141] libmachine: (kindnet-720125) DBG | <os>
I1018 12:26:40.393593 54024 main.go:141] libmachine: (kindnet-720125) DBG | <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
I1018 12:26:40.393629 54024 main.go:141] libmachine: (kindnet-720125) DBG | <boot dev='cdrom'/>
I1018 12:26:40.393654 54024 main.go:141] libmachine: (kindnet-720125) DBG | <boot dev='hd'/>
I1018 12:26:40.393666 54024 main.go:141] libmachine: (kindnet-720125) DBG | <bootmenu enable='no'/>
I1018 12:26:40.393675 54024 main.go:141] libmachine: (kindnet-720125) DBG | </os>
I1018 12:26:40.393682 54024 main.go:141] libmachine: (kindnet-720125) DBG | <features>
I1018 12:26:40.393690 54024 main.go:141] libmachine: (kindnet-720125) DBG | <acpi/>
I1018 12:26:40.393698 54024 main.go:141] libmachine: (kindnet-720125) DBG | <apic/>
I1018 12:26:40.393707 54024 main.go:141] libmachine: (kindnet-720125) DBG | <pae/>
I1018 12:26:40.393717 54024 main.go:141] libmachine: (kindnet-720125) DBG | </features>
I1018 12:26:40.393726 54024 main.go:141] libmachine: (kindnet-720125) DBG | <cpu mode='host-passthrough' check='none' migratable='on'/>
I1018 12:26:40.393736 54024 main.go:141] libmachine: (kindnet-720125) DBG | <clock offset='utc'/>
I1018 12:26:40.393745 54024 main.go:141] libmachine: (kindnet-720125) DBG | <on_poweroff>destroy</on_poweroff>
I1018 12:26:40.393755 54024 main.go:141] libmachine: (kindnet-720125) DBG | <on_reboot>restart</on_reboot>
I1018 12:26:40.393764 54024 main.go:141] libmachine: (kindnet-720125) DBG | <on_crash>destroy</on_crash>
I1018 12:26:40.393774 54024 main.go:141] libmachine: (kindnet-720125) DBG | <devices>
I1018 12:26:40.393805 54024 main.go:141] libmachine: (kindnet-720125) DBG | <emulator>/usr/bin/qemu-system-x86_64</emulator>
I1018 12:26:40.393828 54024 main.go:141] libmachine: (kindnet-720125) DBG | <disk type='file' device='cdrom'>
I1018 12:26:40.393841 54024 main.go:141] libmachine: (kindnet-720125) DBG | <driver name='qemu' type='raw'/>
I1018 12:26:40.393857 54024 main.go:141] libmachine: (kindnet-720125) DBG | <source file='/home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125/boot2docker.iso'/>
I1018 12:26:40.393871 54024 main.go:141] libmachine: (kindnet-720125) DBG | <target dev='hdc' bus='scsi'/>
I1018 12:26:40.393896 54024 main.go:141] libmachine: (kindnet-720125) DBG | <readonly/>
I1018 12:26:40.393912 54024 main.go:141] libmachine: (kindnet-720125) DBG | <address type='drive' controller='0' bus='0' target='0' unit='2'/>
I1018 12:26:40.393927 54024 main.go:141] libmachine: (kindnet-720125) DBG | </disk>
I1018 12:26:40.393940 54024 main.go:141] libmachine: (kindnet-720125) DBG | <disk type='file' device='disk'>
I1018 12:26:40.393952 54024 main.go:141] libmachine: (kindnet-720125) DBG | <driver name='qemu' type='raw' io='threads'/>
I1018 12:26:40.393965 54024 main.go:141] libmachine: (kindnet-720125) DBG | <source file='/home/jenkins/minikube-integration/21647-6010/.minikube/machines/kindnet-720125/kindnet-720125.rawdisk'/>
I1018 12:26:40.393971 54024 main.go:141] libmachine: (kindnet-720125) DBG | <target dev='hda' bus='virtio'/>
I1018 12:26:40.393982 54024 main.go:141] libmachine: (kindnet-720125) DBG | <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
I1018 12:26:40.393987 54024 main.go:141] libmachine: (kindnet-720125) DBG | </disk>
I1018 12:26:40.393996 54024 main.go:141] libmachine: (kindnet-720125) DBG | <controller type='usb' index='0' model='piix3-uhci'>
I1018 12:26:40.394012 54024 main.go:141] libmachine: (kindnet-720125) DBG | <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
I1018 12:26:40.394022 54024 main.go:141] libmachine: (kindnet-720125) DBG | </controller>
I1018 12:26:40.394034 54024 main.go:141] libmachine: (kindnet-720125) DBG | <controller type='pci' index='0' model='pci-root'/>
I1018 12:26:40.394049 54024 main.go:141] libmachine: (kindnet-720125) DBG | <controller type='scsi' index='0' model='lsilogic'>
I1018 12:26:40.394062 54024 main.go:141] libmachine: (kindnet-720125) DBG | <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
I1018 12:26:40.394074 54024 main.go:141] libmachine: (kindnet-720125) DBG | </controller>
I1018 12:26:40.394090 54024 main.go:141] libmachine: (kindnet-720125) DBG | <interface type='network'>
I1018 12:26:40.394101 54024 main.go:141] libmachine: (kindnet-720125) DBG | <mac address='52:54:00:0e:b7:f4'/>
I1018 12:26:40.394112 54024 main.go:141] libmachine: (kindnet-720125) DBG | <source network='mk-kindnet-720125'/>
I1018 12:26:40.394129 54024 main.go:141] libmachine: (kindnet-720125) DBG | <model type='virtio'/>
I1018 12:26:40.394144 54024 main.go:141] libmachine: (kindnet-720125) DBG | <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
I1018 12:26:40.394159 54024 main.go:141] libmachine: (kindnet-720125) DBG | </interface>
I1018 12:26:40.394175 54024 main.go:141] libmachine: (kindnet-720125) DBG | <interface type='network'>
I1018 12:26:40.394193 54024 main.go:141] libmachine: (kindnet-720125) DBG | <mac address='52:54:00:3f:a0:78'/>
I1018 12:26:40.394204 54024 main.go:141] libmachine: (kindnet-720125) DBG | <source network='default'/>
I1018 12:26:40.394215 54024 main.go:141] libmachine: (kindnet-720125) DBG | <model type='virtio'/>
I1018 12:26:40.394226 54024 main.go:141] libmachine: (kindnet-720125) DBG | <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
I1018 12:26:40.394235 54024 main.go:141] libmachine: (kindnet-720125) DBG | </interface>
I1018 12:26:40.394244 54024 main.go:141] libmachine: (kindnet-720125) DBG | <serial type='pty'>
I1018 12:26:40.394254 54024 main.go:141] libmachine: (kindnet-720125) DBG | <target type='isa-serial' port='0'>
I1018 12:26:40.394281 54024 main.go:141] libmachine: (kindnet-720125) DBG | <model name='isa-serial'/>
I1018 12:26:40.394319 54024 main.go:141] libmachine: (kindnet-720125) DBG | </target>
I1018 12:26:40.394338 54024 main.go:141] libmachine: (kindnet-720125) DBG | </serial>
I1018 12:26:40.394356 54024 main.go:141] libmachine: (kindnet-720125) DBG | <console type='pty'>
I1018 12:26:40.394370 54024 main.go:141] libmachine: (kindnet-720125) DBG | <target type='serial' port='0'/>
I1018 12:26:40.394380 54024 main.go:141] libmachine: (kindnet-720125) DBG | </console>
I1018 12:26:40.394393 54024 main.go:141] libmachine: (kindnet-720125) DBG | <input type='mouse' bus='ps2'/>
I1018 12:26:40.394402 54024 main.go:141] libmachine: (kindnet-720125) DBG | <input type='keyboard' bus='ps2'/>
I1018 12:26:40.394415 54024 main.go:141] libmachine: (kindnet-720125) DBG | <audio id='1' type='none'/>
I1018 12:26:40.394423 54024 main.go:141] libmachine: (kindnet-720125) DBG | <memballoon model='virtio'>
I1018 12:26:40.394443 54024 main.go:141] libmachine: (kindnet-720125) DBG | <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
I1018 12:26:40.394459 54024 main.go:141] libmachine: (kindnet-720125) DBG | </memballoon>
I1018 12:26:40.394470 54024 main.go:141] libmachine: (kindnet-720125) DBG | <rng model='virtio'>
I1018 12:26:40.394482 54024 main.go:141] libmachine: (kindnet-720125) DBG | <backend model='random'>/dev/random</backend>
I1018 12:26:40.394496 54024 main.go:141] libmachine: (kindnet-720125) DBG | <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
I1018 12:26:40.394505 54024 main.go:141] libmachine: (kindnet-720125) DBG | </rng>
I1018 12:26:40.394513 54024 main.go:141] libmachine: (kindnet-720125) DBG | </devices>
I1018 12:26:40.394522 54024 main.go:141] libmachine: (kindnet-720125) DBG | </domain>
I1018 12:26:40.394542 54024 main.go:141] libmachine: (kindnet-720125) DBG |
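Between the "ensuring networks are active" lines and the "waiting for domain to start" line below, the driver activates both libvirt networks and boots the defined domain. A minimal sketch of those two calls, again assuming the libvirt Go bindings rather than the actual minikube KVM driver:

// Hypothetical sketch: activate the libvirt networks and start the defined domain.
package main

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

func ensureNetworkActive(conn *libvirt.Connect, name string) error {
	net, err := conn.LookupNetworkByName(name)
	if err != nil {
		return err
	}
	defer net.Free()
	active, err := net.IsActive()
	if err != nil {
		return err
	}
	if !active {
		return net.Create() // start the network if it is not running
	}
	return nil
}

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	for _, n := range []string{"default", "mk-kindnet-720125"} {
		if err := ensureNetworkActive(conn, n); err != nil {
			panic(err)
		}
	}

	dom, err := conn.LookupDomainByName("kindnet-720125")
	if err != nil {
		panic(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil { // boots the defined-but-stopped domain
		panic(err)
	}
	fmt.Println("domain started")
}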
I1018 12:26:41.782659 54024 main.go:141] libmachine: (kindnet-720125) waiting for domain to start...
I1018 12:26:41.784057 54024 main.go:141] libmachine: (kindnet-720125) domain is now running
I1018 12:26:41.784080 54024 main.go:141] libmachine: (kindnet-720125) waiting for IP...
I1018 12:26:41.784831 54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
I1018 12:26:41.785431 54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
I1018 12:26:41.785459 54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
I1018 12:26:41.785812 54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
I1018 12:26:41.785887 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:41.785810 54053 retry.go:31] will retry after 204.388807ms: waiting for domain to come up
I1018 12:26:41.992592 54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
I1018 12:26:41.993377 54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
I1018 12:26:41.993404 54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
I1018 12:26:41.993817 54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
I1018 12:26:41.993887 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:41.993817 54053 retry.go:31] will retry after 374.842513ms: waiting for domain to come up
I1018 12:26:42.370189 54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
I1018 12:26:42.370750 54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
I1018 12:26:42.370778 54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
I1018 12:26:42.371199 54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
I1018 12:26:42.371231 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:42.371171 54053 retry.go:31] will retry after 382.206082ms: waiting for domain to come up
I1018 12:26:42.755732 54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
I1018 12:26:42.756456 54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
I1018 12:26:42.756481 54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
I1018 12:26:42.756848 54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
I1018 12:26:42.756877 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:42.756832 54053 retry.go:31] will retry after 434.513358ms: waiting for domain to come up
I1018 12:26:43.192495 54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
I1018 12:26:43.193112 54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
I1018 12:26:43.193137 54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
I1018 12:26:43.193557 54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
I1018 12:26:43.193584 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:43.193492 54053 retry.go:31] will retry after 622.396959ms: waiting for domain to come up
I1018 12:26:43.818233 54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
I1018 12:26:43.819067 54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
I1018 12:26:43.819104 54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
I1018 12:26:43.819584 54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
I1018 12:26:43.819616 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:43.819536 54053 retry.go:31] will retry after 815.894877ms: waiting for domain to come up
I1018 12:26:44.636575 54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
I1018 12:26:44.637323 54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
I1018 12:26:44.637353 54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
I1018 12:26:44.637721 54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
I1018 12:26:44.637759 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:44.637705 54053 retry.go:31] will retry after 1.067259778s: waiting for domain to come up
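The repeated "will retry after …: waiting for domain to come up" lines are a poll loop whose delay grows (with some jitter) until the domain reports an IP. A generic standard-library sketch of that pattern, purely illustrative and not minikube's retry package:

// Minimal sketch of a poll-with-growing-backoff loop like the one logged above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("domain has no IP yet")

// lookupIP stands in for querying DHCP leases / ARP for the domain's address.
func lookupIP() (string, error) { return "", errNoIP }

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Grow the delay and add jitter, mirroring the varying
		// "will retry after ..." intervals in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("timed out after %v waiting for IP", timeout)
}

func main() {
	if ip, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("got IP:", ip)
	}
}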
I1018 12:26:43.775588 52813 out.go:252] - Booting up control plane ...
I1018 12:26:43.775698 52813 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1018 12:26:43.775800 52813 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1018 12:26:43.777341 52813 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1018 12:26:43.800502 52813 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1018 12:26:43.800688 52813 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1018 12:26:43.808677 52813 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1018 12:26:43.808867 52813 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1018 12:26:43.809016 52813 kubeadm.go:318] [kubelet-start] Starting the kubelet
I1018 12:26:43.996155 52813 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1018 12:26:43.996352 52813 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1018 12:26:44.997230 52813 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001669295s
I1018 12:26:45.000531 52813 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1018 12:26:45.000667 52813 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.72.13:8443/livez
I1018 12:26:45.000814 52813 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1018 12:26:45.000947 52813 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1018 12:26:44.439803 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:44.440530 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:44.940153 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:44.940832 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:45.439761 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:45.440519 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:45.940122 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:45.940844 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:46.439543 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:46.440225 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:46.939926 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:46.940690 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:47.440072 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:47.440765 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:47.940122 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:47.940902 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:48.440476 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:48.441175 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
I1018 12:26:48.940453 52283 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
I1018 12:26:48.941104 52283 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
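The api_server.go lines above poll https://192.168.39.140:8443/healthz roughly every 500ms while the connection is still refused. A minimal sketch of that kind of check; the real code authenticates with the cluster CA instead of skipping TLS verification as this illustration does:

// Minimal sketch: poll an apiserver /healthz endpoint until it answers 200.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // same cadence as the log above
	}
	return fmt.Errorf("apiserver at %s not healthy after %v", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.140:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}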
I1018 12:26:45.706998 54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
I1018 12:26:45.707808 54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
I1018 12:26:45.707838 54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
I1018 12:26:45.708201 54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
I1018 12:26:45.708263 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:45.708195 54053 retry.go:31] will retry after 1.310839951s: waiting for domain to come up
I1018 12:26:47.020928 54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
I1018 12:26:47.021787 54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
I1018 12:26:47.021817 54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
I1018 12:26:47.022144 54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
I1018 12:26:47.022169 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:47.022128 54053 retry.go:31] will retry after 1.184917747s: waiting for domain to come up
I1018 12:26:48.208893 54024 main.go:141] libmachine: (kindnet-720125) DBG | domain kindnet-720125 has defined MAC address 52:54:00:0e:b7:f4 in network mk-kindnet-720125
I1018 12:26:48.210115 54024 main.go:141] libmachine: (kindnet-720125) DBG | no network interface addresses found for domain kindnet-720125 (source=lease)
I1018 12:26:48.210353 54024 main.go:141] libmachine: (kindnet-720125) DBG | trying to list again with source=arp
I1018 12:26:48.210378 54024 main.go:141] libmachine: (kindnet-720125) DBG | unable to find current IP address of domain kindnet-720125 in network mk-kindnet-720125 (interfaces detected: [])
I1018 12:26:48.210400 54024 main.go:141] libmachine: (kindnet-720125) DBG | I1018 12:26:48.210282 54053 retry.go:31] will retry after 2.142899269s: waiting for domain to come up
I1018 12:26:47.544998 52813 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.544969296s
I1018 12:26:49.216065 52813 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.216981491s
I1018 12:26:52.002383 52813 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.003405486s
I1018 12:26:52.027872 52813 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1018 12:26:52.051441 52813 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1018 12:26:52.081495 52813 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
I1018 12:26:52.081766 52813 kubeadm.go:318] [mark-control-plane] Marking the node auto-720125 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1018 12:26:52.106887 52813 kubeadm.go:318] [bootstrap-token] Using token: j4uyf3.sh7e2l27mgyytkmc
==> Docker <==
Oct 18 12:25:54 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:25:54.120729117Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Oct 18 12:25:54 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:25:54.212112555Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
Oct 18 12:25:54 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:25:54.212342190Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
Oct 18 12:25:54 default-k8s-diff-port-948988 cri-dockerd[1540]: time="2025-10-18T12:25:54Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
Oct 18 12:25:54 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:25:54.421865126Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Oct 18 12:26:02 default-k8s-diff-port-948988 cri-dockerd[1540]: time="2025-10-18T12:26:02Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Oct 18 12:26:06 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:06.830994794Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Oct 18 12:26:06 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:06.904996286Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
Oct 18 12:26:06 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:06.905088942Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
Oct 18 12:26:06 default-k8s-diff-port-948988 cri-dockerd[1540]: time="2025-10-18T12:26:06Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
Oct 18 12:26:06 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:06.919653355Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Oct 18 12:26:06 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:06.919692389Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Oct 18 12:26:06 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:06.923070969Z" level=error msg="unexpected HTTP error handling" error="<nil>"
Oct 18 12:26:06 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:06.924597650Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Oct 18 12:26:14 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:14.766195371Z" level=info msg="ignoring event" container=28ffefdfcaefaa0dcc5a6077bf470cdb9475d6e21b7a7d96be86de74a8777734 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 18 12:26:48 default-k8s-diff-port-948988 cri-dockerd[1540]: time="2025-10-18T12:26:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-jc7tz_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"50ccc6bf5c1dc8dbc44839aac4aaf80b91e88cfa36a35e71c99ecbc99a5d2efb\""
Oct 18 12:26:48 default-k8s-diff-port-948988 cri-dockerd[1540]: time="2025-10-18T12:26:48Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
Oct 18 12:26:49 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:49.579823134Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Oct 18 12:26:49 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:49.579851904Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Oct 18 12:26:49 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:49.584080633Z" level=error msg="unexpected HTTP error handling" error="<nil>"
Oct 18 12:26:49 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:49.584132115Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
Oct 18 12:26:49 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:49.670933568Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Oct 18 12:26:50 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:50.571698862Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
Oct 18 12:26:50 default-k8s-diff-port-948988 dockerd[1170]: time="2025-10-18T12:26:50.571843908Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
Oct 18 12:26:50 default-k8s-diff-port-948988 cri-dockerd[1540]: time="2025-10-18T12:26:50Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
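The dockerd entries above correspond to pulls issued through POST /v1.46/images/create: the registry.k8s.io/echoserver:1.4 pull trips over the removed Docker schema-1 manifest support, and the fake.domain pull fails DNS resolution. A minimal sketch of issuing the same pull request against the engine's unix socket (socket path and image are the usual defaults, used here only for illustration; in the cluster these requests come from cri-dockerd, not a hand-written client):

// Minimal sketch: ask the Docker engine to pull an image via POST /images/create
// on the local unix socket, i.e. the handler seen in the log above.
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"net/url"
	"os"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// Talk HTTP over /var/run/docker.sock instead of TCP.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
			},
		},
	}
	q := url.Values{}
	q.Set("fromImage", "registry.k8s.io/echoserver")
	q.Set("tag", "1.4")
	resp, err := client.Post("http://docker/v1.46/images/create?"+q.Encode(), "text/plain", nil)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer resp.Body.Close()
	io.Copy(os.Stdout, resp.Body) // pull progress is streamed as JSON lines
}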
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
bffc616999573 6e38f40d628db 4 seconds ago Running storage-provisioner 2 002d263a57e06 storage-provisioner
3a2c1a468e77b kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 51 seconds ago Running kubernetes-dashboard 0 22320121e1a75 kubernetes-dashboard-855c9754f9-8frzf
14a606bd02ea2 52546a367cc9e About a minute ago Running coredns 1 2bf7782642e47 coredns-66bc5c9577-s7znr
3181063a95749 56cc512116c8f About a minute ago Running busybox 1 f01a1904eab6f busybox
28ffefdfcaefa 6e38f40d628db About a minute ago Exited storage-provisioner 1 002d263a57e06 storage-provisioner
e74b601e6b20b fc25172553d79 About a minute ago Running kube-proxy 1 5916362f7151c kube-proxy-hmf6q
aa45133c5292e 7dd6aaa1717ab About a minute ago Running kube-scheduler 1 c386eff006256 kube-scheduler-default-k8s-diff-port-948988
0d33563cfd415 5f1f5298c888d About a minute ago Running etcd 1 aa5a738a016e1 etcd-default-k8s-diff-port-948988
482f645840fbd c3994bc696102 About a minute ago Running kube-apiserver 1 6d80f3bf62181 kube-apiserver-default-k8s-diff-port-948988
cbcb65b91df5f c80c8dbafe7dd About a minute ago Running kube-controller-manager 1 9b74e777c1d81 kube-controller-manager-default-k8s-diff-port-948988
06b0d6a0fe73a gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 2 minutes ago Exited busybox 0 02768f34f11ea busybox
bf61d222c7e61 52546a367cc9e 2 minutes ago Exited coredns 0 4a9e23fe5352b coredns-66bc5c9577-s7znr
72d0dd1b3e6d1 fc25172553d79 2 minutes ago Exited kube-proxy 0 3b1b31ff39772 kube-proxy-hmf6q
ac171ed99aa7b 7dd6aaa1717ab 2 minutes ago Exited kube-scheduler 0 27f94a06346ec kube-scheduler-default-k8s-diff-port-948988
07dc691cd2b41 c80c8dbafe7dd 2 minutes ago Exited kube-controller-manager 0 7c2c9ab301ac9 kube-controller-manager-default-k8s-diff-port-948988
5a3d271b1a7a4 5f1f5298c888d 2 minutes ago Exited etcd 0 7776a7d62b3b1 etcd-default-k8s-diff-port-948988
5dfc625534d2e c3994bc696102 2 minutes ago Exited kube-apiserver 0 20ac876b72a06 kube-apiserver-default-k8s-diff-port-948988
==> coredns [14a606bd02ea] <==
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 8b8641eae0af5337389aa76a78f71d2e2a7bd54cc199277be5abe199aebbfd3c9e156259680c91eb397a4c282437fd35af249d42857043b32bf3beb690ad2f54
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:47328 - 15007 "HINFO IN 5766678739025722613.5866360335637854453. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.103273346s
==> coredns [bf61d222c7e6] <==
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 8b8641eae0af5337389aa76a78f71d2e2a7bd54cc199277be5abe199aebbfd3c9e156259680c91eb397a4c282437fd35af249d42857043b32bf3beb690ad2f54
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:48576 - 64076 "HINFO IN 6932009071857870960.7176900972779109838. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.13763s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> describe nodes <==
Name: default-k8s-diff-port-948988
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=default-k8s-diff-port-948988
kubernetes.io/os=linux
minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
minikube.k8s.io/name=default-k8s-diff-port-948988
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_10_18T12_24_33_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 18 Oct 2025 12:24:29 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: default-k8s-diff-port-948988
AcquireTime: <unset>
RenewTime: Sat, 18 Oct 2025 12:26:48 +0000
Conditions:
Type            Status  LastHeartbeatTime                LastTransitionTime               Reason                      Message
----            ------  -----------------                ------------------               ------                      -------
MemoryPressure  False   Sat, 18 Oct 2025 12:26:48 +0000  Sat, 18 Oct 2025 12:24:26 +0000  KubeletHasSufficientMemory  kubelet has sufficient memory available
DiskPressure    False   Sat, 18 Oct 2025 12:26:48 +0000  Sat, 18 Oct 2025 12:24:26 +0000  KubeletHasNoDiskPressure    kubelet has no disk pressure
PIDPressure     False   Sat, 18 Oct 2025 12:26:48 +0000  Sat, 18 Oct 2025 12:24:26 +0000  KubeletHasSufficientPID     kubelet has sufficient PID available
Ready           True    Sat, 18 Oct 2025 12:26:48 +0000  Sat, 18 Oct 2025 12:25:53 +0000  KubeletReady                kubelet is posting ready status
Addresses:
InternalIP: 192.168.61.154
Hostname: default-k8s-diff-port-948988
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3042712Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3042712Ki
pods: 110
System Info:
Machine ID: d7b095482f0f4bd294376564492aae84
System UUID: d7b09548-2f0f-4bd2-9437-6564492aae84
Boot ID: 5dbb338e-d666-4176-8009-ddf389982046
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://28.5.1
Kubelet Version: v1.34.1
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (11 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m5s
kube-system coredns-66bc5c9577-s7znr 100m (5%) 0 (0%) 70Mi (2%) 170Mi (5%) 2m13s
kube-system etcd-default-k8s-diff-port-948988 100m (5%) 0 (0%) 100Mi (3%) 0 (0%) 2m21s
kube-system kube-apiserver-default-k8s-diff-port-948988 250m (12%) 0 (0%) 0 (0%) 0 (0%) 2m21s
kube-system kube-controller-manager-default-k8s-diff-port-948988 200m (10%) 0 (0%) 0 (0%) 0 (0%) 2m21s
kube-system kube-proxy-hmf6q 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m15s
kube-system kube-scheduler-default-k8s-diff-port-948988 100m (5%) 0 (0%) 0 (0%) 0 (0%) 2m22s
kube-system metrics-server-746fcd58dc-7788d 100m (5%) 0 (0%) 200Mi (6%) 0 (0%) 114s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m13s
kubernetes-dashboard dashboard-metrics-scraper-6ffb444bf9-gxs6s 0 (0%) 0 (0%) 0 (0%) 0 (0%) 66s
kubernetes-dashboard kubernetes-dashboard-855c9754f9-8frzf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 66s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests     Limits
--------           --------     ------
cpu                850m (42%)   0 (0%)
memory             370Mi (12%)  170Mi (5%)
ephemeral-storage  0 (0%)       0 (0%)
hugepages-2Mi      0 (0%)       0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 2m11s kube-proxy
Normal Starting 66s kube-proxy
Normal Starting 2m29s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 2m28s (x8 over 2m28s) kubelet Node default-k8s-diff-port-948988 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m28s (x8 over 2m28s) kubelet Node default-k8s-diff-port-948988 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m28s (x7 over 2m28s) kubelet Node default-k8s-diff-port-948988 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m28s kubelet Updated Node Allocatable limit across pods
Normal Starting 2m21s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 2m21s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 2m21s kubelet Node default-k8s-diff-port-948988 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m21s kubelet Node default-k8s-diff-port-948988 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m21s kubelet Node default-k8s-diff-port-948988 status is now: NodeHasSufficientPID
Normal NodeReady 2m17s kubelet Node default-k8s-diff-port-948988 status is now: NodeReady
Normal RegisteredNode 2m16s node-controller Node default-k8s-diff-port-948988 event: Registered Node default-k8s-diff-port-948988 in Controller
Normal Starting 75s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 75s (x8 over 75s) kubelet Node default-k8s-diff-port-948988 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 75s (x8 over 75s) kubelet Node default-k8s-diff-port-948988 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 75s (x7 over 75s) kubelet Node default-k8s-diff-port-948988 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 75s kubelet Updated Node Allocatable limit across pods
Warning Rebooted 71s kubelet Node default-k8s-diff-port-948988 has been rebooted, boot id: 5dbb338e-d666-4176-8009-ddf389982046
Normal RegisteredNode 67s node-controller Node default-k8s-diff-port-948988 event: Registered Node default-k8s-diff-port-948988 in Controller
Normal Starting 5s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 5s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 5s kubelet Node default-k8s-diff-port-948988 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5s kubelet Node default-k8s-diff-port-948988 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5s kubelet Node default-k8s-diff-port-948988 status is now: NodeHasSufficientPID
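The condition rows in the describe output above come straight from the node object's status. A minimal client-go sketch that reads the same conditions (kubeconfig path is a placeholder, not taken from this run):

// Minimal sketch: read the node conditions shown above via client-go.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "default-k8s-diff-port-948988", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}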
==> dmesg <==
[Oct18 12:25] Booted with the nomodeset parameter. Only the system framebuffer will be available
[ +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
[ +0.001590] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +0.004075] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
[ +0.931702] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
[ +0.130272] kauditd_printk_skb: 1 callbacks suppressed
[ +0.102368] kauditd_printk_skb: 449 callbacks suppressed
[ +5.669077] kauditd_printk_skb: 165 callbacks suppressed
[ +5.952206] kauditd_printk_skb: 134 callbacks suppressed
[ +2.969146] kauditd_printk_skb: 264 callbacks suppressed
[Oct18 12:26] kauditd_printk_skb: 11 callbacks suppressed
[ +0.224441] kauditd_printk_skb: 35 callbacks suppressed
==> etcd [0d33563cfd41] <==
{"level":"info","ts":"2025-10-18T12:26:50.186827Z","caller":"traceutil/trace.go:172","msg":"trace[1372174769] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:752; }","duration":"399.841982ms","start":"2025-10-18T12:26:49.786974Z","end":"2025-10-18T12:26:50.186816Z","steps":["trace[1372174769] 'range keys from in-memory index tree' (duration: 399.699339ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-18T12:26:50.186874Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:26:49.786955Z","time spent":"399.895498ms","remote":"127.0.0.1:58530","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
{"level":"info","ts":"2025-10-18T12:26:50.333810Z","caller":"traceutil/trace.go:172","msg":"trace[111824645] linearizableReadLoop","detail":"{readStateIndex:805; appliedIndex:805; }","duration":"469.70081ms","start":"2025-10-18T12:26:49.864083Z","end":"2025-10-18T12:26:50.333784Z","steps":["trace[111824645] 'read index received' (duration: 469.662848ms)","trace[111824645] 'applied index is now lower than readState.Index' (duration: 36.562µs)"],"step_count":2}
{"level":"warn","ts":"2025-10-18T12:26:50.333966Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"469.888536ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-10-18T12:26:50.334000Z","caller":"traceutil/trace.go:172","msg":"trace[512175939] range","detail":"{range_begin:/registry/flowschemas; range_end:; response_count:0; response_revision:752; }","duration":"469.93891ms","start":"2025-10-18T12:26:49.864053Z","end":"2025-10-18T12:26:50.333992Z","steps":["trace[512175939] 'agreement among raft nodes before linearized reading' (duration: 469.85272ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-18T12:26:50.334133Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:26:49.864029Z","time spent":"469.995ms","remote":"127.0.0.1:59436","response type":"/etcdserverpb.KV/Range","request count":0,"request size":25,"response count":0,"response size":27,"request content":"key:\"/registry/flowschemas\" limit:1 "}
{"level":"info","ts":"2025-10-18T12:26:50.334869Z","caller":"traceutil/trace.go:172","msg":"trace[1055338688] transaction","detail":"{read_only:false; number_of_response:0; response_revision:752; }","duration":"495.901712ms","start":"2025-10-18T12:26:49.838955Z","end":"2025-10-18T12:26:50.334857Z","steps":["trace[1055338688] 'process raft request' (duration: 495.716875ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-18T12:26:50.335648Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:26:49.838929Z","time spent":"495.989792ms","remote":"127.0.0.1:58854","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":27,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-948988\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-948988\" value_size:3336 >> failure:<>"}
{"level":"info","ts":"2025-10-18T12:26:50.443549Z","caller":"traceutil/trace.go:172","msg":"trace[381001447] linearizableReadLoop","detail":"{readStateIndex:806; appliedIndex:806; }","duration":"109.522762ms","start":"2025-10-18T12:26:50.333879Z","end":"2025-10-18T12:26:50.443401Z","steps":["trace[381001447] 'read index received' (duration: 109.304835ms)","trace[381001447] 'applied index is now lower than readState.Index' (duration: 216.349µs)"],"step_count":2}
{"level":"warn","ts":"2025-10-18T12:26:50.443898Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"254.661283ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-10-18T12:26:50.444087Z","caller":"traceutil/trace.go:172","msg":"trace[269629089] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:752; }","duration":"254.861648ms","start":"2025-10-18T12:26:50.189213Z","end":"2025-10-18T12:26:50.444075Z","steps":["trace[269629089] 'agreement among raft nodes before linearized reading' (duration: 254.569015ms)"],"step_count":1}
{"level":"info","ts":"2025-10-18T12:26:50.444986Z","caller":"traceutil/trace.go:172","msg":"trace[1424081342] transaction","detail":"{read_only:false; number_of_response:0; response_revision:752; }","duration":"604.238859ms","start":"2025-10-18T12:26:49.840736Z","end":"2025-10-18T12:26:50.444975Z","steps":["trace[1424081342] 'process raft request' (duration: 603.242308ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-18T12:26:50.445058Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"481.542092ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"warn","ts":"2025-10-18T12:26:50.445075Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:26:49.840723Z","time spent":"604.304586ms","remote":"127.0.0.1:58854","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":27,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-948988\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-948988\" value_size:5080 >> failure:<>"}
{"level":"info","ts":"2025-10-18T12:26:50.445122Z","caller":"traceutil/trace.go:172","msg":"trace[399968637] range","detail":"{range_begin:/registry/certificatesigningrequests; range_end:; response_count:0; response_revision:752; }","duration":"481.574042ms","start":"2025-10-18T12:26:49.963502Z","end":"2025-10-18T12:26:50.445076Z","steps":["trace[399968637] 'agreement among raft nodes before linearized reading' (duration: 481.324719ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-18T12:26:50.445200Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:26:49.963483Z","time spent":"481.704642ms","remote":"127.0.0.1:58990","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":0,"response size":27,"request content":"key:\"/registry/certificatesigningrequests\" limit:1 "}
{"level":"info","ts":"2025-10-18T12:26:50.446712Z","caller":"traceutil/trace.go:172","msg":"trace[824860143] transaction","detail":"{read_only:false; number_of_response:0; response_revision:752; }","duration":"606.054697ms","start":"2025-10-18T12:26:49.840601Z","end":"2025-10-18T12:26:50.446656Z","steps":["trace[824860143] 'process raft request' (duration: 603.007187ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-18T12:26:50.446779Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:26:49.840584Z","time spent":"606.160126ms","remote":"127.0.0.1:58854","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":27,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-948988\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-948988\" value_size:5531 >> failure:<>"}
{"level":"info","ts":"2025-10-18T12:26:50.446897Z","caller":"traceutil/trace.go:172","msg":"trace[1942397087] transaction","detail":"{read_only:false; number_of_response:0; response_revision:752; }","duration":"606.190325ms","start":"2025-10-18T12:26:49.840699Z","end":"2025-10-18T12:26:50.446890Z","steps":["trace[1942397087] 'process raft request' (duration: 603.239357ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-18T12:26:50.446935Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-18T12:26:49.840694Z","time spent":"606.222506ms","remote":"127.0.0.1:58854","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":27,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-948988\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-948988\" value_size:4413 >> failure:<>"}
{"level":"warn","ts":"2025-10-18T12:26:50.446998Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.548699ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-948988\" limit:1 ","response":"range_response_count:1 size:4976"}
{"level":"info","ts":"2025-10-18T12:26:50.447420Z","caller":"traceutil/trace.go:172","msg":"trace[673088281] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-948988; range_end:; response_count:1; response_revision:753; }","duration":"106.587183ms","start":"2025-10-18T12:26:50.340430Z","end":"2025-10-18T12:26:50.447017Z","steps":["trace[673088281] 'agreement among raft nodes before linearized reading' (duration: 106.46749ms)"],"step_count":1}
{"level":"info","ts":"2025-10-18T12:26:50.448436Z","caller":"traceutil/trace.go:172","msg":"trace[1596410668] transaction","detail":"{read_only:false; response_revision:753; number_of_response:1; }","duration":"250.464751ms","start":"2025-10-18T12:26:50.197959Z","end":"2025-10-18T12:26:50.448424Z","steps":["trace[1596410668] 'process raft request' (duration: 246.217803ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-18T12:26:50.448558Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.631999ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-10-18T12:26:50.448589Z","caller":"traceutil/trace.go:172","msg":"trace[1722869229] range","detail":"{range_begin:/registry/runtimeclasses; range_end:; response_count:0; response_revision:753; }","duration":"100.661173ms","start":"2025-10-18T12:26:50.347914Z","end":"2025-10-18T12:26:50.448575Z","steps":["trace[1722869229] 'agreement among raft nodes before linearized reading' (duration: 100.605021ms)"],"step_count":1}
==> etcd [5a3d271b1a7a] <==
{"level":"info","ts":"2025-10-18T12:24:40.137898Z","caller":"traceutil/trace.go:172","msg":"trace[1031995627] transaction","detail":"{read_only:false; response_revision:365; number_of_response:1; }","duration":"153.504515ms","start":"2025-10-18T12:24:39.984387Z","end":"2025-10-18T12:24:40.137891Z","steps":["trace[1031995627] 'process raft request' (duration: 106.790781ms)","trace[1031995627] 'compare' (duration: 46.286033ms)"],"step_count":2}
{"level":"info","ts":"2025-10-18T12:24:40.138807Z","caller":"traceutil/trace.go:172","msg":"trace[2073145057] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"154.722362ms","start":"2025-10-18T12:24:39.984073Z","end":"2025-10-18T12:24:40.138795Z","steps":["trace[2073145057] 'process raft request' (duration: 153.550593ms)"],"step_count":1}
{"level":"info","ts":"2025-10-18T12:24:40.138990Z","caller":"traceutil/trace.go:172","msg":"trace[460852249] transaction","detail":"{read_only:false; response_revision:367; number_of_response:1; }","duration":"147.204006ms","start":"2025-10-18T12:24:39.991724Z","end":"2025-10-18T12:24:40.138928Z","steps":["trace[460852249] 'process raft request' (duration: 145.946011ms)"],"step_count":1}
{"level":"info","ts":"2025-10-18T12:24:40.139208Z","caller":"traceutil/trace.go:172","msg":"trace[1691503075] transaction","detail":"{read_only:false; response_revision:368; number_of_response:1; }","duration":"130.816492ms","start":"2025-10-18T12:24:40.008382Z","end":"2025-10-18T12:24:40.139199Z","steps":["trace[1691503075] 'process raft request' (duration: 129.325269ms)"],"step_count":1}
{"level":"info","ts":"2025-10-18T12:24:40.144497Z","caller":"traceutil/trace.go:172","msg":"trace[842550493] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"135.72185ms","start":"2025-10-18T12:24:40.008758Z","end":"2025-10-18T12:24:40.144480Z","steps":["trace[842550493] 'process raft request' (duration: 128.981035ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-18T12:24:40.144822Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.354219ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" limit:1 ","response":"range_response_count:1 size:207"}
{"level":"info","ts":"2025-10-18T12:24:40.144866Z","caller":"traceutil/trace.go:172","msg":"trace[397740631] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:370; }","duration":"122.41407ms","start":"2025-10-18T12:24:40.022443Z","end":"2025-10-18T12:24:40.144857Z","steps":["trace[397740631] 'agreement among raft nodes before linearized reading' (duration: 122.2939ms)"],"step_count":1}
{"level":"info","ts":"2025-10-18T12:25:00.231361Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2025-10-18T12:25:00.231451Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"default-k8s-diff-port-948988","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.154:2380"],"advertise-client-urls":["https://192.168.61.154:2379"]}
{"level":"error","ts":"2025-10-18T12:25:00.231556Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-10-18T12:25:07.245321Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-10-18T12:25:07.249128Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-10-18T12:25:07.249192Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3cb84593c3b1392d","current-leader-member-id":"3cb84593c3b1392d"}
{"level":"info","ts":"2025-10-18T12:25:07.249489Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
{"level":"info","ts":"2025-10-18T12:25:07.249534Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
{"level":"warn","ts":"2025-10-18T12:25:07.252745Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2025-10-18T12:25:07.252848Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"error","ts":"2025-10-18T12:25:07.252863Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"warn","ts":"2025-10-18T12:25:07.253498Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.154:2379: use of closed network connection"}
{"level":"warn","ts":"2025-10-18T12:25:07.253553Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.154:2379: use of closed network connection"}
{"level":"error","ts":"2025-10-18T12:25:07.253569Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.61.154:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-10-18T12:25:07.256384Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.61.154:2380"}
{"level":"error","ts":"2025-10-18T12:25:07.256475Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.61.154:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-10-18T12:25:07.256703Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.61.154:2380"}
{"level":"info","ts":"2025-10-18T12:25:07.256718Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"default-k8s-diff-port-948988","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.154:2380"],"advertise-client-urls":["https://192.168.61.154:2379"]}
==> kernel <==
12:26:53 up 1 min, 0 users, load average: 2.38, 0.75, 0.26
Linux default-k8s-diff-port-948988 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [482f645840fb] <==
E1018 12:25:43.880029 1 controller.go:102] "Unhandled Error" err=<
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I1018 12:25:43.880149 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I1018 12:25:43.881283 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I1018 12:25:44.600365 1 controller.go:667] quota admission added evaluator for: deployments.apps
I1018 12:25:44.665650 1 controller.go:667] quota admission added evaluator for: daemonsets.apps
I1018 12:25:44.707914 1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1018 12:25:44.717555 1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1018 12:25:46.458993 1 controller.go:667] quota admission added evaluator for: endpoints
I1018 12:25:46.554520 1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1018 12:25:46.699128 1 controller.go:667] quota admission added evaluator for: replicasets.apps
I1018 12:25:47.509491 1 controller.go:667] quota admission added evaluator for: namespaces
I1018 12:25:47.794476 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.100.186"}
I1018 12:25:47.820795 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.78.66"}
W1018 12:26:47.665841 1 handler_proxy.go:99] no RequestInfo found in the context
E1018 12:26:47.666026 1 controller.go:102] "Unhandled Error" err=<
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I1018 12:26:47.666042 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W1018 12:26:47.681677 1 handler_proxy.go:99] no RequestInfo found in the context
E1018 12:26:47.681971 1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
I1018 12:26:47.682341 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
==> kube-apiserver [5dfc625534d2] <==
W1018 12:25:09.464721 1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:09.517443 1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:09.620363 1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:09.693884 1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:09.721047 1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:09.726611 1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:09.759371 1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:09.795061 1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:09.819207 1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:09.841071 1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:09.864445 1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:09.896679 1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:09.930411 1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:09.971423 1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:10.017882 1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:10.045148 1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:10.067233 1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:10.127112 1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:10.133877 1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:10.157359 1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:10.165740 1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:10.173381 1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:10.191257 1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:10.254823 1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1018 12:25:10.300085 1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
==> kube-controller-manager [07dc691cd2b4] <==
I1018 12:24:37.212816 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
I1018 12:24:37.213552 1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
I1018 12:24:37.214863 1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
I1018 12:24:37.215195 1 shared_informer.go:356] "Caches are synced" controller="service account"
I1018 12:24:37.215506 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
I1018 12:24:37.215712 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
I1018 12:24:37.215992 1 shared_informer.go:356] "Caches are synced" controller="PV protection"
I1018 12:24:37.216210 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
I1018 12:24:37.216297 1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
I1018 12:24:37.220772 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1018 12:24:37.221277 1 shared_informer.go:356] "Caches are synced" controller="disruption"
I1018 12:24:37.229865 1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-948988" podCIDRs=["10.244.0.0/24"]
I1018 12:24:37.230483 1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
I1018 12:24:37.235336 1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
I1018 12:24:37.236208 1 shared_informer.go:356] "Caches are synced" controller="deployment"
I1018 12:24:37.243773 1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
I1018 12:24:37.261496 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
I1018 12:24:37.262756 1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
I1018 12:24:37.263515 1 shared_informer.go:356] "Caches are synced" controller="stateful set"
I1018 12:24:37.263680 1 shared_informer.go:356] "Caches are synced" controller="attach detach"
I1018 12:24:37.332884 1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
I1018 12:24:37.408817 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1018 12:24:37.409172 1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
I1018 12:24:37.409412 1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
I1018 12:24:37.433850 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
==> kube-controller-manager [cbcb65b91df5] <==
I1018 12:25:46.326514 1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
I1018 12:25:46.330568 1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
I1018 12:25:46.338200 1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
I1018 12:25:46.354827 1 shared_informer.go:356] "Caches are synced" controller="disruption"
I1018 12:25:46.354933 1 shared_informer.go:356] "Caches are synced" controller="attach detach"
I1018 12:25:46.358135 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1018 12:25:46.358166 1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
I1018 12:25:46.358174 1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
I1018 12:25:46.361699 1 shared_informer.go:356] "Caches are synced" controller="taint"
I1018 12:25:46.362331 1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
I1018 12:25:46.362518 1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-948988"
I1018 12:25:46.362582 1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
I1018 12:25:46.362715 1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
I1018 12:25:46.364998 1 shared_informer.go:356] "Caches are synced" controller="GC"
I1018 12:25:46.397419 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
E1018 12:25:47.622164 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1018 12:25:47.637442 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1018 12:25:47.640602 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1018 12:25:47.654283 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1018 12:25:47.654837 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1018 12:25:47.670862 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1018 12:25:47.673502 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9\" failed with pods \"dashboard-metrics-scraper-6ffb444bf9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
I1018 12:25:56.364778 1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
E1018 12:26:47.748771 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I1018 12:26:47.764048 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
==> kube-proxy [72d0dd1b3e6d] <==
I1018 12:24:41.564008 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1018 12:24:41.664708 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1018 12:24:41.664884 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.61.154"]
E1018 12:24:41.665067 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1018 12:24:41.766806 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1018 12:24:41.766902 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1018 12:24:41.767037 1 server_linux.go:132] "Using iptables Proxier"
I1018 12:24:41.808707 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1018 12:24:41.810126 1 server.go:527] "Version info" version="v1.34.1"
I1018 12:24:41.810170 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1018 12:24:41.819567 1 config.go:200] "Starting service config controller"
I1018 12:24:41.819614 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1018 12:24:41.819656 1 config.go:106] "Starting endpoint slice config controller"
I1018 12:24:41.819662 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1018 12:24:41.819679 1 config.go:403] "Starting serviceCIDR config controller"
I1018 12:24:41.819685 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1018 12:24:41.834904 1 config.go:309] "Starting node config controller"
I1018 12:24:41.835028 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1018 12:24:41.835056 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1018 12:24:41.927064 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1018 12:24:41.927258 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1018 12:24:41.927530 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-proxy [e74b601e6b20] <==
I1018 12:25:45.811654 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1018 12:25:45.913019 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1018 12:25:45.913130 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.61.154"]
E1018 12:25:45.913538 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1018 12:25:46.627631 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1018 12:25:46.627729 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1018 12:25:46.627769 1 server_linux.go:132] "Using iptables Proxier"
I1018 12:25:46.729383 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1018 12:25:46.742257 1 server.go:527] "Version info" version="v1.34.1"
I1018 12:25:46.742299 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1018 12:25:46.769189 1 config.go:309] "Starting node config controller"
I1018 12:25:46.769207 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1018 12:25:46.769215 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1018 12:25:46.772876 1 config.go:403] "Starting serviceCIDR config controller"
I1018 12:25:46.772985 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1018 12:25:46.773282 1 config.go:200] "Starting service config controller"
I1018 12:25:46.773361 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1018 12:25:46.773393 1 config.go:106] "Starting endpoint slice config controller"
I1018 12:25:46.773398 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1018 12:25:46.874997 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1018 12:25:46.875472 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1018 12:25:46.875491 1 shared_informer.go:356] "Caches are synced" controller="service config"
==> kube-scheduler [aa45133c5292] <==
I1018 12:25:40.892121 1 serving.go:386] Generated self-signed cert in-memory
W1018 12:25:42.779818 1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1018 12:25:42.779913 1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1018 12:25:42.779937 1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
W1018 12:25:42.779952 1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1018 12:25:42.837530 1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
I1018 12:25:42.837672 1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1018 12:25:42.850332 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1018 12:25:42.850953 1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
I1018 12:25:42.851127 1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1018 12:25:42.851921 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I1018 12:25:42.953076 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kube-scheduler [ac171ed99aa7] <==
E1018 12:24:29.521551 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1018 12:24:29.521602 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1018 12:24:29.521714 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1018 12:24:29.521771 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1018 12:24:29.521820 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1018 12:24:30.388364 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1018 12:24:30.423548 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1018 12:24:30.458398 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1018 12:24:30.471430 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1018 12:24:30.482651 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1018 12:24:30.502659 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1018 12:24:30.602254 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1018 12:24:30.613712 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1018 12:24:30.623631 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1018 12:24:30.752533 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1018 12:24:30.774425 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1018 12:24:30.882034 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1018 12:24:30.922203 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
I1018 12:24:32.510730 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1018 12:25:00.227081 1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
I1018 12:25:00.227204 1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1018 12:25:00.227889 1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
I1018 12:25:00.228116 1 server.go:263] "[graceful-termination] secure server has stopped listening"
I1018 12:25:00.228207 1 server.go:265] "[graceful-termination] secure server is exiting"
E1018 12:25:00.228229 1 run.go:72] "command failed" err="finished without leader elect"
==> kubelet <==
Oct 18 12:26:48 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:48.808146 4182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-default-k8s-diff-port-948988"
Oct 18 12:26:48 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:48.818965 4182 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-948988\" already exists" pod="kube-system/etcd-default-k8s-diff-port-948988"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.224325 4182 apiserver.go:52] "Watching apiserver"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.299725 4182 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.334900 4182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a2da4bd7-fb36-44bc-9e08-4ccbe934a19a-tmp\") pod \"storage-provisioner\" (UID: \"a2da4bd7-fb36-44bc-9e08-4ccbe934a19a\") " pod="kube-system/storage-provisioner"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.335035 4182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6dd74255-86cf-46b6-a050-2d1ec343837e-xtables-lock\") pod \"kube-proxy-hmf6q\" (UID: \"6dd74255-86cf-46b6-a050-2d1ec343837e\") " pod="kube-system/kube-proxy-hmf6q"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.335064 4182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6dd74255-86cf-46b6-a050-2d1ec343837e-lib-modules\") pod \"kube-proxy-hmf6q\" (UID: \"6dd74255-86cf-46b6-a050-2d1ec343837e\") " pod="kube-system/kube-proxy-hmf6q"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.559117 4182 scope.go:117] "RemoveContainer" containerID="28ffefdfcaefaa0dcc5a6077bf470cdb9475d6e21b7a7d96be86de74a8777734"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:49.584832 4182 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:49.584904 4182 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:49.585150 4182 kuberuntime_manager.go:1449] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-746fcd58dc-7788d_kube-system(482bf974-0dde-4e8e-abde-4c6a50f08c8d): ErrImagePull: Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" logger="UnhandledError"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:49.585190 4182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-7788d" podUID="482bf974-0dde-4e8e-abde-4c6a50f08c8d"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.834067 4182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-default-k8s-diff-port-948988"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.834883 4182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-default-k8s-diff-port-948988"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.835048 4182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-default-k8s-diff-port-948988"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.835180 4182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-default-k8s-diff-port-948988"
Oct 18 12:26:49 default-k8s-diff-port-948988 kubelet[4182]: I1018 12:26:49.835659 4182 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26d8ce1671b6d868f5c427741052d8ba6bc2581e713fc06671798cbeaa0e2467"
Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.457040 4182 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-default-k8s-diff-port-948988\" already exists" pod="kube-system/kube-scheduler-default-k8s-diff-port-948988"
Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.473284 4182 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-default-k8s-diff-port-948988\" already exists" pod="kube-system/kube-apiserver-default-k8s-diff-port-948988"
Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.474210 4182 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-default-k8s-diff-port-948988\" already exists" pod="kube-system/kube-controller-manager-default-k8s-diff-port-948988"
Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.475377 4182 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-948988\" already exists" pod="kube-system/etcd-default-k8s-diff-port-948988"
Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.587059 4182 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.587186 4182 kuberuntime_image.go:43] "Failed to pull image" err="Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.587563 4182 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-6ffb444bf9-gxs6s_kubernetes-dashboard(d9f0a621-1105-44d9-97ff-6ab18a09af31): ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" logger="UnhandledError"
Oct 18 12:26:50 default-k8s-diff-port-948988 kubelet[4182]: E1018 12:26:50.587744 4182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gxs6s" podUID="d9f0a621-1105-44d9-97ff-6ab18a09af31"
==> kubernetes-dashboard [3a2c1a468e77] <==
2025/10/18 12:26:02 Using namespace: kubernetes-dashboard
2025/10/18 12:26:02 Using in-cluster config to connect to apiserver
2025/10/18 12:26:02 Using secret token for csrf signing
2025/10/18 12:26:02 Initializing csrf token from kubernetes-dashboard-csrf secret
2025/10/18 12:26:02 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2025/10/18 12:26:02 Successful initial request to the apiserver, version: v1.34.1
2025/10/18 12:26:02 Generating JWE encryption key
2025/10/18 12:26:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2025/10/18 12:26:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2025/10/18 12:26:02 Initializing JWE encryption key from synchronized object
2025/10/18 12:26:02 Creating in-cluster Sidecar client
2025/10/18 12:26:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/10/18 12:26:02 Serving insecurely on HTTP port: 9090
2025/10/18 12:26:02 Starting overwatch
2025/10/18 12:26:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
==> storage-provisioner [28ffefdfcaef] <==
I1018 12:25:44.727571 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F1018 12:26:14.742942 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
==> storage-provisioner [bffc61699957] <==
I1018 12:26:50.783147 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1018 12:26:50.814482 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1018 12:26:50.815137 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
W1018 12:26:50.821977 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1018 12:26:50.846621 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
I1018 12:26:50.847757 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1018 12:26:50.849593 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-948988_d5651886-64a1-4b3a-a231-e6b997a61d94!
I1018 12:26:50.847834 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"da8257ea-b806-4225-a5c2-05037be28c2a", APIVersion:"v1", ResourceVersion:"762", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-948988_d5651886-64a1-4b3a-a231-e6b997a61d94 became leader
W1018 12:26:50.873898 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1018 12:26:50.904698 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
I1018 12:26:50.954115 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-948988_d5651886-64a1-4b3a-a231-e6b997a61d94!
W1018 12:26:52.910588 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1018 12:26:52.924501 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-948988 -n default-k8s-diff-port-948988
helpers_test.go:269: (dbg) Run: kubectl --context default-k8s-diff-port-948988 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-7788d dashboard-metrics-scraper-6ffb444bf9-gxs6s
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context default-k8s-diff-port-948988 describe pod metrics-server-746fcd58dc-7788d dashboard-metrics-scraper-6ffb444bf9-gxs6s
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-948988 describe pod metrics-server-746fcd58dc-7788d dashboard-metrics-scraper-6ffb444bf9-gxs6s: exit status 1 (81.481248ms)
** stderr **
Error from server (NotFound): pods "metrics-server-746fcd58dc-7788d" not found
Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-gxs6s" not found
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-948988 describe pod metrics-server-746fcd58dc-7788d dashboard-metrics-scraper-6ffb444bf9-gxs6s: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (40.41s)