=== RUN TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run: kubectl --context functional-470148 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:833: etcd is not Ready: {Phase:Running Conditions:[{Type:PodReadyToStartContainers Status:True} {Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:True} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.39.217 PodIP:192.168.39.217 StartTime:2024-08-12 10:32:08 +0000 UTC ContainerStatuses:[{Name:etcd State:{Waiting:<nil> Running:0xc001bdd110 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc0005e8070} Ready:true RestartCount:3 Image:registry.k8s.io/etcd:3.5.12-0 ImageID:docker-pullable://registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b ContainerID:docker://a25c22de2da6249de770ecc96c990b8b0e3386d4e869264ebf1f7cbf66a8fc12}]}
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:833: kube-apiserver is not Ready: {Phase:Running Conditions:[{Type:PodReadyToStartContainers Status:True} {Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.39.217 PodIP:192.168.39.217 StartTime:2024-08-12 10:33:32 +0000 UTC ContainerStatuses:[{Name:kube-apiserver State:{Waiting:<nil> Running:0xc001bdd170 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:registry.k8s.io/kube-apiserver:v1.30.3 ImageID:docker-pullable://registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c ContainerID:docker://c8647e19fcd0be1534837d157a1e464d81c60801a1dddf73e73318fbf9a0f9dd}]}
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:833: kube-controller-manager is not Ready: {Phase:Running Conditions:[{Type:PodReadyToStartContainers Status:True} {Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:True} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.39.217 PodIP:192.168.39.217 StartTime:2024-08-12 10:32:08 +0000 UTC ContainerStatuses:[{Name:kube-controller-manager State:{Waiting:<nil> Running:0xc001bdd1d0 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc0005e80e0} Ready:true RestartCount:3 Image:registry.k8s.io/kube-controller-manager:v1.30.3 ImageID:docker-pullable://registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7 ContainerID:docker://b9857f8f48fd9b2fe2d5b4fb0bf07b34494306062ed04d0f86d679e06c79f31e}]}
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
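The check above asserts the pod-level Ready condition for each control-plane pod. Note the shape of the failure: etcd and kube-controller-manager both show ContainersReady:True and a container-level Ready:true (after 3 restarts), yet their pod Ready condition is still False, while kube-apiserver's container itself is not ready (RestartCount:0, StartTime 10:33:32). A quick way to reproduce the per-pod view the test sees, assuming the functional-470148 profile is still up, is:
  kubectl --context functional-470148 get po -l tier=control-plane -n kube-system \
    -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'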
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p functional-470148 -n functional-470148
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p functional-470148 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-470148 logs -n 25: (1.100398391s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| unpause | nospam-338210 --log_dir | nospam-338210 | jenkins | v1.33.1 | 12 Aug 24 10:28 UTC | 12 Aug 24 10:28 UTC |
| | /tmp/nospam-338210 unpause | | | | | |
| unpause | nospam-338210 --log_dir | nospam-338210 | jenkins | v1.33.1 | 12 Aug 24 10:28 UTC | 12 Aug 24 10:28 UTC |
| | /tmp/nospam-338210 unpause | | | | | |
| unpause | nospam-338210 --log_dir | nospam-338210 | jenkins | v1.33.1 | 12 Aug 24 10:28 UTC | 12 Aug 24 10:28 UTC |
| | /tmp/nospam-338210 unpause | | | | | |
| stop | nospam-338210 --log_dir | nospam-338210 | jenkins | v1.33.1 | 12 Aug 24 10:28 UTC | 12 Aug 24 10:29 UTC |
| | /tmp/nospam-338210 stop | | | | | |
| stop | nospam-338210 --log_dir | nospam-338210 | jenkins | v1.33.1 | 12 Aug 24 10:29 UTC | 12 Aug 24 10:29 UTC |
| | /tmp/nospam-338210 stop | | | | | |
| stop | nospam-338210 --log_dir | nospam-338210 | jenkins | v1.33.1 | 12 Aug 24 10:29 UTC | 12 Aug 24 10:29 UTC |
| | /tmp/nospam-338210 stop | | | | | |
| delete | -p nospam-338210 | nospam-338210 | jenkins | v1.33.1 | 12 Aug 24 10:29 UTC | 12 Aug 24 10:29 UTC |
| start | -p functional-470148 | functional-470148 | jenkins | v1.33.1 | 12 Aug 24 10:29 UTC | 12 Aug 24 10:30 UTC |
| | --memory=4000 | | | | | |
| | --apiserver-port=8441 | | | | | |
| | --wait=all --driver=kvm2 | | | | | |
| start | -p functional-470148 | functional-470148 | jenkins | v1.33.1 | 12 Aug 24 10:30 UTC | 12 Aug 24 10:31 UTC |
| | --alsologtostderr -v=8 | | | | | |
| cache | functional-470148 cache add | functional-470148 | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:31 UTC |
| | registry.k8s.io/pause:3.1 | | | | | |
| cache | functional-470148 cache add | functional-470148 | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:31 UTC |
| | registry.k8s.io/pause:3.3 | | | | | |
| cache | functional-470148 cache add | functional-470148 | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:31 UTC |
| | registry.k8s.io/pause:latest | | | | | |
| cache | functional-470148 cache add | functional-470148 | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:31 UTC |
| | minikube-local-cache-test:functional-470148 | | | | | |
| cache | functional-470148 cache delete | functional-470148 | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:31 UTC |
| | minikube-local-cache-test:functional-470148 | | | | | |
| cache | delete | minikube | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:31 UTC |
| | registry.k8s.io/pause:3.3 | | | | | |
| cache | list | minikube | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:31 UTC |
| ssh | functional-470148 ssh sudo | functional-470148 | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:31 UTC |
| | crictl images | | | | | |
| ssh | functional-470148 | functional-470148 | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:31 UTC |
| | ssh sudo docker rmi | | | | | |
| | registry.k8s.io/pause:latest | | | | | |
| ssh | functional-470148 ssh | functional-470148 | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | |
| | sudo crictl inspecti | | | | | |
| | registry.k8s.io/pause:latest | | | | | |
| cache | functional-470148 cache reload | functional-470148 | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:31 UTC |
| ssh | functional-470148 ssh | functional-470148 | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:31 UTC |
| | sudo crictl inspecti | | | | | |
| | registry.k8s.io/pause:latest | | | | | |
| cache | delete | minikube | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:31 UTC |
| | registry.k8s.io/pause:3.1 | | | | | |
| cache | delete | minikube | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:31 UTC |
| | registry.k8s.io/pause:latest | | | | | |
| kubectl | functional-470148 kubectl -- | functional-470148 | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:31 UTC |
| | --context functional-470148 | | | | | |
| | get pods | | | | | |
| start | -p functional-470148 | functional-470148 | jenkins | v1.33.1 | 12 Aug 24 10:31 UTC | 12 Aug 24 10:33 UTC |
| | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision | | | | | |
| | --wait=all | | | | | |
|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/08/12 10:31:49
Running on machine: ubuntu-20-agent-7
Binary: Built with gc go1.22.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0812 10:31:49.324897 18189 out.go:291] Setting OutFile to fd 1 ...
I0812 10:31:49.325160 18189 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 10:31:49.325171 18189 out.go:304] Setting ErrFile to fd 2...
I0812 10:31:49.325175 18189 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 10:31:49.325334 18189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19409-3796/.minikube/bin
I0812 10:31:49.325852 18189 out.go:298] Setting JSON to false
I0812 10:31:49.326796 18189 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":857,"bootTime":1723457852,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0812 10:31:49.326855 18189 start.go:139] virtualization: kvm guest
I0812 10:31:49.328836 18189 out.go:177] * [functional-470148] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
I0812 10:31:49.330695 18189 out.go:177] - MINIKUBE_LOCATION=19409
I0812 10:31:49.330747 18189 notify.go:220] Checking for updates...
I0812 10:31:49.333186 18189 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0812 10:31:49.334411 18189 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19409-3796/kubeconfig
I0812 10:31:49.335670 18189 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19409-3796/.minikube
I0812 10:31:49.336895 18189 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0812 10:31:49.338020 18189 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0812 10:31:49.339647 18189 config.go:182] Loaded profile config "functional-470148": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0812 10:31:49.339726 18189 driver.go:392] Setting default libvirt URI to qemu:///system
I0812 10:31:49.340160 18189 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0812 10:31:49.340220 18189 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:31:49.354920 18189 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45375
I0812 10:31:49.355331 18189 main.go:141] libmachine: () Calling .GetVersion
I0812 10:31:49.355906 18189 main.go:141] libmachine: Using API Version 1
I0812 10:31:49.355927 18189 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:31:49.356253 18189 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:31:49.356436 18189 main.go:141] libmachine: (functional-470148) Calling .DriverName
I0812 10:31:49.388065 18189 out.go:177] * Using the kvm2 driver based on existing profile
I0812 10:31:49.389244 18189 start.go:297] selected driver: kvm2
I0812 10:31:49.389251 18189 start.go:901] validating driver "kvm2" against &{Name:functional-470148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-470148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0812 10:31:49.389339 18189 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0812 10:31:49.389645 18189 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0812 10:31:49.389702 18189 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19409-3796/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0812 10:31:49.404337 18189 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
I0812 10:31:49.405051 18189 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0812 10:31:49.405075 18189 cni.go:84] Creating CNI manager for ""
I0812 10:31:49.405085 18189 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0812 10:31:49.405152 18189 start.go:340] cluster config:
{Name:functional-470148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-470148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0812 10:31:49.405243 18189 iso.go:125] acquiring lock: {Name:mk12273493f47d7003f1469d85b691a3ad57d0c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0812 10:31:49.407097 18189 out.go:177] * Starting "functional-470148" primary control-plane node in "functional-470148" cluster
I0812 10:31:49.408179 18189 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0812 10:31:49.408216 18189 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19409-3796/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
I0812 10:31:49.408223 18189 cache.go:56] Caching tarball of preloaded images
I0812 10:31:49.408325 18189 preload.go:172] Found /home/jenkins/minikube-integration/19409-3796/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0812 10:31:49.408335 18189 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0812 10:31:49.408428 18189 profile.go:143] Saving config to /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/config.json ...
I0812 10:31:49.408610 18189 start.go:360] acquireMachinesLock for functional-470148: {Name:mkd191140573e797c993374d5c6ae46963c640c3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0812 10:31:49.408662 18189 start.go:364] duration metric: took 39.452µs to acquireMachinesLock for "functional-470148"
I0812 10:31:49.408675 18189 start.go:96] Skipping create...Using existing machine configuration
I0812 10:31:49.408680 18189 fix.go:54] fixHost starting:
I0812 10:31:49.408955 18189 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0812 10:31:49.408990 18189 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:31:49.423508 18189 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41525
I0812 10:31:49.423924 18189 main.go:141] libmachine: () Calling .GetVersion
I0812 10:31:49.424399 18189 main.go:141] libmachine: Using API Version 1
I0812 10:31:49.424420 18189 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:31:49.424712 18189 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:31:49.424864 18189 main.go:141] libmachine: (functional-470148) Calling .DriverName
I0812 10:31:49.424989 18189 main.go:141] libmachine: (functional-470148) Calling .GetState
I0812 10:31:49.426548 18189 fix.go:112] recreateIfNeeded on functional-470148: state=Running err=<nil>
W0812 10:31:49.426563 18189 fix.go:138] unexpected machine state, will restart: <nil>
I0812 10:31:49.428304 18189 out.go:177] * Updating the running kvm2 "functional-470148" VM ...
I0812 10:31:49.429490 18189 machine.go:94] provisionDockerMachine start ...
I0812 10:31:49.429504 18189 main.go:141] libmachine: (functional-470148) Calling .DriverName
I0812 10:31:49.429707 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
I0812 10:31:49.431654 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:31:49.431981 18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
I0812 10:31:49.432003 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:31:49.432120 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
I0812 10:31:49.432262 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
I0812 10:31:49.432435 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
I0812 10:31:49.432571 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
I0812 10:31:49.432762 18189 main.go:141] libmachine: Using SSH client type: native
I0812 10:31:49.432931 18189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.217 22 <nil> <nil>}
I0812 10:31:49.432936 18189 main.go:141] libmachine: About to run SSH command:
hostname
I0812 10:31:49.542627 18189 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-470148
I0812 10:31:49.542642 18189 main.go:141] libmachine: (functional-470148) Calling .GetMachineName
I0812 10:31:49.542958 18189 buildroot.go:166] provisioning hostname "functional-470148"
I0812 10:31:49.543021 18189 main.go:141] libmachine: (functional-470148) Calling .GetMachineName
I0812 10:31:49.543236 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
I0812 10:31:49.546008 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:31:49.546359 18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
I0812 10:31:49.546380 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:31:49.546531 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
I0812 10:31:49.546691 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
I0812 10:31:49.546805 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
I0812 10:31:49.546910 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
I0812 10:31:49.547049 18189 main.go:141] libmachine: Using SSH client type: native
I0812 10:31:49.547244 18189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.217 22 <nil> <nil>}
I0812 10:31:49.547254 18189 main.go:141] libmachine: About to run SSH command:
sudo hostname functional-470148 && echo "functional-470148" | sudo tee /etc/hostname
I0812 10:31:49.674258 18189 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-470148
I0812 10:31:49.674277 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
I0812 10:31:49.677173 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:31:49.677662 18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
I0812 10:31:49.677684 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:31:49.678004 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
I0812 10:31:49.678243 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
I0812 10:31:49.678480 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
I0812 10:31:49.678730 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
I0812 10:31:49.678940 18189 main.go:141] libmachine: Using SSH client type: native
I0812 10:31:49.679137 18189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.217 22 <nil> <nil>}
I0812 10:31:49.679148 18189 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sfunctional-470148' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-470148/g' /etc/hosts;
else
echo '127.0.1.1 functional-470148' | sudo tee -a /etc/hosts;
fi
fi
I0812 10:31:49.791345 18189 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0812 10:31:49.791362 18189 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19409-3796/.minikube CaCertPath:/home/jenkins/minikube-integration/19409-3796/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19409-3796/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19409-3796/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19409-3796/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19409-3796/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19409-3796/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19409-3796/.minikube}
I0812 10:31:49.791413 18189 buildroot.go:174] setting up certificates
I0812 10:31:49.791424 18189 provision.go:84] configureAuth start
I0812 10:31:49.791432 18189 main.go:141] libmachine: (functional-470148) Calling .GetMachineName
I0812 10:31:49.791733 18189 main.go:141] libmachine: (functional-470148) Calling .GetIP
I0812 10:31:49.794371 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:31:49.794679 18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
I0812 10:31:49.794701 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:31:49.794820 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
I0812 10:31:49.796847 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:31:49.797142 18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
I0812 10:31:49.797165 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:31:49.797257 18189 provision.go:143] copyHostCerts
I0812 10:31:49.797321 18189 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3796/.minikube/ca.pem, removing ...
I0812 10:31:49.797326 18189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3796/.minikube/ca.pem
I0812 10:31:49.797397 18189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3796/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19409-3796/.minikube/ca.pem (1078 bytes)
I0812 10:31:49.797498 18189 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3796/.minikube/cert.pem, removing ...
I0812 10:31:49.797502 18189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3796/.minikube/cert.pem
I0812 10:31:49.797527 18189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3796/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19409-3796/.minikube/cert.pem (1123 bytes)
I0812 10:31:49.797585 18189 exec_runner.go:144] found /home/jenkins/minikube-integration/19409-3796/.minikube/key.pem, removing ...
I0812 10:31:49.797588 18189 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19409-3796/.minikube/key.pem
I0812 10:31:49.797607 18189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19409-3796/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19409-3796/.minikube/key.pem (1679 bytes)
I0812 10:31:49.797660 18189 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19409-3796/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19409-3796/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19409-3796/.minikube/certs/ca-key.pem org=jenkins.functional-470148 san=[127.0.0.1 192.168.39.217 functional-470148 localhost minikube]
I0812 10:31:50.182597 18189 provision.go:177] copyRemoteCerts
I0812 10:31:50.182645 18189 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0812 10:31:50.182679 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
I0812 10:31:50.186066 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:31:50.186332 18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
I0812 10:31:50.186354 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:31:50.186542 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
I0812 10:31:50.186758 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
I0812 10:31:50.186897 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
I0812 10:31:50.187012 18189 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3796/.minikube/machines/functional-470148/id_rsa Username:docker}
I0812 10:31:50.285851 18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0812 10:31:50.322845 18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0812 10:31:50.354164 18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0812 10:31:50.394061 18189 provision.go:87] duration metric: took 602.59553ms to configureAuth
I0812 10:31:50.394082 18189 buildroot.go:189] setting minikube options for container-runtime
I0812 10:31:50.394290 18189 config.go:182] Loaded profile config "functional-470148": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0812 10:31:50.394325 18189 main.go:141] libmachine: (functional-470148) Calling .DriverName
I0812 10:31:50.394636 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
I0812 10:31:50.397240 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:31:50.397628 18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
I0812 10:31:50.397652 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:31:50.397805 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
I0812 10:31:50.398012 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
I0812 10:31:50.398165 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
I0812 10:31:50.398289 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
I0812 10:31:50.398414 18189 main.go:141] libmachine: Using SSH client type: native
I0812 10:31:50.398613 18189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.217 22 <nil> <nil>}
I0812 10:31:50.398619 18189 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0812 10:31:50.524165 18189 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0812 10:31:50.524180 18189 buildroot.go:70] root file system type: tmpfs
I0812 10:31:50.524277 18189 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0812 10:31:50.524289 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
I0812 10:31:50.526935 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:31:50.527187 18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
I0812 10:31:50.527208 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:31:50.527413 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
I0812 10:31:50.527622 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
I0812 10:31:50.527816 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
I0812 10:31:50.527990 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
I0812 10:31:50.528143 18189 main.go:141] libmachine: Using SSH client type: native
I0812 10:31:50.528325 18189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.217 22 <nil> <nil>}
I0812 10:31:50.528378 18189 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0812 10:31:50.657878 18189 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0812 10:31:50.657909 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
I0812 10:31:50.660749 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:31:50.661164 18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
I0812 10:31:50.661188 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:31:50.661364 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
I0812 10:31:50.661564 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
I0812 10:31:50.661712 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
I0812 10:31:50.661843 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
I0812 10:31:50.661965 18189 main.go:141] libmachine: Using SSH client type: native
I0812 10:31:50.662173 18189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.217 22 <nil> <nil>}
I0812 10:31:50.662184 18189 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
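The command above is an install-only-if-changed guard: diff exits non-zero when the freshly generated unit differs from the installed one (or the installed one is missing), and only then does the || branch move the new file into place and daemon-reload/enable/restart docker; an unchanged unit costs nothing beyond the diff. The same pattern, reduced to a sketch with illustrative placeholders (NEW, CURRENT, SERVICE are not minikube's names):
  sudo diff -u CURRENT NEW || { sudo mv NEW CURRENT; sudo systemctl daemon-reload; sudo systemctl restart SERVICE; }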
I0812 10:31:50.790209 18189 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0812 10:31:50.790225 18189 machine.go:97] duration metric: took 1.360727547s to provisionDockerMachine
I0812 10:31:50.790236 18189 start.go:293] postStartSetup for "functional-470148" (driver="kvm2")
I0812 10:31:50.790245 18189 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0812 10:31:50.790267 18189 main.go:141] libmachine: (functional-470148) Calling .DriverName
I0812 10:31:50.790633 18189 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0812 10:31:50.790662 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
I0812 10:31:50.795653 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:31:50.796211 18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
I0812 10:31:50.796227 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:31:50.796625 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
I0812 10:31:50.796952 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
I0812 10:31:50.797278 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
I0812 10:31:50.797494 18189 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3796/.minikube/machines/functional-470148/id_rsa Username:docker}
I0812 10:31:50.886234 18189 ssh_runner.go:195] Run: cat /etc/os-release
I0812 10:31:50.891780 18189 info.go:137] Remote host: Buildroot 2023.02.9
I0812 10:31:50.891805 18189 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3796/.minikube/addons for local assets ...
I0812 10:31:50.891939 18189 filesync.go:126] Scanning /home/jenkins/minikube-integration/19409-3796/.minikube/files for local assets ...
I0812 10:31:50.892017 18189 filesync.go:149] local asset: /home/jenkins/minikube-integration/19409-3796/.minikube/files/etc/ssl/certs/109682.pem -> 109682.pem in /etc/ssl/certs
I0812 10:31:50.892089 18189 filesync.go:149] local asset: /home/jenkins/minikube-integration/19409-3796/.minikube/files/etc/test/nested/copy/10968/hosts -> hosts in /etc/test/nested/copy/10968
I0812 10:31:50.892140 18189 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/10968
I0812 10:31:50.905798 18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/files/etc/ssl/certs/109682.pem --> /etc/ssl/certs/109682.pem (1708 bytes)
I0812 10:31:50.940801 18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/files/etc/test/nested/copy/10968/hosts --> /etc/test/nested/copy/10968/hosts (40 bytes)
I0812 10:31:50.974998 18189 start.go:296] duration metric: took 184.748014ms for postStartSetup
I0812 10:31:50.975030 18189 fix.go:56] duration metric: took 1.566350639s for fixHost
I0812 10:31:50.975049 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
I0812 10:31:50.977953 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:31:50.978460 18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
I0812 10:31:50.978484 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:31:50.978696 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
I0812 10:31:50.978895 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
I0812 10:31:50.979045 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
I0812 10:31:50.979195 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
I0812 10:31:50.979353 18189 main.go:141] libmachine: Using SSH client type: native
I0812 10:31:50.979516 18189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.217 22 <nil> <nil>}
I0812 10:31:50.979522 18189 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0812 10:31:51.095978 18189 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723458711.073155213
I0812 10:31:51.095994 18189 fix.go:216] guest clock: 1723458711.073155213
I0812 10:31:51.096003 18189 fix.go:229] Guest: 2024-08-12 10:31:51.073155213 +0000 UTC Remote: 2024-08-12 10:31:50.975032818 +0000 UTC m=+1.686663349 (delta=98.122395ms)
I0812 10:31:51.096052 18189 fix.go:200] guest clock delta is within tolerance: 98.122395ms
I0812 10:31:51.096057 18189 start.go:83] releasing machines lock for "functional-470148", held for 1.687388646s
I0812 10:31:51.096074 18189 main.go:141] libmachine: (functional-470148) Calling .DriverName
I0812 10:31:51.096332 18189 main.go:141] libmachine: (functional-470148) Calling .GetIP
I0812 10:31:51.099313 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:31:51.099686 18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
I0812 10:31:51.099711 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:31:51.099933 18189 main.go:141] libmachine: (functional-470148) Calling .DriverName
I0812 10:31:51.100581 18189 main.go:141] libmachine: (functional-470148) Calling .DriverName
I0812 10:31:51.100755 18189 main.go:141] libmachine: (functional-470148) Calling .DriverName
I0812 10:31:51.100846 18189 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0812 10:31:51.100881 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
I0812 10:31:51.101019 18189 ssh_runner.go:195] Run: cat /version.json
I0812 10:31:51.101037 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
I0812 10:31:51.103680 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:31:51.103984 18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
I0812 10:31:51.104007 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:31:51.104043 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:31:51.104176 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
I0812 10:31:51.104396 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
I0812 10:31:51.104442 18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
I0812 10:31:51.104458 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:31:51.104562 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
I0812 10:31:51.104566 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
I0812 10:31:51.104704 18189 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3796/.minikube/machines/functional-470148/id_rsa Username:docker}
I0812 10:31:51.104910 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
I0812 10:31:51.105113 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
I0812 10:31:51.105300 18189 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3796/.minikube/machines/functional-470148/id_rsa Username:docker}
I0812 10:31:51.208611 18189 ssh_runner.go:195] Run: systemctl --version
I0812 10:31:51.216618 18189 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0812 10:31:51.223237 18189 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0812 10:31:51.223301 18189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0812 10:31:51.236127 18189 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0812 10:31:51.236147 18189 start.go:495] detecting cgroup driver to use...
I0812 10:31:51.236285 18189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0812 10:31:51.258092 18189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0812 10:31:51.270753 18189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0812 10:31:51.288025 18189 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0812 10:31:51.288100 18189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0812 10:31:51.301819 18189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0812 10:31:51.314767 18189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0812 10:31:51.328640 18189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0812 10:31:51.342558 18189 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0812 10:31:51.355537 18189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0812 10:31:51.369032 18189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0812 10:31:51.382663 18189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0812 10:31:51.396374 18189 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0812 10:31:51.407743 18189 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0812 10:31:51.420076 18189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0812 10:31:51.610870 18189 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0812 10:31:51.641354 18189 start.go:495] detecting cgroup driver to use...
I0812 10:31:51.641450 18189 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0812 10:31:51.662164 18189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0812 10:31:51.681893 18189 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0812 10:31:51.704062 18189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0812 10:31:51.721662 18189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0812 10:31:51.738746 18189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0812 10:31:51.762088 18189 ssh_runner.go:195] Run: which cri-dockerd
I0812 10:31:51.766902 18189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0812 10:31:51.788580 18189 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0812 10:31:51.810629 18189 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0812 10:31:51.999368 18189 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0812 10:31:52.167938 18189 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0812 10:31:52.168095 18189 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0812 10:31:52.206776 18189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0812 10:31:52.370822 18189 ssh_runner.go:195] Run: sudo systemctl restart docker
I0812 10:32:05.144143 18189 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.77329475s)
I0812 10:32:05.144204 18189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0812 10:32:05.172772 18189 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
I0812 10:32:05.198016 18189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0812 10:32:05.214736 18189 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0812 10:32:05.350641 18189 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0812 10:32:05.490156 18189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0812 10:32:05.622675 18189 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0812 10:32:05.642455 18189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0812 10:32:05.657762 18189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0812 10:32:05.788401 18189 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0812 10:32:05.907094 18189 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0812 10:32:05.907154 18189 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0812 10:32:05.914368 18189 start.go:563] Will wait 60s for crictl version
I0812 10:32:05.914440 18189 ssh_runner.go:195] Run: which crictl
I0812 10:32:05.919064 18189 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0812 10:32:05.958060 18189 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.1.1
RuntimeApiVersion: v1
I0812 10:32:05.958144 18189 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0812 10:32:05.988263 18189 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0812 10:32:06.017116 18189 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
I0812 10:32:06.017162 18189 main.go:141] libmachine: (functional-470148) Calling .GetIP
I0812 10:32:06.020221 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:32:06.020577 18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
I0812 10:32:06.020614 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:32:06.020902 18189 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0812 10:32:06.027401 18189 out.go:177] - apiserver.enable-admission-plugins=NamespaceAutoProvision
I0812 10:32:06.028727 18189 kubeadm.go:883] updating cluster {Name:functional-470148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-470148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0812 10:32:06.028855 18189 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0812 10:32:06.028914 18189 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0812 10:32:06.049585 18189 docker.go:685] Got preloaded images: -- stdout --
minikube-local-cache-test:functional-470148
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/pause:latest
-- /stdout --
I0812 10:32:06.049598 18189 docker.go:615] Images already preloaded, skipping extraction
I0812 10:32:06.049665 18189 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0812 10:32:06.070440 18189 docker.go:685] Got preloaded images: -- stdout --
minikube-local-cache-test:functional-470148
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/pause:latest
-- /stdout --
I0812 10:32:06.070455 18189 cache_images.go:84] Images are preloaded, skipping loading
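[annotation] cache_images.go reaches "Images are preloaded, skipping loading" by comparing the `docker images` output above against the image list required for v1.30.3. A hedged sketch of that presence check; the required list is shortened to three entries here for brevity:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Decide whether image loading can be skipped: every required image
// must already appear in `docker images --format {{.Repository}}:{{.Tag}}`.
func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.30.3",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
	}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	for _, img := range required {
		if !have[img] {
			fmt.Println("missing, would load from cache:", img)
			return
		}
	}
	fmt.Println("Images are preloaded, skipping loading")
}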
I0812 10:32:06.070467 18189 kubeadm.go:934] updating node { 192.168.39.217 8441 v1.30.3 docker true true} ...
I0812 10:32:06.070597 18189 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-470148 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
[Install]
config:
{KubernetesVersion:v1.30.3 ClusterName:functional-470148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0812 10:32:06.070666 18189 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0812 10:32:06.140616 18189 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
I0812 10:32:06.140724 18189 cni.go:84] Creating CNI manager for ""
I0812 10:32:06.140749 18189 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0812 10:32:06.140823 18189 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0812 10:32:06.140900 18189 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8441 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-470148 NodeName:functional-470148 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0812 10:32:06.141140 18189 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.217
  bindPort: 8441
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "functional-470148"
  kubeletExtraArgs:
    node-ip: 192.168.39.217
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
  extraArgs:
    enable-admission-plugins: "NamespaceAutoProvision"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8441
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.30.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
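[annotation] The config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One quick sanity check is decoding it document by document; a sketch using gopkg.in/yaml.v3, with the on-disk path assumed to be the scp target used a few lines below:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// Walk the multi-document kubeadm config and print each document's
// apiVersion and kind; a decode error would flag broken indentation.
func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}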
I0812 10:32:06.141284 18189 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
I0812 10:32:06.152886 18189 binaries.go:44] Found k8s binaries, skipping transfer
I0812 10:32:06.152951 18189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0812 10:32:06.163458 18189 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
I0812 10:32:06.183318 18189 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0812 10:32:06.203078 18189 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2015 bytes)
I0812 10:32:06.224599 18189 ssh_runner.go:195] Run: grep 192.168.39.217 control-plane.minikube.internal$ /etc/hosts
I0812 10:32:06.229358 18189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0812 10:32:06.357393 18189 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0812 10:32:06.373737 18189 certs.go:68] Setting up /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148 for IP: 192.168.39.217
I0812 10:32:06.373753 18189 certs.go:194] generating shared ca certs ...
I0812 10:32:06.373773 18189 certs.go:226] acquiring lock for ca certs: {Name:mkadbb95e03b53e6a3c34b2efd2db9368412cbc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0812 10:32:06.373942 18189 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19409-3796/.minikube/ca.key
I0812 10:32:06.373986 18189 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19409-3796/.minikube/proxy-client-ca.key
I0812 10:32:06.373992 18189 certs.go:256] generating profile certs ...
I0812 10:32:06.374103 18189 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/client.key
I0812 10:32:06.374158 18189 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/apiserver.key.883b791d
I0812 10:32:06.374193 18189 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/proxy-client.key
I0812 10:32:06.374308 18189 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3796/.minikube/certs/10968.pem (1338 bytes)
W0812 10:32:06.374333 18189 certs.go:480] ignoring /home/jenkins/minikube-integration/19409-3796/.minikube/certs/10968_empty.pem, impossibly tiny 0 bytes
I0812 10:32:06.374339 18189 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3796/.minikube/certs/ca-key.pem (1679 bytes)
I0812 10:32:06.374357 18189 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3796/.minikube/certs/ca.pem (1078 bytes)
I0812 10:32:06.374377 18189 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3796/.minikube/certs/cert.pem (1123 bytes)
I0812 10:32:06.374400 18189 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3796/.minikube/certs/key.pem (1679 bytes)
I0812 10:32:06.374434 18189 certs.go:484] found cert: /home/jenkins/minikube-integration/19409-3796/.minikube/files/etc/ssl/certs/109682.pem (1708 bytes)
I0812 10:32:06.375049 18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0812 10:32:06.401316 18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0812 10:32:06.426749 18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0812 10:32:06.453208 18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0812 10:32:06.479475 18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I0812 10:32:06.507750 18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0812 10:32:06.534244 18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0812 10:32:06.562226 18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/profiles/functional-470148/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0812 10:32:06.589396 18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/certs/10968.pem --> /usr/share/ca-certificates/10968.pem (1338 bytes)
I0812 10:32:06.622864 18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/files/etc/ssl/certs/109682.pem --> /usr/share/ca-certificates/109682.pem (1708 bytes)
I0812 10:32:06.654544 18189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19409-3796/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0812 10:32:06.684822 18189 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0812 10:32:06.705593 18189 ssh_runner.go:195] Run: openssl version
I0812 10:32:06.712399 18189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10968.pem && ln -fs /usr/share/ca-certificates/10968.pem /etc/ssl/certs/10968.pem"
I0812 10:32:06.725189 18189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10968.pem
I0812 10:32:06.730272 18189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 10:29 /usr/share/ca-certificates/10968.pem
I0812 10:32:06.730321 18189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10968.pem
I0812 10:32:06.737236 18189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10968.pem /etc/ssl/certs/51391683.0"
I0812 10:32:06.748481 18189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109682.pem && ln -fs /usr/share/ca-certificates/109682.pem /etc/ssl/certs/109682.pem"
I0812 10:32:06.760868 18189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109682.pem
I0812 10:32:06.766375 18189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 10:29 /usr/share/ca-certificates/109682.pem
I0812 10:32:06.766428 18189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109682.pem
I0812 10:32:06.773277 18189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109682.pem /etc/ssl/certs/3ec20f2e.0"
I0812 10:32:06.784269 18189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0812 10:32:06.798184 18189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0812 10:32:06.803789 18189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 10:21 /usr/share/ca-certificates/minikubeCA.pem
I0812 10:32:06.803885 18189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0812 10:32:06.810957 18189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0812 10:32:06.822279 18189 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0812 10:32:06.828266 18189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0812 10:32:06.834643 18189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0812 10:32:06.841220 18189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0812 10:32:06.847703 18189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0812 10:32:06.854533 18189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0812 10:32:06.861525 18189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
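[annotation] Each of the openssl runs above asks one question: does this certificate expire within 86400 seconds (24 hours)? A non-zero exit would trigger regeneration. The same check in Go with crypto/x509; the path is one of the certs from this log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresSoon reports whether the PEM certificate at path expires
// within the given window, like `openssl x509 -checkend`.
func expiresSoon(path string, within time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(within).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}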
I0812 10:32:06.868440 18189 kubeadm.go:392] StartCluster: {Name:functional-470148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-470148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0812 10:32:06.868572 18189 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0812 10:32:06.888257 18189 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0812 10:32:06.900277 18189 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0812 10:32:06.900289 18189 kubeadm.go:593] restartPrimaryControlPlane start ...
I0812 10:32:06.900369 18189 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0812 10:32:06.912580 18189 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0812 10:32:06.913131 18189 kubeconfig.go:125] found "functional-470148" server: "https://192.168.39.217:8441"
I0812 10:32:06.914379 18189 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0812 10:32:06.927747 18189 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml
+++ /var/tmp/minikube/kubeadm.yaml.new
@@ -22,7 +22,7 @@
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
extraArgs:
- enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
+ enable-admission-plugins: "NamespaceAutoProvision"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
-- /stdout --
I0812 10:32:06.927756 18189 kubeadm.go:1160] stopping kube-system containers ...
I0812 10:32:06.927815 18189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0812 10:32:06.952926 18189 docker.go:483] Stopping containers: [4f0c8adf0dda 1f1124951798 16616cb9ce5d 6847d5bfe08c 6cd4ba5fbd18 ba1224227c45 7bdc8c688102 a82fb1fec552 b318d7a1a722 efdfc20ff005 e46ea15b50bc 4360cfb87e38 4051e49c8f5a d6db8459618c d8221a352bff 6da3427c4816 dc11fc027362 4ee2b1cf700c ee9b9294facf 297a7221af7c 06a49bcd2956 d0e7a3e717da 8d3b18401964 61af2576b926 aabf8fa23d86 8871e6806b3a 99e71abeb7cb 50aafe7542ee 93a24bdd7dba]
I0812 10:32:06.953014 18189 ssh_runner.go:195] Run: docker stop 4f0c8adf0dda 1f1124951798 16616cb9ce5d 6847d5bfe08c 6cd4ba5fbd18 ba1224227c45 7bdc8c688102 a82fb1fec552 b318d7a1a722 efdfc20ff005 e46ea15b50bc 4360cfb87e38 4051e49c8f5a d6db8459618c d8221a352bff 6da3427c4816 dc11fc027362 4ee2b1cf700c ee9b9294facf 297a7221af7c 06a49bcd2956 d0e7a3e717da 8d3b18401964 61af2576b926 aabf8fa23d86 8871e6806b3a 99e71abeb7cb 50aafe7542ee 93a24bdd7dba
I0812 10:32:06.979499 18189 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0812 10:32:07.022821 18189 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0812 10:32:07.034531 18189 kubeadm.go:157] found existing configuration files:
-rw------- 1 root root 5651 Aug 12 10:29 /etc/kubernetes/admin.conf
-rw------- 1 root root 5654 Aug 12 10:31 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2007 Aug 12 10:30 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5602 Aug 12 10:31 /etc/kubernetes/scheduler.conf
I0812 10:32:07.034586 18189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I0812 10:32:07.044679 18189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I0812 10:32:07.055419 18189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I0812 10:32:07.066575 18189 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0812 10:32:07.066629 18189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0812 10:32:07.078097 18189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I0812 10:32:07.088829 18189 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0812 10:32:07.088879 18189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0812 10:32:07.099853 18189 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0812 10:32:07.110821 18189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0812 10:32:07.180503 18189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0812 10:32:07.939973 18189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0812 10:32:08.173314 18189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0812 10:32:08.277495 18189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0812 10:32:08.444493 18189 api_server.go:52] waiting for apiserver process to appear ...
I0812 10:32:08.444582 18189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0812 10:32:08.944994 18189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0812 10:32:09.444618 18189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0812 10:32:09.944782 18189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0812 10:32:09.963322 18189 api_server.go:72] duration metric: took 1.518828441s to wait for apiserver process to appear ...
I0812 10:32:09.963337 18189 api_server.go:88] waiting for apiserver healthz status ...
I0812 10:32:09.963366 18189 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8441/healthz ...
I0812 10:32:13.151731 18189 api_server.go:279] https://192.168.39.217:8441/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0812 10:32:13.151753 18189 api_server.go:103] status: https://192.168.39.217:8441/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0812 10:32:13.151776 18189 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8441/healthz ...
I0812 10:32:13.189667 18189 api_server.go:279] https://192.168.39.217:8441/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0812 10:32:13.189685 18189 api_server.go:103] status: https://192.168.39.217:8441/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0812 10:32:13.464272 18189 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8441/healthz ...
I0812 10:32:13.469514 18189 api_server.go:279] https://192.168.39.217:8441/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0812 10:32:13.469536 18189 api_server.go:103] status: https://192.168.39.217:8441/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0812 10:32:13.964362 18189 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8441/healthz ...
I0812 10:32:13.969990 18189 api_server.go:279] https://192.168.39.217:8441/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0812 10:32:13.970015 18189 api_server.go:103] status: https://192.168.39.217:8441/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0812 10:32:14.463965 18189 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8441/healthz ...
I0812 10:32:14.475459 18189 api_server.go:279] https://192.168.39.217:8441/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0812 10:32:14.475479 18189 api_server.go:103] status: https://192.168.39.217:8441/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0812 10:32:14.964148 18189 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8441/healthz ...
I0812 10:32:14.970257 18189 api_server.go:279] https://192.168.39.217:8441/healthz returned 200:
ok
I0812 10:32:14.979747 18189 api_server.go:141] control plane version: v1.30.3
I0812 10:32:14.979779 18189 api_server.go:131] duration metric: took 5.016435807s to wait for apiserver health ...
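[annotation] The roughly 500ms cadence visible in the healthz timestamps above is a plain poll-until-healthy loop with a deadline. A generic sketch of that shape; the check in main is a stand-in that turns healthy after two seconds:

package main

import (
	"fmt"
	"time"
)

// waitFor polls check at a fixed interval until it succeeds or the
// timeout elapses, reporting how long the wait took.
func waitFor(check func() bool, interval, timeout time.Duration) error {
	start := time.Now()
	deadline := start.Add(timeout)
	for time.Now().Before(deadline) {
		if check() {
			fmt.Printf("duration metric: took %s\n", time.Since(start))
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("not healthy after %s", timeout)
}

func main() {
	start := time.Now()
	healthy := func() bool { return time.Since(start) > 2*time.Second }
	if err := waitFor(healthy, 500*time.Millisecond, time.Minute); err != nil {
		panic(err)
	}
}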
I0812 10:32:14.979859 18189 cni.go:84] Creating CNI manager for ""
I0812 10:32:14.979892 18189 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0812 10:32:14.982061 18189 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0812 10:32:14.984231 18189 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0812 10:32:14.997179 18189 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0812 10:32:15.026630 18189 system_pods.go:43] waiting for kube-system pods to appear ...
I0812 10:32:15.039873 18189 system_pods.go:59] 7 kube-system pods found
I0812 10:32:15.039899 18189 system_pods.go:61] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0812 10:32:15.039907 18189 system_pods.go:61] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0812 10:32:15.039922 18189 system_pods.go:61] "kube-apiserver-functional-470148" [c5774f60-aeeb-42e8-b996-40a18d4353a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0812 10:32:15.039930 18189 system_pods.go:61] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0812 10:32:15.039936 18189 system_pods.go:61] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
I0812 10:32:15.039943 18189 system_pods.go:61] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0812 10:32:15.039952 18189 system_pods.go:61] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0812 10:32:15.039958 18189 system_pods.go:74] duration metric: took 13.314671ms to wait for pod list to return data ...
I0812 10:32:15.039967 18189 node_conditions.go:102] verifying NodePressure condition ...
I0812 10:32:15.045779 18189 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0812 10:32:15.045794 18189 node_conditions.go:123] node cpu capacity is 2
I0812 10:32:15.045804 18189 node_conditions.go:105] duration metric: took 5.833898ms to run NodePressure ...
I0812 10:32:15.045823 18189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0812 10:32:15.448095 18189 kubeadm.go:724] waiting for restarted kubelet to initialise ...
I0812 10:32:15.466840 18189 kubeadm.go:739] kubelet initialised
I0812 10:32:15.466853 18189 kubeadm.go:740] duration metric: took 18.734289ms waiting for restarted kubelet to initialise ...
I0812 10:32:15.466861 18189 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0812 10:32:15.476331 18189 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-kvjbq" in "kube-system" namespace to be "Ready" ...
I0812 10:32:17.484174 18189 pod_ready.go:102] pod "coredns-7db6d8ff4d-kvjbq" in "kube-system" namespace has status "Ready":"False"
I0812 10:32:17.983220 18189 pod_ready.go:92] pod "coredns-7db6d8ff4d-kvjbq" in "kube-system" namespace has status "Ready":"True"
I0812 10:32:17.983234 18189 pod_ready.go:81] duration metric: took 2.506888368s for pod "coredns-7db6d8ff4d-kvjbq" in "kube-system" namespace to be "Ready" ...
I0812 10:32:17.983245 18189 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-470148" in "kube-system" namespace to be "Ready" ...
I0812 10:32:19.990519 18189 pod_ready.go:102] pod "etcd-functional-470148" in "kube-system" namespace has status "Ready":"False"
I0812 10:32:21.991085 18189 pod_ready.go:102] pod "etcd-functional-470148" in "kube-system" namespace has status "Ready":"False"
I0812 10:32:22.992540 18189 pod_ready.go:92] pod "etcd-functional-470148" in "kube-system" namespace has status "Ready":"True"
I0812 10:32:22.992554 18189 pod_ready.go:81] duration metric: took 5.009302185s for pod "etcd-functional-470148" in "kube-system" namespace to be "Ready" ...
I0812 10:32:22.992565 18189 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-470148" in "kube-system" namespace to be "Ready" ...
I0812 10:32:24.001002 18189 pod_ready.go:92] pod "kube-apiserver-functional-470148" in "kube-system" namespace has status "Ready":"True"
I0812 10:32:24.001020 18189 pod_ready.go:81] duration metric: took 1.008442945s for pod "kube-apiserver-functional-470148" in "kube-system" namespace to be "Ready" ...
I0812 10:32:24.001029 18189 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-470148" in "kube-system" namespace to be "Ready" ...
I0812 10:32:26.008279 18189 pod_ready.go:102] pod "kube-controller-manager-functional-470148" in "kube-system" namespace has status "Ready":"False"
I0812 10:32:27.008234 18189 pod_ready.go:92] pod "kube-controller-manager-functional-470148" in "kube-system" namespace has status "Ready":"True"
I0812 10:32:27.008246 18189 pod_ready.go:81] duration metric: took 3.007211622s for pod "kube-controller-manager-functional-470148" in "kube-system" namespace to be "Ready" ...
I0812 10:32:27.008256 18189 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xmv5n" in "kube-system" namespace to be "Ready" ...
I0812 10:32:27.013647 18189 pod_ready.go:92] pod "kube-proxy-xmv5n" in "kube-system" namespace has status "Ready":"True"
I0812 10:32:27.013657 18189 pod_ready.go:81] duration metric: took 5.395908ms for pod "kube-proxy-xmv5n" in "kube-system" namespace to be "Ready" ...
I0812 10:32:27.013663 18189 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-470148" in "kube-system" namespace to be "Ready" ...
I0812 10:32:28.515351 18189 pod_ready.go:97] error getting pod "kube-scheduler-functional-470148" in "kube-system" namespace (skipping!): Get "https://192.168.39.217:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
I0812 10:32:28.515376 18189 pod_ready.go:81] duration metric: took 1.501705591s for pod "kube-scheduler-functional-470148" in "kube-system" namespace to be "Ready" ...
E0812 10:32:28.515387 18189 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-470148" in "kube-system" namespace (skipping!): Get "https://192.168.39.217:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
I0812 10:32:28.515410 18189 pod_ready.go:38] duration metric: took 13.048540046s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
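[annotation] Every pod_ready.go line above boils down to one predicate per pod: is the PodReady condition True? A sketch of that check with client-go; the kubeconfig path and pod name are taken from this log and would differ elsewhere:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady mirrors the has-status-"Ready" checks above.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-functional-470148", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", isPodReady(pod))
}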
I0812 10:32:28.515429 18189 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0812 10:32:28.533733 18189 ops.go:34] apiserver oom_adj: -16
I0812 10:32:28.533746 18189 kubeadm.go:597] duration metric: took 21.633452504s to restartPrimaryControlPlane
I0812 10:32:28.533755 18189 kubeadm.go:394] duration metric: took 21.665330355s to StartCluster
I0812 10:32:28.533772 18189 settings.go:142] acquiring lock: {Name:mkba5c2b975cd0b8bdc203e1abd117d5ce4dcc08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0812 10:32:28.533857 18189 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19409-3796/kubeconfig
I0812 10:32:28.534632 18189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19409-3796/kubeconfig: {Name:mk907d76af9966fcc783a1f0e0b3b2a7c51b6dcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0812 10:32:28.534890 18189 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.217 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0812 10:32:28.534954 18189 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0812 10:32:28.535029 18189 addons.go:69] Setting storage-provisioner=true in profile "functional-470148"
I0812 10:32:28.535053 18189 addons.go:234] Setting addon storage-provisioner=true in "functional-470148"
W0812 10:32:28.535057 18189 addons.go:243] addon storage-provisioner should already be in state true
I0812 10:32:28.535048 18189 addons.go:69] Setting default-storageclass=true in profile "functional-470148"
I0812 10:32:28.535079 18189 host.go:66] Checking if "functional-470148" exists ...
I0812 10:32:28.535084 18189 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-470148"
I0812 10:32:28.535117 18189 config.go:182] Loaded profile config "functional-470148": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0812 10:32:28.535398 18189 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0812 10:32:28.535428 18189 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:32:28.535431 18189 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0812 10:32:28.535451 18189 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:32:28.536728 18189 out.go:177] * Verifying Kubernetes components...
I0812 10:32:28.538138 18189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0812 10:32:28.551807 18189 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36735
I0812 10:32:28.551811 18189 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43415
I0812 10:32:28.552315 18189 main.go:141] libmachine: () Calling .GetVersion
I0812 10:32:28.552406 18189 main.go:141] libmachine: () Calling .GetVersion
I0812 10:32:28.552931 18189 main.go:141] libmachine: Using API Version 1
I0812 10:32:28.552936 18189 main.go:141] libmachine: Using API Version 1
I0812 10:32:28.552942 18189 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:32:28.552949 18189 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:32:28.553278 18189 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:32:28.553361 18189 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:32:28.553532 18189 main.go:141] libmachine: (functional-470148) Calling .GetState
I0812 10:32:28.553789 18189 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0812 10:32:28.553821 18189 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:32:28.556264 18189 addons.go:234] Setting addon default-storageclass=true in "functional-470148"
W0812 10:32:28.556274 18189 addons.go:243] addon default-storageclass should already be in state true
I0812 10:32:28.556303 18189 host.go:66] Checking if "functional-470148" exists ...
I0812 10:32:28.556663 18189 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0812 10:32:28.556701 18189 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:32:28.569667 18189 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39217
I0812 10:32:28.570116 18189 main.go:141] libmachine: () Calling .GetVersion
I0812 10:32:28.570545 18189 main.go:141] libmachine: Using API Version 1
I0812 10:32:28.570555 18189 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:32:28.570868 18189 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:32:28.571014 18189 main.go:141] libmachine: (functional-470148) Calling .GetState
I0812 10:32:28.572704 18189 main.go:141] libmachine: (functional-470148) Calling .DriverName
I0812 10:32:28.574870 18189 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0812 10:32:28.575056 18189 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46869
I0812 10:32:28.575475 18189 main.go:141] libmachine: () Calling .GetVersion
I0812 10:32:28.576017 18189 main.go:141] libmachine: Using API Version 1
I0812 10:32:28.576033 18189 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:32:28.576154 18189 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0812 10:32:28.576165 18189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0812 10:32:28.576182 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
I0812 10:32:28.576402 18189 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:32:28.576993 18189 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0812 10:32:28.577021 18189 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 10:32:28.579246 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:32:28.579663 18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
I0812 10:32:28.579678 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:32:28.579867 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
I0812 10:32:28.579987 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
I0812 10:32:28.580084 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
I0812 10:32:28.580151 18189 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3796/.minikube/machines/functional-470148/id_rsa Username:docker}
I0812 10:32:28.596594 18189 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37857
I0812 10:32:28.597025 18189 main.go:141] libmachine: () Calling .GetVersion
I0812 10:32:28.597609 18189 main.go:141] libmachine: Using API Version 1
I0812 10:32:28.597623 18189 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 10:32:28.597945 18189 main.go:141] libmachine: () Calling .GetMachineName
I0812 10:32:28.598208 18189 main.go:141] libmachine: (functional-470148) Calling .GetState
I0812 10:32:28.600163 18189 main.go:141] libmachine: (functional-470148) Calling .DriverName
I0812 10:32:28.600416 18189 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0812 10:32:28.600426 18189 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0812 10:32:28.600446 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHHostname
I0812 10:32:28.603256 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:32:28.603744 18189 main.go:141] libmachine: (functional-470148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:38:04", ip: ""} in network mk-functional-470148: {Iface:virbr1 ExpiryTime:2024-08-12 11:29:27 +0000 UTC Type:0 Mac:52:54:00:f9:38:04 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-470148 Clientid:01:52:54:00:f9:38:04}
I0812 10:32:28.603769 18189 main.go:141] libmachine: (functional-470148) DBG | domain functional-470148 has defined IP address 192.168.39.217 and MAC address 52:54:00:f9:38:04 in network mk-functional-470148
I0812 10:32:28.603990 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHPort
I0812 10:32:28.604199 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHKeyPath
I0812 10:32:28.604348 18189 main.go:141] libmachine: (functional-470148) Calling .GetSSHUsername
I0812 10:32:28.604477 18189 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19409-3796/.minikube/machines/functional-470148/id_rsa Username:docker}
I0812 10:32:28.737151 18189 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0812 10:32:28.754776 18189 node_ready.go:35] waiting up to 6m0s for node "functional-470148" to be "Ready" ...
I0812 10:32:28.830404 18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
W0812 10:32:28.899372 18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:28.899400 18189 retry.go:31] will retry after 267.115997ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:28.933502 18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
W0812 10:32:29.008511 18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:29.008537 18189 retry.go:31] will retry after 359.254435ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:29.166849 18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0812 10:32:29.230500 18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:29.230522 18189 retry.go:31] will retry after 512.327925ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:29.368659 18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0812 10:32:29.433351 18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:29.433376 18189 retry.go:31] will retry after 335.410572ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:29.743890 18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0812 10:32:29.769326 18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0812 10:32:29.825487 18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:29.825516 18189 retry.go:31] will retry after 383.088186ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
W0812 10:32:29.856120 18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:29.856150 18189 retry.go:31] will retry after 725.222424ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:30.209632 18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0812 10:32:30.280949 18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:30.280977 18189 retry.go:31] will retry after 1.187875626s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:30.582441 18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0812 10:32:30.646307 18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:30.646342 18189 retry.go:31] will retry after 532.861209ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:30.756121 18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
I0812 10:32:31.179647 18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0812 10:32:31.250739 18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:31.250773 18189 retry.go:31] will retry after 899.135469ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:31.469001 18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0812 10:32:31.538660 18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:31.538689 18189 retry.go:31] will retry after 1.408200519s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:32.150259 18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0812 10:32:32.214257 18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:32.214283 18189 retry.go:31] will retry after 1.78359862s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:32.947872 18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0812 10:32:33.018991 18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:33.019017 18189 retry.go:31] will retry after 2.821630245s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:33.256056 18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
I0812 10:32:33.998465 18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0812 10:32:34.075499 18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:34.075528 18189 retry.go:31] will retry after 2.344837357s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:35.756340 18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
I0812 10:32:35.841574 18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0812 10:32:35.923563 18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:35.923590 18189 retry.go:31] will retry after 1.672401183s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:36.421118 18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0812 10:32:36.489991 18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:36.490012 18189 retry.go:31] will retry after 3.815744723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:37.596156 18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0812 10:32:37.667874 18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:37.667902 18189 retry.go:31] will retry after 5.828338709s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:38.256017 18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
I0812 10:32:40.306181 18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0812 10:32:40.374664 18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:40.374687 18189 retry.go:31] will retry after 3.82366058s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:40.755745 18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
I0812 10:32:42.755778 18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
I0812 10:32:43.496565 18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0812 10:32:43.570163 18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:43.570191 18189 retry.go:31] will retry after 8.107200931s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:44.198557 18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0812 10:32:44.261628 18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:44.261648 18189 retry.go:31] will retry after 6.162963503s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:44.756248 18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
I0812 10:32:47.255547 18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
I0812 10:32:49.756380 18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
I0812 10:32:50.424944 18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0812 10:32:50.487049 18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:50.487072 18189 retry.go:31] will retry after 8.807074684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:51.677709 18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0812 10:32:51.752116 18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:51.752139 18189 retry.go:31] will retry after 9.888706894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:52.256335 18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
I0812 10:32:54.756360 18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
I0812 10:32:57.255713 18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
I0812 10:32:59.256552 18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
I0812 10:32:59.294797 18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0812 10:32:59.364950 18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:32:59.364978 18189 retry.go:31] will retry after 23.085905643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:33:01.641184 18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0812 10:33:01.719045 18189 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:33:01.719067 18189 retry.go:31] will retry after 17.311771994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0812 10:33:01.755839 18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
I0812 10:33:03.756200 18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
I0812 10:33:06.255865 18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
I0812 10:33:08.256480 18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
I0812 10:33:10.756292 18189 node_ready.go:53] error getting node "functional-470148": Get "https://192.168.39.217:8441/api/v1/nodes/functional-470148": dial tcp 192.168.39.217:8441: connect: connection refused
I0812 10:33:12.513770 18189 node_ready.go:49] node "functional-470148" has status "Ready":"True"
I0812 10:33:12.513782 18189 node_ready.go:38] duration metric: took 43.758987086s for node "functional-470148" to be "Ready" ...
I0812 10:33:12.513791 18189 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0812 10:33:12.564650 18189 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kvjbq" in "kube-system" namespace to be "Ready" ...
I0812 10:33:12.637491 18189 pod_ready.go:97] node "functional-470148" hosting pod "coredns-7db6d8ff4d-kvjbq" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-470148" has status "Ready":"Unknown"
I0812 10:33:12.637505 18189 pod_ready.go:81] duration metric: took 72.843248ms for pod "coredns-7db6d8ff4d-kvjbq" in "kube-system" namespace to be "Ready" ...
E0812 10:33:12.637513 18189 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-470148" hosting pod "coredns-7db6d8ff4d-kvjbq" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-470148" has status "Ready":"Unknown"
I0812 10:33:12.637531 18189 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-470148" in "kube-system" namespace to be "Ready" ...
I0812 10:33:12.667977 18189 pod_ready.go:97] node "functional-470148" hosting pod "etcd-functional-470148" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-470148" has status "Ready":"Unknown"
I0812 10:33:12.667993 18189 pod_ready.go:81] duration metric: took 30.455975ms for pod "etcd-functional-470148" in "kube-system" namespace to be "Ready" ...
E0812 10:33:12.668001 18189 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-470148" hosting pod "etcd-functional-470148" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-470148" has status "Ready":"Unknown"
I0812 10:33:12.668021 18189 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-470148" in "kube-system" namespace to be "Ready" ...
I0812 10:33:12.679883 18189 pod_ready.go:97] error getting pod "kube-apiserver-functional-470148" in "kube-system" namespace (skipping!): pods "kube-apiserver-functional-470148" not found
I0812 10:33:12.679898 18189 pod_ready.go:81] duration metric: took 11.870412ms for pod "kube-apiserver-functional-470148" in "kube-system" namespace to be "Ready" ...
E0812 10:33:12.679907 18189 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-functional-470148" in "kube-system" namespace (skipping!): pods "kube-apiserver-functional-470148" not found
I0812 10:33:12.679924 18189 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-470148" in "kube-system" namespace to be "Ready" ...
I0812 10:33:12.705508 18189 pod_ready.go:97] node "functional-470148" hosting pod "kube-controller-manager-functional-470148" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-470148" has status "Ready":"Unknown"
I0812 10:33:12.705521 18189 pod_ready.go:81] duration metric: took 25.591905ms for pod "kube-controller-manager-functional-470148" in "kube-system" namespace to be "Ready" ...
E0812 10:33:12.705530 18189 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-470148" hosting pod "kube-controller-manager-functional-470148" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-470148" has status "Ready":"Unknown"
I0812 10:33:12.705546 18189 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xmv5n" in "kube-system" namespace to be "Ready" ...
I0812 10:33:12.712546 18189 pod_ready.go:97] node "functional-470148" hosting pod "kube-proxy-xmv5n" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-470148" has status "Ready":"Unknown"
I0812 10:33:12.712559 18189 pod_ready.go:81] duration metric: took 7.005502ms for pod "kube-proxy-xmv5n" in "kube-system" namespace to be "Ready" ...
E0812 10:33:12.712569 18189 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-470148" hosting pod "kube-proxy-xmv5n" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-470148" has status "Ready":"Unknown"
I0812 10:33:12.712586 18189 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-470148" in "kube-system" namespace to be "Ready" ...
I0812 10:33:12.918717 18189 pod_ready.go:97] node "functional-470148" hosting pod "kube-scheduler-functional-470148" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-470148" has status "Ready":"Unknown"
I0812 10:33:12.918729 18189 pod_ready.go:81] duration metric: took 206.138469ms for pod "kube-scheduler-functional-470148" in "kube-system" namespace to be "Ready" ...
E0812 10:33:12.918737 18189 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-470148" hosting pod "kube-scheduler-functional-470148" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-470148" has status "Ready":"Unknown"
I0812 10:33:12.918754 18189 pod_ready.go:38] duration metric: took 404.955962ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0812 10:33:12.918774 18189 api_server.go:52] waiting for apiserver process to appear ...
I0812 10:33:12.918822 18189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0812 10:33:12.936043 18189 api_server.go:72] duration metric: took 44.401129274s to wait for apiserver process to appear ...
I0812 10:33:12.936060 18189 api_server.go:88] waiting for apiserver healthz status ...
I0812 10:33:12.936076 18189 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8441/healthz ...
I0812 10:33:12.943055 18189 api_server.go:279] https://192.168.39.217:8441/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0812 10:33:12.943077 18189 api_server.go:103] status: https://192.168.39.217:8441/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0812 10:33:13.436999 18189 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8441/healthz ...
I0812 10:33:13.443284 18189 api_server.go:279] https://192.168.39.217:8441/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0812 10:33:13.443299 18189 api_server.go:103] status: https://192.168.39.217:8441/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0812 10:33:13.936279 18189 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8441/healthz ...
I0812 10:33:13.940771 18189 api_server.go:279] https://192.168.39.217:8441/healthz returned 200:
ok
I0812 10:33:13.941825 18189 api_server.go:141] control plane version: v1.30.3
I0812 10:33:13.941841 18189 api_server.go:131] duration metric: took 1.005776109s to wait for apiserver health ...
I0812 10:33:13.941847 18189 system_pods.go:43] waiting for kube-system pods to appear ...
I0812 10:33:13.947654 18189 system_pods.go:59] 7 kube-system pods found
I0812 10:33:13.947672 18189 system_pods.go:61] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
I0812 10:33:13.947677 18189 system_pods.go:61] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
I0812 10:33:13.947682 18189 system_pods.go:61] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
I0812 10:33:13.947686 18189 system_pods.go:61] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
I0812 10:33:13.947690 18189 system_pods.go:61] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
I0812 10:33:13.947693 18189 system_pods.go:61] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
I0812 10:33:13.947696 18189 system_pods.go:61] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
I0812 10:33:13.947703 18189 system_pods.go:74] duration metric: took 5.850075ms to wait for pod list to return data ...
I0812 10:33:13.947710 18189 default_sa.go:34] waiting for default service account to be created ...
I0812 10:33:13.950653 18189 default_sa.go:45] found service account: "default"
I0812 10:33:13.950664 18189 default_sa.go:55] duration metric: took 2.9492ms for default service account to be created ...
I0812 10:33:13.950672 18189 system_pods.go:116] waiting for k8s-apps to be running ...
I0812 10:33:13.956236 18189 system_pods.go:86] 7 kube-system pods found
I0812 10:33:13.956253 18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
I0812 10:33:13.956260 18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
I0812 10:33:13.956265 18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
I0812 10:33:13.956271 18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
I0812 10:33:13.956276 18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
I0812 10:33:13.956280 18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
I0812 10:33:13.956283 18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
I0812 10:33:13.956297 18189 retry.go:31] will retry after 235.617264ms: missing components: kube-apiserver
I0812 10:33:14.198800 18189 system_pods.go:86] 7 kube-system pods found
I0812 10:33:14.198815 18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
I0812 10:33:14.198819 18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
I0812 10:33:14.198823 18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
I0812 10:33:14.198826 18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
I0812 10:33:14.198828 18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
I0812 10:33:14.198832 18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
I0812 10:33:14.198835 18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
I0812 10:33:14.198848 18189 retry.go:31] will retry after 273.302224ms: missing components: kube-apiserver
I0812 10:33:14.479226 18189 system_pods.go:86] 7 kube-system pods found
I0812 10:33:14.479241 18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
I0812 10:33:14.479245 18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
I0812 10:33:14.479249 18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
I0812 10:33:14.479252 18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
I0812 10:33:14.479255 18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
I0812 10:33:14.479258 18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
I0812 10:33:14.479261 18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
I0812 10:33:14.479274 18189 retry.go:31] will retry after 340.582831ms: missing components: kube-apiserver
I0812 10:33:14.827761 18189 system_pods.go:86] 7 kube-system pods found
I0812 10:33:14.827781 18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
I0812 10:33:14.827787 18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
I0812 10:33:14.827793 18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
I0812 10:33:14.827796 18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
I0812 10:33:14.827800 18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
I0812 10:33:14.827803 18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
I0812 10:33:14.827807 18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
I0812 10:33:14.827824 18189 retry.go:31] will retry after 507.416227ms: missing components: kube-apiserver
I0812 10:33:15.342252 18189 system_pods.go:86] 7 kube-system pods found
I0812 10:33:15.342266 18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
I0812 10:33:15.342272 18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
I0812 10:33:15.342275 18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
I0812 10:33:15.342279 18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
I0812 10:33:15.342282 18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
I0812 10:33:15.342285 18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
I0812 10:33:15.342287 18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
I0812 10:33:15.342301 18189 retry.go:31] will retry after 711.212653ms: missing components: kube-apiserver
I0812 10:33:16.060674 18189 system_pods.go:86] 7 kube-system pods found
I0812 10:33:16.060689 18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
I0812 10:33:16.060693 18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
I0812 10:33:16.060697 18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
I0812 10:33:16.060700 18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
I0812 10:33:16.060702 18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
I0812 10:33:16.060705 18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
I0812 10:33:16.060708 18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
I0812 10:33:16.060721 18189 retry.go:31] will retry after 895.133355ms: missing components: kube-apiserver
I0812 10:33:16.962316 18189 system_pods.go:86] 7 kube-system pods found
I0812 10:33:16.962331 18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
I0812 10:33:16.962336 18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
I0812 10:33:16.962339 18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
I0812 10:33:16.962343 18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
I0812 10:33:16.962347 18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
I0812 10:33:16.962350 18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
I0812 10:33:16.962352 18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
I0812 10:33:16.962366 18189 retry.go:31] will retry after 1.177307444s: missing components: kube-apiserver
I0812 10:33:18.146824 18189 system_pods.go:86] 7 kube-system pods found
I0812 10:33:18.146839 18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
I0812 10:33:18.146844 18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
I0812 10:33:18.146847 18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
I0812 10:33:18.146850 18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
I0812 10:33:18.146853 18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
I0812 10:33:18.146856 18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
I0812 10:33:18.146860 18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
I0812 10:33:18.146875 18189 retry.go:31] will retry after 1.125579278s: missing components: kube-apiserver
I0812 10:33:19.031928 18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0812 10:33:19.161615 18189 main.go:141] libmachine: Making call to close driver server
I0812 10:33:19.161627 18189 main.go:141] libmachine: (functional-470148) Calling .Close
I0812 10:33:19.161946 18189 main.go:141] libmachine: Successfully made call to close driver server
I0812 10:33:19.161956 18189 main.go:141] libmachine: Making call to close connection to plugin binary
I0812 10:33:19.161966 18189 main.go:141] libmachine: Making call to close driver server
I0812 10:33:19.161974 18189 main.go:141] libmachine: (functional-470148) Calling .Close
I0812 10:33:19.162223 18189 main.go:141] libmachine: Successfully made call to close driver server
I0812 10:33:19.162235 18189 main.go:141] libmachine: Making call to close connection to plugin binary
I0812 10:33:19.168478 18189 main.go:141] libmachine: Making call to close driver server
I0812 10:33:19.168486 18189 main.go:141] libmachine: (functional-470148) Calling .Close
I0812 10:33:19.168737 18189 main.go:141] libmachine: Successfully made call to close driver server
I0812 10:33:19.168748 18189 main.go:141] libmachine: Making call to close connection to plugin binary
I0812 10:33:19.281407 18189 system_pods.go:86] 7 kube-system pods found
I0812 10:33:19.281422 18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
I0812 10:33:19.281426 18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
I0812 10:33:19.281430 18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
I0812 10:33:19.281433 18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
I0812 10:33:19.281435 18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
I0812 10:33:19.281438 18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
I0812 10:33:19.281440 18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
I0812 10:33:19.281455 18189 retry.go:31] will retry after 1.594907103s: missing components: kube-apiserver
I0812 10:33:20.883982 18189 system_pods.go:86] 7 kube-system pods found
I0812 10:33:20.883997 18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
I0812 10:33:20.884000 18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
I0812 10:33:20.884003 18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
I0812 10:33:20.884006 18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
I0812 10:33:20.884009 18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
I0812 10:33:20.884012 18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
I0812 10:33:20.884014 18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
I0812 10:33:20.884027 18189 retry.go:31] will retry after 1.709429198s: missing components: kube-apiserver
I0812 10:33:22.452284 18189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0812 10:33:22.600487 18189 system_pods.go:86] 7 kube-system pods found
I0812 10:33:22.600508 18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
I0812 10:33:22.600515 18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
I0812 10:33:22.600521 18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
I0812 10:33:22.600525 18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
I0812 10:33:22.600530 18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
I0812 10:33:22.600535 18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
I0812 10:33:22.600539 18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
I0812 10:33:22.600557 18189 retry.go:31] will retry after 2.50460952s: missing components: kube-apiserver
I0812 10:33:23.046599 18189 main.go:141] libmachine: Making call to close driver server
I0812 10:33:23.046614 18189 main.go:141] libmachine: (functional-470148) Calling .Close
I0812 10:33:23.046926 18189 main.go:141] libmachine: (functional-470148) DBG | Closing plugin on server side
I0812 10:33:23.046958 18189 main.go:141] libmachine: Successfully made call to close driver server
I0812 10:33:23.046985 18189 main.go:141] libmachine: Making call to close connection to plugin binary
I0812 10:33:23.046994 18189 main.go:141] libmachine: Making call to close driver server
I0812 10:33:23.047001 18189 main.go:141] libmachine: (functional-470148) Calling .Close
I0812 10:33:23.047245 18189 main.go:141] libmachine: Successfully made call to close driver server
I0812 10:33:23.047254 18189 main.go:141] libmachine: Making call to close connection to plugin binary
I0812 10:33:23.049104 18189 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
I0812 10:33:23.050236 18189 addons.go:510] duration metric: took 54.515282213s for enable addons: enabled=[default-storageclass storage-provisioner]
I0812 10:33:25.113823 18189 system_pods.go:86] 7 kube-system pods found
I0812 10:33:25.113838 18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
I0812 10:33:25.113842 18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
I0812 10:33:25.113845 18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
I0812 10:33:25.113848 18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
I0812 10:33:25.113851 18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
I0812 10:33:25.113854 18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
I0812 10:33:25.113856 18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
I0812 10:33:25.113869 18189 retry.go:31] will retry after 2.390372657s: missing components: kube-apiserver
I0812 10:33:27.510907 18189 system_pods.go:86] 7 kube-system pods found
I0812 10:33:27.510921 18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
I0812 10:33:27.510925 18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
I0812 10:33:27.510929 18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
I0812 10:33:27.510932 18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
I0812 10:33:27.510934 18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
I0812 10:33:27.510937 18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
I0812 10:33:27.510940 18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
I0812 10:33:27.510952 18189 retry.go:31] will retry after 2.84289009s: missing components: kube-apiserver
I0812 10:33:30.360322 18189 system_pods.go:86] 7 kube-system pods found
I0812 10:33:30.360336 18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
I0812 10:33:30.360344 18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
I0812 10:33:30.360347 18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Pending
I0812 10:33:30.360350 18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
I0812 10:33:30.360353 18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
I0812 10:33:30.360356 18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
I0812 10:33:30.360359 18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
I0812 10:33:30.360374 18189 retry.go:31] will retry after 3.975491794s: missing components: kube-apiserver
I0812 10:33:34.342461 18189 system_pods.go:86] 7 kube-system pods found
I0812 10:33:34.342477 18189 system_pods.go:89] "coredns-7db6d8ff4d-kvjbq" [814304ec-5e53-4f37-8785-64c6add328d3] Running
I0812 10:33:34.342480 18189 system_pods.go:89] "etcd-functional-470148" [3eb734ff-85c0-4aca-a917-d5cd68427a9a] Running
I0812 10:33:34.342486 18189 system_pods.go:89] "kube-apiserver-functional-470148" [8366a459-a799-48ee-a137-2a3b7cab1261] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0812 10:33:34.342490 18189 system_pods.go:89] "kube-controller-manager-functional-470148" [79b2728d-65d6-470e-bf89-6f82897b90f2] Running
I0812 10:33:34.342496 18189 system_pods.go:89] "kube-proxy-xmv5n" [33ebde81-959a-4d85-a89b-b99521c05eff] Running
I0812 10:33:34.342499 18189 system_pods.go:89] "kube-scheduler-functional-470148" [1158c6b5-7c45-4952-aa27-1d27326019ea] Running
I0812 10:33:34.342502 18189 system_pods.go:89] "storage-provisioner" [6401a106-0623-4d76-a310-52113a158364] Running
I0812 10:33:34.342508 18189 system_pods.go:126] duration metric: took 20.391831948s to wait for k8s-apps to be running ...
I0812 10:33:34.342514 18189 system_svc.go:44] waiting for kubelet service to be running ....
I0812 10:33:34.342560 18189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0812 10:33:34.358918 18189 system_svc.go:56] duration metric: took 16.390216ms WaitForService to wait for kubelet
I0812 10:33:34.358942 18189 kubeadm.go:582] duration metric: took 1m5.824030436s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0812 10:33:34.358965 18189 node_conditions.go:102] verifying NodePressure condition ...
I0812 10:33:34.363878 18189 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0812 10:33:34.363891 18189 node_conditions.go:123] node cpu capacity is 2
I0812 10:33:34.363901 18189 node_conditions.go:105] duration metric: took 4.932646ms to run NodePressure ...
I0812 10:33:34.363912 18189 start.go:241] waiting for startup goroutines ...
I0812 10:33:34.363918 18189 start.go:246] waiting for cluster config update ...
I0812 10:33:34.363927 18189 start.go:255] writing updated cluster config ...
I0812 10:33:34.364208 18189 ssh_runner.go:195] Run: rm -f paused
I0812 10:33:34.414649 18189 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
I0812 10:33:34.416576 18189 out.go:177] * Done! kubectl is now configured to use "functional-470148" cluster and "default" namespace by default
==> Docker <==
Aug 12 10:32:14 functional-470148 dockerd[6831]: time="2024-08-12T10:32:14.088857232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 10:32:14 functional-470148 dockerd[6831]: time="2024-08-12T10:32:14.088874275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 10:32:14 functional-470148 dockerd[6831]: time="2024-08-12T10:32:14.089079541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 10:32:14 functional-470148 dockerd[6831]: time="2024-08-12T10:32:14.359731114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 12 10:32:14 functional-470148 dockerd[6831]: time="2024-08-12T10:32:14.359817010Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 10:32:14 functional-470148 dockerd[6831]: time="2024-08-12T10:32:14.359832634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 10:32:14 functional-470148 dockerd[6831]: time="2024-08-12T10:32:14.361076153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 10:32:58 functional-470148 dockerd[6824]: time="2024-08-12T10:32:58.276220310Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=f506135f8c8d6425b1299c699f3cf8b56d00bdc1826587feff07686f9ad07b73
Aug 12 10:32:58 functional-470148 dockerd[6824]: time="2024-08-12T10:32:58.359486843Z" level=info msg="ignoring event" container=f506135f8c8d6425b1299c699f3cf8b56d00bdc1826587feff07686f9ad07b73 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 12 10:32:58 functional-470148 dockerd[6831]: time="2024-08-12T10:32:58.361262616Z" level=info msg="shim disconnected" id=f506135f8c8d6425b1299c699f3cf8b56d00bdc1826587feff07686f9ad07b73 namespace=moby
Aug 12 10:32:58 functional-470148 dockerd[6831]: time="2024-08-12T10:32:58.361386890Z" level=warning msg="cleaning up after shim disconnected" id=f506135f8c8d6425b1299c699f3cf8b56d00bdc1826587feff07686f9ad07b73 namespace=moby
Aug 12 10:32:58 functional-470148 dockerd[6831]: time="2024-08-12T10:32:58.361403489Z" level=info msg="cleaning up dead shim" namespace=moby
Aug 12 10:32:58 functional-470148 dockerd[6831]: time="2024-08-12T10:32:58.456732962Z" level=info msg="shim disconnected" id=b9ee3e609048a776e4d8a63d2dae98cc445d9d0de63f78a48be2e60a079d89a8 namespace=moby
Aug 12 10:32:58 functional-470148 dockerd[6824]: time="2024-08-12T10:32:58.457060373Z" level=info msg="ignoring event" container=b9ee3e609048a776e4d8a63d2dae98cc445d9d0de63f78a48be2e60a079d89a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 12 10:32:58 functional-470148 dockerd[6831]: time="2024-08-12T10:32:58.457221434Z" level=warning msg="cleaning up after shim disconnected" id=b9ee3e609048a776e4d8a63d2dae98cc445d9d0de63f78a48be2e60a079d89a8 namespace=moby
Aug 12 10:32:58 functional-470148 dockerd[6831]: time="2024-08-12T10:32:58.457285361Z" level=info msg="cleaning up dead shim" namespace=moby
Aug 12 10:33:10 functional-470148 dockerd[6831]: time="2024-08-12T10:33:10.496989562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 12 10:33:10 functional-470148 dockerd[6831]: time="2024-08-12T10:33:10.497152398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 10:33:10 functional-470148 dockerd[6831]: time="2024-08-12T10:33:10.497166573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 10:33:10 functional-470148 dockerd[6831]: time="2024-08-12T10:33:10.497270738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 10:33:10 functional-470148 cri-dockerd[7108]: time="2024-08-12T10:33:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a8a55e3be83be0967bb96880a5d5688265c092fc63c11b376e65c13596416aa9/resolv.conf as [nameserver 192.168.122.1]"
Aug 12 10:33:10 functional-470148 dockerd[6831]: time="2024-08-12T10:33:10.677172402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 12 10:33:10 functional-470148 dockerd[6831]: time="2024-08-12T10:33:10.677242091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 10:33:10 functional-470148 dockerd[6831]: time="2024-08-12T10:33:10.677253767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 10:33:10 functional-470148 dockerd[6831]: time="2024-08-12T10:33:10.677330068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
c8647e19fcd0b 1f6d574d502f3 25 seconds ago Running kube-apiserver 0 a8a55e3be83be kube-apiserver-functional-470148
5eb2a7794bcb5 cbb01a7bd410d About a minute ago Running coredns 3 c97cca48fa507 coredns-7db6d8ff4d-kvjbq
bf47ec590592c 6e38f40d628db About a minute ago Running storage-provisioner 3 e3788c8ecde71 storage-provisioner
c4a5c937b5ec6 55bb025d2cfa5 About a minute ago Running kube-proxy 3 f243e7d27f57e kube-proxy-xmv5n
a25c22de2da62 3861cfcd7c04c About a minute ago Running etcd 3 23608e8ec34d6 etcd-functional-470148
b869b0d288ea3 3edc18e7b7672 About a minute ago Running kube-scheduler 3 94eb2244f55b4 kube-scheduler-functional-470148
b9857f8f48fd9 76932a3b37d7e About a minute ago Running kube-controller-manager 3 9f980ac6fcab4 kube-controller-manager-functional-470148
4f0c8adf0dda6 cbb01a7bd410d 2 minutes ago Exited coredns 2 6847d5bfe08ce coredns-7db6d8ff4d-kvjbq
1f1124951798c 6e38f40d628db 2 minutes ago Exited storage-provisioner 2 6cd4ba5fbd18f storage-provisioner
16616cb9ce5d7 55bb025d2cfa5 2 minutes ago Exited kube-proxy 2 ba1224227c458 kube-proxy-xmv5n
7bdc8c688102e 3edc18e7b7672 2 minutes ago Exited kube-scheduler 2 e46ea15b50bcf kube-scheduler-functional-470148
a82fb1fec5521 3861cfcd7c04c 2 minutes ago Exited etcd 2 efdfc20ff005e etcd-functional-470148
4360cfb87e380 76932a3b37d7e 2 minutes ago Exited kube-controller-manager 2 d6db8459618c8 kube-controller-manager-functional-470148
==> coredns [4f0c8adf0dda] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
[INFO] 127.0.0.1:59924 - 10980 "HINFO IN 2879316814154866209.1178878182815758768. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.077865028s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> coredns [5eb2a7794bcb] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
[INFO] 127.0.0.1:55256 - 55467 "HINFO IN 5285818657214602478.7816295238262685673. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034383267s
==> describe nodes <==
Name: functional-470148
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-470148
kubernetes.io/os=linux
minikube.k8s.io/commit=1f2a4d2effced1c491a7cae8e84c3938ed24c7a7
minikube.k8s.io/name=functional-470148
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_08_12T10_30_03_0700
minikube.k8s.io/version=v1.33.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 12 Aug 2024 10:29:59 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-470148
AcquireTime: <unset>
RenewTime: Mon, 12 Aug 2024 10:33:34 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 12 Aug 2024 10:33:14 +0000 Mon, 12 Aug 2024 10:33:14 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 12 Aug 2024 10:33:14 +0000 Mon, 12 Aug 2024 10:33:14 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 12 Aug 2024 10:33:14 +0000 Mon, 12 Aug 2024 10:33:14 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 12 Aug 2024 10:33:14 +0000 Mon, 12 Aug 2024 10:33:14 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.217
Hostname: functional-470148
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912788Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912788Ki
pods: 110
System Info:
Machine ID: 05a003fadbaa4cf69bef382bbd2ca450
System UUID: 05a003fa-dbaa-4cf6-9bef-382bbd2ca450
Boot ID: 5034c9d8-6737-4bcc-8fd3-dcd824db6967
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://27.1.1
Kubelet Version: v1.30.3
Kube-Proxy Version: v1.30.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-7db6d8ff4d-kvjbq 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 3m18s
kube-system etcd-functional-470148 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 3m32s
kube-system kube-apiserver-functional-470148 250m (12%) 0 (0%) 0 (0%) 0 (0%) 23s
kube-system kube-controller-manager-functional-470148 200m (10%) 0 (0%) 0 (0%) 0 (0%) 3m32s
kube-system kube-proxy-xmv5n 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m18s
kube-system kube-scheduler-functional-470148 100m (5%) 0 (0%) 0 (0%) 0 (0%) 3m32s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m17s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 170Mi (4%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 80s kube-proxy
Normal Starting 2m8s kube-proxy
Normal Starting 3m16s kube-proxy
Normal Starting 3m33s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 3m32s kubelet Node functional-470148 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m32s kubelet Node functional-470148 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m32s kubelet Node functional-470148 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 3m32s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 3m28s kubelet Node functional-470148 status is now: NodeReady
Normal RegisteredNode 3m19s node-controller Node functional-470148 event: Registered Node functional-470148 in Controller
Normal NodeHasNoDiskPressure 2m15s (x8 over 2m15s) kubelet Node functional-470148 status is now: NodeHasNoDiskPressure
Normal Starting 2m15s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 2m15s (x8 over 2m15s) kubelet Node functional-470148 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 2m15s (x7 over 2m15s) kubelet Node functional-470148 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m15s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 117s node-controller Node functional-470148 event: Registered Node functional-470148 in Controller
Normal Starting 87s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 87s (x8 over 87s) kubelet Node functional-470148 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 87s (x8 over 87s) kubelet Node functional-470148 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 87s (x7 over 87s) kubelet Node functional-470148 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 87s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 69s node-controller Node functional-470148 event: Registered Node functional-470148 in Controller
Normal NodeNotReady 23s node-controller Node functional-470148 status is now: NodeNotReady
==> dmesg <==
[ +0.148912] systemd-fstab-generator[3936]: Ignoring "noauto" option for root device
[ +0.178291] systemd-fstab-generator[3951]: Ignoring "noauto" option for root device
[ +0.719594] systemd-fstab-generator[4151]: Ignoring "noauto" option for root device
[ +1.224171] kauditd_printk_skb: 179 callbacks suppressed
[ +2.167579] systemd-fstab-generator[5041]: Ignoring "noauto" option for root device
[ +5.552671] kauditd_printk_skb: 74 callbacks suppressed
[ +12.467621] kauditd_printk_skb: 31 callbacks suppressed
[ +2.320496] systemd-fstab-generator[5933]: Ignoring "noauto" option for root device
[ +11.245361] systemd-fstab-generator[6369]: Ignoring "noauto" option for root device
[ +0.108876] kauditd_printk_skb: 14 callbacks suppressed
[ +0.266839] systemd-fstab-generator[6402]: Ignoring "noauto" option for root device
[ +0.196707] systemd-fstab-generator[6414]: Ignoring "noauto" option for root device
[ +0.197869] systemd-fstab-generator[6429]: Ignoring "noauto" option for root device
[ +5.289117] kauditd_printk_skb: 89 callbacks suppressed
[Aug12 10:32] systemd-fstab-generator[7057]: Ignoring "noauto" option for root device
[ +0.140992] systemd-fstab-generator[7069]: Ignoring "noauto" option for root device
[ +0.139973] systemd-fstab-generator[7081]: Ignoring "noauto" option for root device
[ +0.163099] systemd-fstab-generator[7096]: Ignoring "noauto" option for root device
[ +0.571024] systemd-fstab-generator[7266]: Ignoring "noauto" option for root device
[ +1.784448] systemd-fstab-generator[7388]: Ignoring "noauto" option for root device
[ +0.082122] kauditd_printk_skb: 137 callbacks suppressed
[ +5.466512] kauditd_printk_skb: 52 callbacks suppressed
[ +12.725032] kauditd_printk_skb: 31 callbacks suppressed
[ +2.282896] systemd-fstab-generator[8427]: Ignoring "noauto" option for root device
[ +29.763210] kauditd_printk_skb: 16 callbacks suppressed
==> etcd [a25c22de2da6] <==
{"level":"info","ts":"2024-08-12T10:32:10.115241Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2024-08-12T10:32:10.115252Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2024-08-12T10:32:10.11576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd switched to configuration voters=(11573293933243462141)"}
{"level":"info","ts":"2024-08-12T10:32:10.118075Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8f39477865362797","local-member-id":"a09c9983ac28f1fd","added-peer-id":"a09c9983ac28f1fd","added-peer-peer-urls":["https://192.168.39.217:2380"]}
{"level":"info","ts":"2024-08-12T10:32:10.118368Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8f39477865362797","local-member-id":"a09c9983ac28f1fd","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-12T10:32:10.119502Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2024-08-12T10:32:10.125392Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"a09c9983ac28f1fd","initial-advertise-peer-urls":["https://192.168.39.217:2380"],"listen-peer-urls":["https://192.168.39.217:2380"],"advertise-client-urls":["https://192.168.39.217:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.217:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2024-08-12T10:32:10.125503Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2024-08-12T10:32:10.119639Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-12T10:32:10.119824Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.217:2380"}
{"level":"info","ts":"2024-08-12T10:32:10.13205Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.217:2380"}
{"level":"info","ts":"2024-08-12T10:32:11.645203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd is starting a new election at term 3"}
{"level":"info","ts":"2024-08-12T10:32:11.645396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd became pre-candidate at term 3"}
{"level":"info","ts":"2024-08-12T10:32:11.645468Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd received MsgPreVoteResp from a09c9983ac28f1fd at term 3"}
{"level":"info","ts":"2024-08-12T10:32:11.645551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd became candidate at term 4"}
{"level":"info","ts":"2024-08-12T10:32:11.645621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd received MsgVoteResp from a09c9983ac28f1fd at term 4"}
{"level":"info","ts":"2024-08-12T10:32:11.645724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd became leader at term 4"}
{"level":"info","ts":"2024-08-12T10:32:11.645761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a09c9983ac28f1fd elected leader a09c9983ac28f1fd at term 4"}
{"level":"info","ts":"2024-08-12T10:32:11.652142Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-08-12T10:32:11.652163Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a09c9983ac28f1fd","local-member-attributes":"{Name:functional-470148 ClientURLs:[https://192.168.39.217:2379]}","request-path":"/0/members/a09c9983ac28f1fd/attributes","cluster-id":"8f39477865362797","publish-timeout":"7s"}
{"level":"info","ts":"2024-08-12T10:32:11.652557Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-08-12T10:32:11.652839Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-08-12T10:32:11.652925Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-08-12T10:32:11.654638Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-08-12T10:32:11.654902Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.217:2379"}
==> etcd [a82fb1fec552] <==
{"level":"info","ts":"2024-08-12T10:31:21.673361Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.217:2380"}
{"level":"info","ts":"2024-08-12T10:31:23.250506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd is starting a new election at term 2"}
{"level":"info","ts":"2024-08-12T10:31:23.25141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd became pre-candidate at term 2"}
{"level":"info","ts":"2024-08-12T10:31:23.251633Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd received MsgPreVoteResp from a09c9983ac28f1fd at term 2"}
{"level":"info","ts":"2024-08-12T10:31:23.251721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd became candidate at term 3"}
{"level":"info","ts":"2024-08-12T10:31:23.251825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd received MsgVoteResp from a09c9983ac28f1fd at term 3"}
{"level":"info","ts":"2024-08-12T10:31:23.251945Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd became leader at term 3"}
{"level":"info","ts":"2024-08-12T10:31:23.252088Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a09c9983ac28f1fd elected leader a09c9983ac28f1fd at term 3"}
{"level":"info","ts":"2024-08-12T10:31:23.258886Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-08-12T10:31:23.25884Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a09c9983ac28f1fd","local-member-attributes":"{Name:functional-470148 ClientURLs:[https://192.168.39.217:2379]}","request-path":"/0/members/a09c9983ac28f1fd/attributes","cluster-id":"8f39477865362797","publish-timeout":"7s"}
{"level":"info","ts":"2024-08-12T10:31:23.259804Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-08-12T10:31:23.260299Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-08-12T10:31:23.260446Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-08-12T10:31:23.26125Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.217:2379"}
{"level":"info","ts":"2024-08-12T10:31:23.2624Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-08-12T10:31:52.477946Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2024-08-12T10:31:52.478089Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-470148","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.217:2380"],"advertise-client-urls":["https://192.168.39.217:2379"]}
{"level":"warn","ts":"2024-08-12T10:31:52.478265Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2024-08-12T10:31:52.478384Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2024-08-12T10:31:52.517139Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.217:2379: use of closed network connection"}
{"level":"warn","ts":"2024-08-12T10:31:52.517322Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.217:2379: use of closed network connection"}
{"level":"info","ts":"2024-08-12T10:31:52.517381Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"a09c9983ac28f1fd","current-leader-member-id":"a09c9983ac28f1fd"}
{"level":"info","ts":"2024-08-12T10:31:52.520745Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.217:2380"}
{"level":"info","ts":"2024-08-12T10:31:52.520959Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.217:2380"}
{"level":"info","ts":"2024-08-12T10:31:52.520985Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-470148","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.217:2380"],"advertise-client-urls":["https://192.168.39.217:2379"]}
==> kernel <==
10:33:35 up 4 min, 0 users, load average: 1.23, 0.82, 0.34
Linux functional-470148 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [c8647e19fcd0] <==
I0812 10:33:12.448726 1 naming_controller.go:291] Starting NamingConditionController
I0812 10:33:12.448758 1 establishing_controller.go:76] Starting EstablishingController
I0812 10:33:12.448886 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0812 10:33:12.448922 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0812 10:33:12.449052 1 crd_finalizer.go:266] Starting CRDFinalizer
I0812 10:33:12.527824 1 shared_informer.go:320] Caches are synced for node_authorizer
I0812 10:33:12.529068 1 shared_informer.go:320] Caches are synced for configmaps
I0812 10:33:12.529445 1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
I0812 10:33:12.531717 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0812 10:33:12.535330 1 shared_informer.go:320] Caches are synced for crd-autoregister
I0812 10:33:12.536943 1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
I0812 10:33:12.536985 1 policy_source.go:224] refreshing policies
I0812 10:33:12.537071 1 aggregator.go:165] initial CRD sync complete...
I0812 10:33:12.537098 1 autoregister_controller.go:141] Starting autoregister controller
I0812 10:33:12.537107 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0812 10:33:12.537112 1 cache.go:39] Caches are synced for autoregister controller
I0812 10:33:12.581679 1 handler_discovery.go:447] Starting ResourceDiscoveryManager
I0812 10:33:12.584434 1 apf_controller.go:379] Running API Priority and Fairness config worker
I0812 10:33:12.584466 1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
I0812 10:33:12.585259 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0812 10:33:12.587531 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0812 10:33:13.438412 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
W0812 10:33:13.722238 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217]
I0812 10:33:13.723984 1 controller.go:615] quota admission added evaluator for: endpoints
I0812 10:33:13.730807 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
==> kube-controller-manager [4360cfb87e38] <==
I0812 10:31:37.884957 1 shared_informer.go:320] Caches are synced for certificate-csrapproving
I0812 10:31:37.908092 1 shared_informer.go:320] Caches are synced for bootstrap_signer
I0812 10:31:37.909951 1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
I0812 10:31:37.913150 1 shared_informer.go:320] Caches are synced for daemon sets
I0812 10:31:37.915919 1 shared_informer.go:320] Caches are synced for ReplicationController
I0812 10:31:37.916255 1 shared_informer.go:320] Caches are synced for crt configmap
I0812 10:31:37.920717 1 shared_informer.go:320] Caches are synced for namespace
I0812 10:31:37.941105 1 shared_informer.go:320] Caches are synced for service account
I0812 10:31:37.978721 1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
I0812 10:31:38.000787 1 shared_informer.go:320] Caches are synced for endpoint
I0812 10:31:38.013617 1 shared_informer.go:320] Caches are synced for taint
I0812 10:31:38.014781 1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
I0812 10:31:38.015182 1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-470148"
I0812 10:31:38.015408 1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
I0812 10:31:38.053047 1 shared_informer.go:320] Caches are synced for expand
I0812 10:31:38.065554 1 shared_informer.go:320] Caches are synced for persistent volume
I0812 10:31:38.088413 1 shared_informer.go:320] Caches are synced for ephemeral
I0812 10:31:38.089813 1 shared_informer.go:320] Caches are synced for attach detach
I0812 10:31:38.096383 1 shared_informer.go:320] Caches are synced for PVC protection
I0812 10:31:38.103703 1 shared_informer.go:320] Caches are synced for stateful set
I0812 10:31:38.122802 1 shared_informer.go:320] Caches are synced for resource quota
I0812 10:31:38.123142 1 shared_informer.go:320] Caches are synced for resource quota
I0812 10:31:38.517414 1 shared_informer.go:320] Caches are synced for garbage collector
I0812 10:31:38.517721 1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
I0812 10:31:38.523977 1 shared_informer.go:320] Caches are synced for garbage collector
==> kube-controller-manager [b9857f8f48fd] <==
I0812 10:32:26.277064 1 shared_informer.go:320] Caches are synced for deployment
I0812 10:32:26.279713 1 shared_informer.go:320] Caches are synced for taint-eviction-controller
I0812 10:32:26.287155 1 shared_informer.go:320] Caches are synced for endpoint_slice
I0812 10:32:26.301176 1 shared_informer.go:320] Caches are synced for ephemeral
I0812 10:32:26.303571 1 shared_informer.go:320] Caches are synced for HPA
I0812 10:32:26.316214 1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
I0812 10:32:26.370026 1 shared_informer.go:320] Caches are synced for attach detach
I0812 10:32:26.458666 1 shared_informer.go:320] Caches are synced for resource quota
I0812 10:32:26.482194 1 shared_informer.go:320] Caches are synced for resource quota
I0812 10:32:26.483355 1 shared_informer.go:320] Caches are synced for disruption
I0812 10:32:26.900942 1 shared_informer.go:320] Caches are synced for garbage collector
I0812 10:32:26.901107 1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
I0812 10:32:26.910649 1 shared_informer.go:320] Caches are synced for garbage collector
E0812 10:32:56.484343 1 resource_quota_controller.go:440] failed to discover resources: Get "https://192.168.39.217:8441/api": dial tcp 192.168.39.217:8441: connect: connection refused
I0812 10:32:56.912339 1 garbagecollector.go:828] "failed to discover preferred resources" logger="garbage-collector-controller" error="Get \"https://192.168.39.217:8441/api\": dial tcp 192.168.39.217:8441: connect: connection refused"
E0812 10:33:06.272291 1 node_lifecycle_controller.go:973] "Error updating node" err="Put \"https://192.168.39.217:8441/api/v1/nodes/functional-470148/status\": dial tcp 192.168.39.217:8441: connect: connection refused" logger="node-lifecycle-controller" node="functional-470148"
E0812 10:33:06.273225 1 node_lifecycle_controller.go:715] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="functional-470148"
E0812 10:33:06.273268 1 node_lifecycle_controller.go:720] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="Get \"https://192.168.39.217:8441/api/v1/nodes/functional-470148\": dial tcp 192.168.39.217:8441: connect: connection refused" logger="node-lifecycle-controller" node=""
I0812 10:33:11.274125 1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
E0812 10:33:12.529970 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ControllerRevision: unknown (get controllerrevisions.apps)
I0812 10:33:12.705921 1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/kube-apiserver-functional-470148" err="Operation cannot be fulfilled on pods \"kube-apiserver-functional-470148\": StorageError: invalid object, Code: 4, Key: /registry/pods/kube-system/kube-apiserver-functional-470148, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: c5774f60-aeeb-42e8-b996-40a18d4353a5, UID in object meta: 8366a459-a799-48ee-a137-2a3b7cab1261"
E0812 10:33:12.706209 1 node_lifecycle_controller.go:753] unable to mark all pods NotReady on node functional-470148: Operation cannot be fulfilled on pods "kube-apiserver-functional-470148": StorageError: invalid object, Code: 4, Key: /registry/pods/kube-system/kube-apiserver-functional-470148, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: c5774f60-aeeb-42e8-b996-40a18d4353a5, UID in object meta: 8366a459-a799-48ee-a137-2a3b7cab1261; queuing for retry
I0812 10:33:12.706427 1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
E0812 10:33:17.713602 1 node_lifecycle_controller.go:973] "Error updating node" err="Operation cannot be fulfilled on nodes \"functional-470148\": the object has been modified; please apply your changes to the latest version and try again" logger="node-lifecycle-controller" node="functional-470148"
I0812 10:33:17.737569 1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
==> kube-proxy [16616cb9ce5d] <==
I0812 10:31:26.471691 1 server_linux.go:69] "Using iptables proxy"
I0812 10:31:26.498389 1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.217"]
I0812 10:31:26.535834 1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
I0812 10:31:26.535875 1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0812 10:31:26.535897 1 server_linux.go:165] "Using iptables Proxier"
I0812 10:31:26.538479 1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0812 10:31:26.538946 1 server.go:872] "Version info" version="v1.30.3"
I0812 10:31:26.539177 1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0812 10:31:26.540496 1 config.go:192] "Starting service config controller"
I0812 10:31:26.540729 1 shared_informer.go:313] Waiting for caches to sync for service config
I0812 10:31:26.540871 1 config.go:101] "Starting endpoint slice config controller"
I0812 10:31:26.540937 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0812 10:31:26.541576 1 config.go:319] "Starting node config controller"
I0812 10:31:26.543101 1 shared_informer.go:313] Waiting for caches to sync for node config
I0812 10:31:26.641840 1 shared_informer.go:320] Caches are synced for service config
I0812 10:31:26.641986 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0812 10:31:26.644149 1 shared_informer.go:320] Caches are synced for node config
==> kube-proxy [c4a5c937b5ec] <==
I0812 10:32:14.425507 1 server_linux.go:69] "Using iptables proxy"
I0812 10:32:14.464625 1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.217"]
I0812 10:32:14.520413 1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
I0812 10:32:14.520452 1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0812 10:32:14.520471 1 server_linux.go:165] "Using iptables Proxier"
I0812 10:32:14.524502 1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0812 10:32:14.524922 1 server.go:872] "Version info" version="v1.30.3"
I0812 10:32:14.525253 1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0812 10:32:14.526732 1 config.go:192] "Starting service config controller"
I0812 10:32:14.527469 1 shared_informer.go:313] Waiting for caches to sync for service config
I0812 10:32:14.527674 1 config.go:101] "Starting endpoint slice config controller"
I0812 10:32:14.527789 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0812 10:32:14.528684 1 config.go:319] "Starting node config controller"
I0812 10:32:14.528909 1 shared_informer.go:313] Waiting for caches to sync for node config
I0812 10:32:14.629606 1 shared_informer.go:320] Caches are synced for node config
I0812 10:32:14.629878 1 shared_informer.go:320] Caches are synced for service config
I0812 10:32:14.629909 1 shared_informer.go:320] Caches are synced for endpoint slice config
==> kube-scheduler [7bdc8c688102] <==
I0812 10:31:22.448095 1 serving.go:380] Generated self-signed cert in-memory
W0812 10:31:24.657670 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0812 10:31:24.658098 1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0812 10:31:24.658248 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
W0812 10:31:24.658289 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0812 10:31:24.728787 1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
I0812 10:31:24.729030 1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0812 10:31:24.733239 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I0812 10:31:24.733566 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0812 10:31:24.733686 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0812 10:31:24.733758 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0812 10:31:24.833931 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0812 10:31:52.526862 1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
I0812 10:31:52.527635 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
E0812 10:31:52.527976 1 run.go:74] "command failed" err="finished without leader elect"
==> kube-scheduler [b869b0d288ea] <==
I0812 10:32:10.828703 1 serving.go:380] Generated self-signed cert in-memory
W0812 10:32:13.134377 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0812 10:32:13.134596 1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0812 10:32:13.134625 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
W0812 10:32:13.134851 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0812 10:32:13.208602 1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
I0812 10:32:13.210031 1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0812 10:32:13.212485 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I0812 10:32:13.214537 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0812 10:32:13.214828 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0812 10:32:13.215046 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0812 10:32:13.315960 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Aug 12 10:32:58 functional-470148 kubelet[7395]: I0812 10:32:58.854630 7395 scope.go:117] "RemoveContainer" containerID="b318d7a1a7227ed13fc664e68963eefc7fea3540d3cab5cbf8a1c775b881c1b5"
Aug 12 10:32:58 functional-470148 kubelet[7395]: E0812 10:32:58.855589 7395 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: b318d7a1a7227ed13fc664e68963eefc7fea3540d3cab5cbf8a1c775b881c1b5" containerID="b318d7a1a7227ed13fc664e68963eefc7fea3540d3cab5cbf8a1c775b881c1b5"
Aug 12 10:32:58 functional-470148 kubelet[7395]: I0812 10:32:58.855626 7395 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"b318d7a1a7227ed13fc664e68963eefc7fea3540d3cab5cbf8a1c775b881c1b5"} err="failed to get container status \"b318d7a1a7227ed13fc664e68963eefc7fea3540d3cab5cbf8a1c775b881c1b5\": rpc error: code = Unknown desc = Error response from daemon: No such container: b318d7a1a7227ed13fc664e68963eefc7fea3540d3cab5cbf8a1c775b881c1b5"
Aug 12 10:33:00 functional-470148 kubelet[7395]: E0812 10:33:00.250281 7395 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-470148?timeout=10s\": dial tcp 192.168.39.217:8441: connect: connection refused" interval="7s"
Aug 12 10:33:00 functional-470148 kubelet[7395]: I0812 10:33:00.376145 7395 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e30d8b817ecaa1cdd5cb7a5d22f1dcb" path="/var/lib/kubelet/pods/2e30d8b817ecaa1cdd5cb7a5d22f1dcb/volumes"
Aug 12 10:33:04 functional-470148 kubelet[7395]: E0812 10:33:04.445953 7395 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-470148\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-470148?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.217:8441: connect: connection refused"
Aug 12 10:33:04 functional-470148 kubelet[7395]: E0812 10:33:04.447161 7395 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-470148\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-470148?timeout=10s\": dial tcp 192.168.39.217:8441: connect: connection refused"
Aug 12 10:33:04 functional-470148 kubelet[7395]: E0812 10:33:04.447658 7395 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-470148\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-470148?timeout=10s\": dial tcp 192.168.39.217:8441: connect: connection refused"
Aug 12 10:33:04 functional-470148 kubelet[7395]: E0812 10:33:04.448227 7395 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-470148\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-470148?timeout=10s\": dial tcp 192.168.39.217:8441: connect: connection refused"
Aug 12 10:33:04 functional-470148 kubelet[7395]: E0812 10:33:04.448919 7395 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-470148\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-470148?timeout=10s\": dial tcp 192.168.39.217:8441: connect: connection refused"
Aug 12 10:33:04 functional-470148 kubelet[7395]: E0812 10:33:04.448987 7395 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
Aug 12 10:33:07 functional-470148 kubelet[7395]: E0812 10:33:07.251922 7395 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-470148?timeout=10s\": dial tcp 192.168.39.217:8441: connect: connection refused" interval="7s"
Aug 12 10:33:08 functional-470148 kubelet[7395]: I0812 10:33:08.371903 7395 status_manager.go:853] "Failed to get status for pod" podUID="407ce3b9e60bdbc54f8a7242fded82cc" pod="kube-system/kube-scheduler-functional-470148" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-470148\": dial tcp 192.168.39.217:8441: connect: connection refused"
Aug 12 10:33:08 functional-470148 kubelet[7395]: E0812 10:33:08.405149 7395 iptables.go:577] "Could not set up iptables canary" err=<
Aug 12 10:33:08 functional-470148 kubelet[7395]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Aug 12 10:33:08 functional-470148 kubelet[7395]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Aug 12 10:33:08 functional-470148 kubelet[7395]: Perhaps ip6tables or your kernel needs to be upgraded.
Aug 12 10:33:08 functional-470148 kubelet[7395]: > table="nat" chain="KUBE-KUBELET-CANARY"
Aug 12 10:33:08 functional-470148 kubelet[7395]: E0812 10:33:08.683229 7395 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 192.168.39.217:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-470148.17eaf499af0c5733 kube-system 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-470148,UID:2e30d8b817ecaa1cdd5cb7a5d22f1dcb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.39.217:8441/readyz\": dial tcp 192.168.39.217:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-470148,},FirstTimestamp:2024-08-12 10:32:28.326631219 +0000 UTC m=+20.180387632,LastTimestamp:2024-08-12 10:32:28.326631219 +0000 UTC m=+20.180387632,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-470148,}"
Aug 12 10:33:10 functional-470148 kubelet[7395]: I0812 10:33:10.372903 7395 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-470148" podUID="c5774f60-aeeb-42e8-b996-40a18d4353a5"
Aug 12 10:33:10 functional-470148 kubelet[7395]: E0812 10:33:10.374133 7395 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-470148\": dial tcp 192.168.39.217:8441: connect: connection refused" pod="kube-system/kube-apiserver-functional-470148"
Aug 12 10:33:10 functional-470148 kubelet[7395]: I0812 10:33:10.376736 7395 status_manager.go:853] "Failed to get status for pod" podUID="407ce3b9e60bdbc54f8a7242fded82cc" pod="kube-system/kube-scheduler-functional-470148" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-470148\": dial tcp 192.168.39.217:8441: connect: connection refused"
Aug 12 10:33:10 functional-470148 kubelet[7395]: I0812 10:33:10.937474 7395 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-470148" podUID="c5774f60-aeeb-42e8-b996-40a18d4353a5"
Aug 12 10:33:12 functional-470148 kubelet[7395]: I0812 10:33:12.641374 7395 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-470148"
Aug 12 10:33:12 functional-470148 kubelet[7395]: I0812 10:33:12.951497 7395 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-470148" podUID="c5774f60-aeeb-42e8-b996-40a18d4353a5"
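
[editor's note] The kubelet entries above all fail with "connect: connection refused" against https://control-plane.minikube.internal:8441 while the kube-apiserver pod is replaced, and the buffered Unhealthy event shows the readiness probe hitting https://192.168.39.217:8441/readyz. A minimal Go sketch, not part of the test suite, that polls the same endpoint until it answers; the address is copied from the log, and InsecureSkipVerify is an assumption tolerable only for a throwaway probe against a local test VM.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Address copied from the kubelet readiness-probe event above.
	const readyz = "https://192.168.39.217:8441/readyz"

	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Assumption: skip cert verification for a local probe only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(readyz)
		if err != nil {
			// While the apiserver container is down this prints the same
			// "connect: connection refused" the kubelet logs report.
			fmt.Println("not reachable:", err)
		} else {
			fmt.Println("readyz:", resp.StatusCode)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return // apiserver is serving again
			}
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for", readyz)
}

Run during the same window the kubelet reports, this should print connection-refused errors until roughly 10:33, then a 200 once the restarted apiserver starts serving. [/editor's note]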
==> storage-provisioner [1f1124951798] <==
I0812 10:31:26.338246 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0812 10:31:26.373735 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0812 10:31:26.373810 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0812 10:31:43.788067 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0812 10:31:43.788567 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-470148_26e9557e-bfbe-420d-986e-c75191364b7c!
I0812 10:31:43.789649 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f202fda3-e383-4a89-984d-ba1a4b34f369", APIVersion:"v1", ResourceVersion:"538", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-470148_26e9557e-bfbe-420d-986e-c75191364b7c became leader
I0812 10:31:43.890713 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-470148_26e9557e-bfbe-420d-986e-c75191364b7c!
==> storage-provisioner [bf47ec590592] <==
I0812 10:32:14.334686 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0812 10:32:14.391035 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0812 10:32:14.391149 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
E0812 10:32:28.765377 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0812 10:32:31.786216 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0812 10:32:35.435977 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0812 10:32:37.595288 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0812 10:32:39.971773 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0812 10:32:42.205318 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0812 10:32:44.928360 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0812 10:32:48.166155 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0812 10:32:52.120382 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0812 10:32:54.635341 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0812 10:32:57.550134 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0812 10:33:00.314445 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0812 10:33:03.440854 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0812 10:33:06.121760 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0812 10:33:08.826465 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0812 10:33:12.457455 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0812 10:33:14.983893 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0812 10:33:17.472807 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
I0812 10:33:20.354806 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0812 10:33:20.355467 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f202fda3-e383-4a89-984d-ba1a4b34f369", APIVersion:"v1", ResourceVersion:"663", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-470148_da0c0822-cdac-4a26-80c6-e53e53138a39 became leader
I0812 10:33:20.355586 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-470148_da0c0822-cdac-4a26-80c6-e53e53138a39!
I0812 10:33:20.456214 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-470148_da0c0822-cdac-4a26-80c6-e53e53138a39!
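
[editor's note] The long run of leaderelection.go:325 errors above is client-go's leader-election retry loop: the provisioner keeps trying to read its resource lock (an Endpoints object, per the LeaderElection event) while the apiserver at 10.96.0.1:443 is unreachable, then acquires the lease at 10:33:20 once it returns. A minimal sketch of the same loop, assuming a kubeconfig in $KUBECONFIG; the lock name and namespace are taken from the log, the sketch substitutes the newer Lease lock for the provisioner's Endpoints lock, and "demo-holder" is a made-up identity.

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lock name and namespace copied from the log. The real provisioner
	// locks an Endpoints object and derives its identity from hostname
	// plus a UUID (e.g. functional-470148_da0c0822-...); both choices
	// here are illustrative assumptions.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath",
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "demo-holder"},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		// Each failed attempt while the apiserver is down surfaces as a
		// leaderelection.go error and is retried after this period.
		RetryPeriod: 2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("acquired lease kube-system/k8s.io-minikube-hostpath")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease")
			},
		},
	})
}

Stopping the apiserver while this runs should reproduce the retry pattern above: each failed acquisition is logged by client-go and retried every RetryPeriod until the lock can be read again. [/editor's note]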
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-470148 -n functional-470148
helpers_test.go:261: (dbg) Run: kubectl --context functional-470148 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/ComponentHealth FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/ComponentHealth (1.87s)