=== RUN TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run: kubectl --context functional-377836 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:829: kube-apiserver is not Ready: {Phase:Running Conditions:[{Type:PodReadyToStartContainers Status:True} {Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.39.219 PodIP:192.168.39.219 StartTime:2024-07-03 22:57:48 +0000 UTC ContainerStatuses:[{Name:kube-apiserver State:{Waiting:<nil> Running:0xc001f0f068 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:registry.k8s.io/kube-apiserver:v1.30.2 ImageID:docker-pullable://registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d ContainerID:docker://0790dd5ddc5ea977a68ed1752c2402bd2edd431104d0d2889326b8b61e057862}]}
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
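Summary: functional_test.go:829 fails the run because the kube-apiserver pod is in phase Running but its Ready (and ContainersReady) condition is still False at the moment of the check, likely because the apiserver is still coming back up after the 22:56 restart that added the NamespaceAutoProvision admission plugin. A minimal way to repeat the same inspection by hand, assuming the functional-377836 context is still available (the jsonpath expression below is illustrative, not part of the test code):

  kubectl --context functional-377836 get po -l tier=control-plane -n kube-system -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'

Re-running the command a short while later (or polling with kubectl wait --for=condition=Ready on the same selector) shows whether the apiserver pod eventually reports Ready or stays stuck.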
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p functional-377836 -n functional-377836
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p functional-377836 logs -n 25
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| unpause | nospam-147129 --log_dir | nospam-147129 | jenkins | v1.33.1 | 03 Jul 24 22:53 UTC | 03 Jul 24 22:53 UTC |
| | /tmp/nospam-147129 unpause | | | | | |
| unpause | nospam-147129 --log_dir | nospam-147129 | jenkins | v1.33.1 | 03 Jul 24 22:53 UTC | 03 Jul 24 22:53 UTC |
| | /tmp/nospam-147129 unpause | | | | | |
| unpause | nospam-147129 --log_dir | nospam-147129 | jenkins | v1.33.1 | 03 Jul 24 22:53 UTC | 03 Jul 24 22:53 UTC |
| | /tmp/nospam-147129 unpause | | | | | |
| stop | nospam-147129 --log_dir | nospam-147129 | jenkins | v1.33.1 | 03 Jul 24 22:53 UTC | 03 Jul 24 22:53 UTC |
| | /tmp/nospam-147129 stop | | | | | |
| stop | nospam-147129 --log_dir | nospam-147129 | jenkins | v1.33.1 | 03 Jul 24 22:53 UTC | 03 Jul 24 22:53 UTC |
| | /tmp/nospam-147129 stop | | | | | |
| stop | nospam-147129 --log_dir | nospam-147129 | jenkins | v1.33.1 | 03 Jul 24 22:53 UTC | 03 Jul 24 22:53 UTC |
| | /tmp/nospam-147129 stop | | | | | |
| delete | -p nospam-147129 | nospam-147129 | jenkins | v1.33.1 | 03 Jul 24 22:53 UTC | 03 Jul 24 22:53 UTC |
| start | -p functional-377836 | functional-377836 | jenkins | v1.33.1 | 03 Jul 24 22:53 UTC | 03 Jul 24 22:55 UTC |
| | --memory=4000 | | | | | |
| | --apiserver-port=8441 | | | | | |
| | --wait=all --driver=kvm2 | | | | | |
| start | -p functional-377836 | functional-377836 | jenkins | v1.33.1 | 03 Jul 24 22:55 UTC | 03 Jul 24 22:55 UTC |
| | --alsologtostderr -v=8 | | | | | |
| cache | functional-377836 cache add | functional-377836 | jenkins | v1.33.1 | 03 Jul 24 22:55 UTC | 03 Jul 24 22:55 UTC |
| | registry.k8s.io/pause:3.1 | | | | | |
| cache | functional-377836 cache add | functional-377836 | jenkins | v1.33.1 | 03 Jul 24 22:55 UTC | 03 Jul 24 22:55 UTC |
| | registry.k8s.io/pause:3.3 | | | | | |
| cache | functional-377836 cache add | functional-377836 | jenkins | v1.33.1 | 03 Jul 24 22:55 UTC | 03 Jul 24 22:55 UTC |
| | registry.k8s.io/pause:latest | | | | | |
| cache | functional-377836 cache add | functional-377836 | jenkins | v1.33.1 | 03 Jul 24 22:55 UTC | 03 Jul 24 22:55 UTC |
| | minikube-local-cache-test:functional-377836 | | | | | |
| cache | functional-377836 cache delete | functional-377836 | jenkins | v1.33.1 | 03 Jul 24 22:55 UTC | 03 Jul 24 22:55 UTC |
| | minikube-local-cache-test:functional-377836 | | | | | |
| cache | delete | minikube | jenkins | v1.33.1 | 03 Jul 24 22:55 UTC | 03 Jul 24 22:55 UTC |
| | registry.k8s.io/pause:3.3 | | | | | |
| cache | list | minikube | jenkins | v1.33.1 | 03 Jul 24 22:55 UTC | 03 Jul 24 22:55 UTC |
| ssh | functional-377836 ssh sudo | functional-377836 | jenkins | v1.33.1 | 03 Jul 24 22:55 UTC | 03 Jul 24 22:55 UTC |
| | crictl images | | | | | |
| ssh | functional-377836 | functional-377836 | jenkins | v1.33.1 | 03 Jul 24 22:55 UTC | 03 Jul 24 22:55 UTC |
| | ssh sudo docker rmi | | | | | |
| | registry.k8s.io/pause:latest | | | | | |
| ssh | functional-377836 ssh | functional-377836 | jenkins | v1.33.1 | 03 Jul 24 22:55 UTC | |
| | sudo crictl inspecti | | | | | |
| | registry.k8s.io/pause:latest | | | | | |
| cache | functional-377836 cache reload | functional-377836 | jenkins | v1.33.1 | 03 Jul 24 22:55 UTC | 03 Jul 24 22:55 UTC |
| ssh | functional-377836 ssh | functional-377836 | jenkins | v1.33.1 | 03 Jul 24 22:56 UTC | 03 Jul 24 22:56 UTC |
| | sudo crictl inspecti | | | | | |
| | registry.k8s.io/pause:latest | | | | | |
| cache | delete | minikube | jenkins | v1.33.1 | 03 Jul 24 22:56 UTC | 03 Jul 24 22:56 UTC |
| | registry.k8s.io/pause:3.1 | | | | | |
| cache | delete | minikube | jenkins | v1.33.1 | 03 Jul 24 22:56 UTC | 03 Jul 24 22:56 UTC |
| | registry.k8s.io/pause:latest | | | | | |
| kubectl | functional-377836 kubectl -- | functional-377836 | jenkins | v1.33.1 | 03 Jul 24 22:56 UTC | 03 Jul 24 22:56 UTC |
| | --context functional-377836 | | | | | |
| | get pods | | | | | |
| start | -p functional-377836 | functional-377836 | jenkins | v1.33.1 | 03 Jul 24 22:56 UTC | 03 Jul 24 22:57 UTC |
| | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision | | | | | |
| | --wait=all | | | | | |
|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/07/03 22:56:00
Running on machine: ubuntu-20-agent-3
Binary: Built with gc go1.22.4 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0703 22:56:00.510702 22400 out.go:291] Setting OutFile to fd 1 ...
I0703 22:56:00.510928 22400 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 22:56:00.510931 22400 out.go:304] Setting ErrFile to fd 2...
I0703 22:56:00.510934 22400 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 22:56:00.511089 22400 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9391/.minikube/bin
I0703 22:56:00.511579 22400 out.go:298] Setting JSON to false
I0703 22:56:00.512393 22400 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2305,"bootTime":1720045055,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0703 22:56:00.512467 22400 start.go:139] virtualization: kvm guest
I0703 22:56:00.514487 22400 out.go:177] * [functional-377836] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
I0703 22:56:00.515747 22400 out.go:177] - MINIKUBE_LOCATION=18998
I0703 22:56:00.515754 22400 notify.go:220] Checking for updates...
I0703 22:56:00.518152 22400 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0703 22:56:00.519330 22400 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/18998-9391/kubeconfig
I0703 22:56:00.520495 22400 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9391/.minikube
I0703 22:56:00.521611 22400 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0703 22:56:00.522783 22400 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0703 22:56:00.524220 22400 config.go:182] Loaded profile config "functional-377836": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0703 22:56:00.524282 22400 driver.go:392] Setting default libvirt URI to qemu:///system
I0703 22:56:00.524703 22400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0703 22:56:00.524750 22400 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 22:56:00.539191 22400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41463
I0703 22:56:00.539530 22400 main.go:141] libmachine: () Calling .GetVersion
I0703 22:56:00.540031 22400 main.go:141] libmachine: Using API Version 1
I0703 22:56:00.540044 22400 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 22:56:00.540405 22400 main.go:141] libmachine: () Calling .GetMachineName
I0703 22:56:00.540561 22400 main.go:141] libmachine: (functional-377836) Calling .DriverName
I0703 22:56:00.570317 22400 out.go:177] * Using the kvm2 driver based on existing profile
I0703 22:56:00.571392 22400 start.go:297] selected driver: kvm2
I0703 22:56:00.571398 22400 start.go:901] validating driver "kvm2" against &{Name:functional-377836 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-377836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.219 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0703 22:56:00.571491 22400 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0703 22:56:00.571790 22400 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0703 22:56:00.571837 22400 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0703 22:56:00.585798 22400 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
I0703 22:56:00.586484 22400 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0703 22:56:00.586534 22400 cni.go:84] Creating CNI manager for ""
I0703 22:56:00.586545 22400 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0703 22:56:00.586593 22400 start.go:340] cluster config:
{Name:functional-377836 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-377836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.219 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0703 22:56:00.586682 22400 iso.go:125] acquiring lock: {Name:mke39b31a4a84d7efedf67d51c801ff7cd79d25d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0703 22:56:00.588567 22400 out.go:177] * Starting "functional-377836" primary control-plane node in "functional-377836" cluster
I0703 22:56:00.589544 22400 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0703 22:56:00.589568 22400 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
I0703 22:56:00.589573 22400 cache.go:56] Caching tarball of preloaded images
I0703 22:56:00.589645 22400 preload.go:173] Found /home/jenkins/minikube-integration/18998-9391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0703 22:56:00.589650 22400 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
I0703 22:56:00.589724 22400 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/config.json ...
I0703 22:56:00.589898 22400 start.go:360] acquireMachinesLock for functional-377836: {Name:mk0c7b3619f676bfb46d9cc345dd57d32a1f7d69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0703 22:56:00.589935 22400 start.go:364] duration metric: took 27.079µs to acquireMachinesLock for "functional-377836"
I0703 22:56:00.589944 22400 start.go:96] Skipping create...Using existing machine configuration
I0703 22:56:00.589951 22400 fix.go:54] fixHost starting:
I0703 22:56:00.590201 22400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0703 22:56:00.590231 22400 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 22:56:00.603145 22400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
I0703 22:56:00.603508 22400 main.go:141] libmachine: () Calling .GetVersion
I0703 22:56:00.603939 22400 main.go:141] libmachine: Using API Version 1
I0703 22:56:00.603953 22400 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 22:56:00.604209 22400 main.go:141] libmachine: () Calling .GetMachineName
I0703 22:56:00.604345 22400 main.go:141] libmachine: (functional-377836) Calling .DriverName
I0703 22:56:00.604472 22400 main.go:141] libmachine: (functional-377836) Calling .GetState
I0703 22:56:00.605840 22400 fix.go:112] recreateIfNeeded on functional-377836: state=Running err=<nil>
W0703 22:56:00.605853 22400 fix.go:138] unexpected machine state, will restart: <nil>
I0703 22:56:00.607164 22400 out.go:177] * Updating the running kvm2 "functional-377836" VM ...
I0703 22:56:00.608157 22400 machine.go:94] provisionDockerMachine start ...
I0703 22:56:00.608166 22400 main.go:141] libmachine: (functional-377836) Calling .DriverName
I0703 22:56:00.608319 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
I0703 22:56:00.610313 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:00.610589 22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
I0703 22:56:00.610610 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:00.610791 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
I0703 22:56:00.610920 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
I0703 22:56:00.611041 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
I0703 22:56:00.611149 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
I0703 22:56:00.611250 22400 main.go:141] libmachine: Using SSH client type: native
I0703 22:56:00.611406 22400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.219 22 <nil> <nil>}
I0703 22:56:00.611411 22400 main.go:141] libmachine: About to run SSH command:
hostname
I0703 22:56:00.717240 22400 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-377836
I0703 22:56:00.717255 22400 main.go:141] libmachine: (functional-377836) Calling .GetMachineName
I0703 22:56:00.717481 22400 buildroot.go:166] provisioning hostname "functional-377836"
I0703 22:56:00.717496 22400 main.go:141] libmachine: (functional-377836) Calling .GetMachineName
I0703 22:56:00.717647 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
I0703 22:56:00.720132 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:00.720444 22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
I0703 22:56:00.720462 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:00.720574 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
I0703 22:56:00.720745 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
I0703 22:56:00.720859 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
I0703 22:56:00.720987 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
I0703 22:56:00.721117 22400 main.go:141] libmachine: Using SSH client type: native
I0703 22:56:00.721291 22400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.219 22 <nil> <nil>}
I0703 22:56:00.721300 22400 main.go:141] libmachine: About to run SSH command:
sudo hostname functional-377836 && echo "functional-377836" | sudo tee /etc/hostname
I0703 22:56:00.840292 22400 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-377836
I0703 22:56:00.840330 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
I0703 22:56:00.842697 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:00.843009 22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
I0703 22:56:00.843023 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:00.843185 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
I0703 22:56:00.843343 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
I0703 22:56:00.843459 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
I0703 22:56:00.843617 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
I0703 22:56:00.843724 22400 main.go:141] libmachine: Using SSH client type: native
I0703 22:56:00.843870 22400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.219 22 <nil> <nil>}
I0703 22:56:00.843880 22400 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sfunctional-377836' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-377836/g' /etc/hosts;
else
echo '127.0.1.1 functional-377836' | sudo tee -a /etc/hosts;
fi
fi
I0703 22:56:00.949561 22400 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0703 22:56:00.949576 22400 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9391/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9391/.minikube}
I0703 22:56:00.949600 22400 buildroot.go:174] setting up certificates
I0703 22:56:00.949607 22400 provision.go:84] configureAuth start
I0703 22:56:00.949614 22400 main.go:141] libmachine: (functional-377836) Calling .GetMachineName
I0703 22:56:00.949829 22400 main.go:141] libmachine: (functional-377836) Calling .GetIP
I0703 22:56:00.952036 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:00.952422 22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
I0703 22:56:00.952458 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:00.952488 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
I0703 22:56:00.954553 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:00.954814 22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
I0703 22:56:00.954838 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:00.954966 22400 provision.go:143] copyHostCerts
I0703 22:56:00.955013 22400 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9391/.minikube/ca.pem, removing ...
I0703 22:56:00.955019 22400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9391/.minikube/ca.pem
I0703 22:56:00.955091 22400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9391/.minikube/ca.pem (1082 bytes)
I0703 22:56:00.955191 22400 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9391/.minikube/cert.pem, removing ...
I0703 22:56:00.955196 22400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9391/.minikube/cert.pem
I0703 22:56:00.955232 22400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9391/.minikube/cert.pem (1123 bytes)
I0703 22:56:00.955295 22400 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9391/.minikube/key.pem, removing ...
I0703 22:56:00.955300 22400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9391/.minikube/key.pem
I0703 22:56:00.955325 22400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9391/.minikube/key.pem (1675 bytes)
I0703 22:56:00.955380 22400 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9391/.minikube/certs/ca-key.pem org=jenkins.functional-377836 san=[127.0.0.1 192.168.39.219 functional-377836 localhost minikube]
I0703 22:56:01.131586 22400 provision.go:177] copyRemoteCerts
I0703 22:56:01.131631 22400 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0703 22:56:01.131655 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
I0703 22:56:01.134435 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:01.134767 22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
I0703 22:56:01.134786 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:01.134948 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
I0703 22:56:01.135121 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
I0703 22:56:01.135284 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
I0703 22:56:01.135412 22400 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9391/.minikube/machines/functional-377836/id_rsa Username:docker}
I0703 22:56:01.215216 22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0703 22:56:01.240412 22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0703 22:56:01.265081 22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0703 22:56:01.288832 22400 provision.go:87] duration metric: took 339.215018ms to configureAuth
I0703 22:56:01.288850 22400 buildroot.go:189] setting minikube options for container-runtime
I0703 22:56:01.289059 22400 config.go:182] Loaded profile config "functional-377836": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0703 22:56:01.289075 22400 main.go:141] libmachine: (functional-377836) Calling .DriverName
I0703 22:56:01.289337 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
I0703 22:56:01.291471 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:01.291798 22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
I0703 22:56:01.291827 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:01.291910 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
I0703 22:56:01.292093 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
I0703 22:56:01.292242 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
I0703 22:56:01.292387 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
I0703 22:56:01.292529 22400 main.go:141] libmachine: Using SSH client type: native
I0703 22:56:01.292665 22400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.219 22 <nil> <nil>}
I0703 22:56:01.292670 22400 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0703 22:56:01.398770 22400 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0703 22:56:01.398780 22400 buildroot.go:70] root file system type: tmpfs
I0703 22:56:01.398881 22400 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0703 22:56:01.398897 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
I0703 22:56:01.401565 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:01.401882 22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
I0703 22:56:01.401916 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:01.402064 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
I0703 22:56:01.402196 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
I0703 22:56:01.402338 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
I0703 22:56:01.402405 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
I0703 22:56:01.402497 22400 main.go:141] libmachine: Using SSH client type: native
I0703 22:56:01.402677 22400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.219 22 <nil> <nil>}
I0703 22:56:01.402731 22400 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0703 22:56:01.535248 22400 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0703 22:56:01.535278 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
I0703 22:56:01.537572 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:01.537901 22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
I0703 22:56:01.537915 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:01.538056 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
I0703 22:56:01.538198 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
I0703 22:56:01.538343 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
I0703 22:56:01.538482 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
I0703 22:56:01.538602 22400 main.go:141] libmachine: Using SSH client type: native
I0703 22:56:01.538814 22400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.219 22 <nil> <nil>}
I0703 22:56:01.538829 22400 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0703 22:56:01.647612 22400 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0703 22:56:01.647634 22400 machine.go:97] duration metric: took 1.039471957s to provisionDockerMachine
I0703 22:56:01.647642 22400 start.go:293] postStartSetup for "functional-377836" (driver="kvm2")
I0703 22:56:01.647649 22400 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0703 22:56:01.647661 22400 main.go:141] libmachine: (functional-377836) Calling .DriverName
I0703 22:56:01.647931 22400 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0703 22:56:01.647947 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
I0703 22:56:01.650716 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:01.651035 22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
I0703 22:56:01.651056 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:01.651191 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
I0703 22:56:01.651382 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
I0703 22:56:01.651516 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
I0703 22:56:01.651648 22400 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9391/.minikube/machines/functional-377836/id_rsa Username:docker}
I0703 22:56:01.738442 22400 ssh_runner.go:195] Run: cat /etc/os-release
I0703 22:56:01.743220 22400 info.go:137] Remote host: Buildroot 2023.02.9
I0703 22:56:01.743233 22400 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9391/.minikube/addons for local assets ...
I0703 22:56:01.743297 22400 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9391/.minikube/files for local assets ...
I0703 22:56:01.743357 22400 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9391/.minikube/files/etc/ssl/certs/166762.pem -> 166762.pem in /etc/ssl/certs
I0703 22:56:01.743417 22400 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9391/.minikube/files/etc/test/nested/copy/16676/hosts -> hosts in /etc/test/nested/copy/16676
I0703 22:56:01.743445 22400 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/16676
I0703 22:56:01.754934 22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/files/etc/ssl/certs/166762.pem --> /etc/ssl/certs/166762.pem (1708 bytes)
I0703 22:56:01.783656 22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/files/etc/test/nested/copy/16676/hosts --> /etc/test/nested/copy/16676/hosts (40 bytes)
I0703 22:56:01.813227 22400 start.go:296] duration metric: took 165.576258ms for postStartSetup
I0703 22:56:01.813249 22400 fix.go:56] duration metric: took 1.223301149s for fixHost
I0703 22:56:01.813264 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
I0703 22:56:01.816280 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:01.816637 22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
I0703 22:56:01.816660 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:01.816808 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
I0703 22:56:01.816965 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
I0703 22:56:01.817113 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
I0703 22:56:01.817251 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
I0703 22:56:01.817388 22400 main.go:141] libmachine: Using SSH client type: native
I0703 22:56:01.817534 22400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.219 22 <nil> <nil>}
I0703 22:56:01.817539 22400 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0703 22:56:01.921633 22400 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720047361.898144190
I0703 22:56:01.921649 22400 fix.go:216] guest clock: 1720047361.898144190
I0703 22:56:01.921657 22400 fix.go:229] Guest: 2024-07-03 22:56:01.89814419 +0000 UTC Remote: 2024-07-03 22:56:01.813250822 +0000 UTC m=+1.336205740 (delta=84.893368ms)
I0703 22:56:01.921693 22400 fix.go:200] guest clock delta is within tolerance: 84.893368ms
I0703 22:56:01.921699 22400 start.go:83] releasing machines lock for "functional-377836", held for 1.331758498s
I0703 22:56:01.921725 22400 main.go:141] libmachine: (functional-377836) Calling .DriverName
I0703 22:56:01.921996 22400 main.go:141] libmachine: (functional-377836) Calling .GetIP
I0703 22:56:01.924305 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:01.924629 22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
I0703 22:56:01.924644 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:01.924760 22400 main.go:141] libmachine: (functional-377836) Calling .DriverName
I0703 22:56:01.925216 22400 main.go:141] libmachine: (functional-377836) Calling .DriverName
I0703 22:56:01.925391 22400 main.go:141] libmachine: (functional-377836) Calling .DriverName
I0703 22:56:01.925471 22400 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0703 22:56:01.925520 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
I0703 22:56:01.925578 22400 ssh_runner.go:195] Run: cat /version.json
I0703 22:56:01.925593 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
I0703 22:56:01.927832 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:01.928115 22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
I0703 22:56:01.928143 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:01.928218 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:01.928259 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
I0703 22:56:01.928426 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
I0703 22:56:01.928561 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
I0703 22:56:01.928614 22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
I0703 22:56:01.928630 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:01.928673 22400 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9391/.minikube/machines/functional-377836/id_rsa Username:docker}
I0703 22:56:01.928804 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
I0703 22:56:01.928949 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
I0703 22:56:01.929092 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
I0703 22:56:01.929247 22400 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9391/.minikube/machines/functional-377836/id_rsa Username:docker}
I0703 22:56:02.025188 22400 ssh_runner.go:195] Run: systemctl --version
I0703 22:56:02.031157 22400 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0703 22:56:02.037051 22400 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0703 22:56:02.037091 22400 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0703 22:56:02.046401 22400 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0703 22:56:02.046415 22400 start.go:494] detecting cgroup driver to use...
I0703 22:56:02.046513 22400 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0703 22:56:02.065422 22400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0703 22:56:02.076869 22400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0703 22:56:02.087527 22400 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0703 22:56:02.087568 22400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0703 22:56:02.103875 22400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0703 22:56:02.113888 22400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0703 22:56:02.123993 22400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0703 22:56:02.134193 22400 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0703 22:56:02.145197 22400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0703 22:56:02.155637 22400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0703 22:56:02.166050 22400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0703 22:56:02.176160 22400 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0703 22:56:02.185582 22400 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0703 22:56:02.195001 22400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0703 22:56:02.377559 22400 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0703 22:56:02.403790 22400 start.go:494] detecting cgroup driver to use...
I0703 22:56:02.403849 22400 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0703 22:56:02.421212 22400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0703 22:56:02.436479 22400 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0703 22:56:02.459419 22400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0703 22:56:02.474687 22400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0703 22:56:02.487208 22400 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0703 22:56:02.505209 22400 ssh_runner.go:195] Run: which cri-dockerd
I0703 22:56:02.508898 22400 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0703 22:56:02.517695 22400 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0703 22:56:02.533976 22400 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0703 22:56:02.690896 22400 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0703 22:56:02.855888 22400 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0703 22:56:02.855988 22400 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0703 22:56:02.873313 22400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0703 22:56:03.029389 22400 ssh_runner.go:195] Run: sudo systemctl restart docker
I0703 22:56:15.723723 22400 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.694306107s)
I0703 22:56:15.723786 22400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0703 22:56:15.740703 22400 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
I0703 22:56:15.764390 22400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0703 22:56:15.777109 22400 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0703 22:56:15.894738 22400 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0703 22:56:16.026121 22400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0703 22:56:16.159765 22400 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0703 22:56:16.176948 22400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0703 22:56:16.189646 22400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0703 22:56:16.307121 22400 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0703 22:56:16.411260 22400 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0703 22:56:16.411322 22400 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0703 22:56:16.417973 22400 start.go:562] Will wait 60s for crictl version
I0703 22:56:16.418002 22400 ssh_runner.go:195] Run: which crictl
I0703 22:56:16.423655 22400 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0703 22:56:16.459234 22400 start.go:578] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.0.3
RuntimeApiVersion: v1
I0703 22:56:16.459290 22400 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0703 22:56:16.480430 22400 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0703 22:56:16.502918 22400 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
I0703 22:56:16.502958 22400 main.go:141] libmachine: (functional-377836) Calling .GetIP
I0703 22:56:16.505637 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:16.505995 22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
I0703 22:56:16.506011 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:16.506178 22400 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0703 22:56:16.511629 22400 out.go:177] - apiserver.enable-admission-plugins=NamespaceAutoProvision
I0703 22:56:16.512955 22400 kubeadm.go:877] updating cluster {Name:functional-377836 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:functional-377836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.219 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false M
ountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0703 22:56:16.513046 22400 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0703 22:56:16.513085 22400 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0703 22:56:16.530936 22400 docker.go:685] Got preloaded images: -- stdout --
minikube-local-cache-test:functional-377836
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/pause:latest
-- /stdout --
I0703 22:56:16.530943 22400 docker.go:615] Images already preloaded, skipping extraction
I0703 22:56:16.530975 22400 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0703 22:56:16.548145 22400 docker.go:685] Got preloaded images: -- stdout --
minikube-local-cache-test:functional-377836
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/pause:latest
-- /stdout --
I0703 22:56:16.548152 22400 cache_images.go:84] Images are preloaded, skipping loading
I0703 22:56:16.548158 22400 kubeadm.go:928] updating node { 192.168.39.219 8441 v1.30.2 docker true true} ...
I0703 22:56:16.548246 22400 kubeadm.go:940] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-377836 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.219
[Install]
config:
{KubernetesVersion:v1.30.2 ClusterName:functional-377836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0703 22:56:16.548284 22400 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0703 22:56:16.573639 22400 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
I0703 22:56:16.573694 22400 cni.go:84] Creating CNI manager for ""
I0703 22:56:16.573707 22400 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0703 22:56:16.573714 22400 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0703 22:56:16.573730 22400 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.219 APIServerPort:8441 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-377836 NodeName:functional-377836 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.219"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.219 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfi
gOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0703 22:56:16.573853 22400 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.219
  bindPort: 8441
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "functional-377836"
  kubeletExtraArgs:
    node-ip: 192.168.39.219
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.219"]
  extraArgs:
    enable-admission-plugins: "NamespaceAutoProvision"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8441
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.30.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0703 22:56:16.573890 22400 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
I0703 22:56:16.583317 22400 binaries.go:44] Found k8s binaries, skipping transfer
I0703 22:56:16.583359 22400 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0703 22:56:16.592495 22400 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
I0703 22:56:16.608332 22400 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0703 22:56:16.624483 22400 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2015 bytes)
I0703 22:56:16.639987 22400 ssh_runner.go:195] Run: grep 192.168.39.219 control-plane.minikube.internal$ /etc/hosts
I0703 22:56:16.643649 22400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0703 22:56:16.779958 22400 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0703 22:56:16.836127 22400 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836 for IP: 192.168.39.219
I0703 22:56:16.836138 22400 certs.go:194] generating shared ca certs ...
I0703 22:56:16.836158 22400 certs.go:226] acquiring lock for ca certs: {Name:mkf6614f3bbac218620dd9f7f5d0832f57cc4a9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0703 22:56:16.836311 22400 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9391/.minikube/ca.key
I0703 22:56:16.836344 22400 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9391/.minikube/proxy-client-ca.key
I0703 22:56:16.836349 22400 certs.go:256] generating profile certs ...
I0703 22:56:16.836445 22400 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.key
I0703 22:56:16.836499 22400 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/apiserver.key.656cd1b8
I0703 22:56:16.836545 22400 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/proxy-client.key
I0703 22:56:16.836649 22400 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9391/.minikube/certs/16676.pem (1338 bytes)
W0703 22:56:16.836670 22400 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9391/.minikube/certs/16676_empty.pem, impossibly tiny 0 bytes
I0703 22:56:16.836676 22400 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9391/.minikube/certs/ca-key.pem (1679 bytes)
I0703 22:56:16.836696 22400 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9391/.minikube/certs/ca.pem (1082 bytes)
I0703 22:56:16.836712 22400 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9391/.minikube/certs/cert.pem (1123 bytes)
I0703 22:56:16.836728 22400 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9391/.minikube/certs/key.pem (1675 bytes)
I0703 22:56:16.836757 22400 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9391/.minikube/files/etc/ssl/certs/166762.pem (1708 bytes)
I0703 22:56:16.837361 22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0703 22:56:16.911461 22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0703 22:56:16.983431 22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0703 22:56:17.038715 22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0703 22:56:17.082551 22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I0703 22:56:17.144530 22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0703 22:56:17.189293 22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0703 22:56:17.229606 22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0703 22:56:17.263630 22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/certs/16676.pem --> /usr/share/ca-certificates/16676.pem (1338 bytes)
I0703 22:56:17.325075 22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/files/etc/ssl/certs/166762.pem --> /usr/share/ca-certificates/166762.pem (1708 bytes)
I0703 22:56:17.365727 22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0703 22:56:17.400421 22400 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0703 22:56:17.425695 22400 ssh_runner.go:195] Run: openssl version
I0703 22:56:17.432255 22400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16676.pem && ln -fs /usr/share/ca-certificates/16676.pem /etc/ssl/certs/16676.pem"
I0703 22:56:17.446984 22400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16676.pem
I0703 22:56:17.452267 22400 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 3 22:53 /usr/share/ca-certificates/16676.pem
I0703 22:56:17.452312 22400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16676.pem
I0703 22:56:17.462306 22400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16676.pem /etc/ssl/certs/51391683.0"
I0703 22:56:17.485079 22400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166762.pem && ln -fs /usr/share/ca-certificates/166762.pem /etc/ssl/certs/166762.pem"
I0703 22:56:17.510278 22400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166762.pem
I0703 22:56:17.519901 22400 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 3 22:53 /usr/share/ca-certificates/166762.pem
I0703 22:56:17.519934 22400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166762.pem
I0703 22:56:17.525937 22400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166762.pem /etc/ssl/certs/3ec20f2e.0"
I0703 22:56:17.544227 22400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0703 22:56:17.562056 22400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0703 22:56:17.568131 22400 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 3 22:47 /usr/share/ca-certificates/minikubeCA.pem
I0703 22:56:17.568157 22400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0703 22:56:17.584913 22400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0703 22:56:17.610103 22400 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0703 22:56:17.620833 22400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0703 22:56:17.629374 22400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0703 22:56:17.654991 22400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0703 22:56:17.672985 22400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0703 22:56:17.694093 22400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0703 22:56:17.702460 22400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
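For context, each of the six openssl invocations above fails if the certificate expires within 86400 seconds (24 hours). A minimal Go equivalent of one such check, using the first certificate path probed in the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
	// report whether the certificate expires within the next 24 hours.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h")
	} else {
		fmt.Println("certificate valid beyond 24h")
	}
}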
I0703 22:56:17.709716 22400 kubeadm.go:391] StartCluster: {Name:functional-377836 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:functional-377836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.219 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false Moun
tString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0703 22:56:17.709856 22400 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0703 22:56:17.728429 22400 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
W0703 22:56:17.741501 22400 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
I0703 22:56:17.741512 22400 kubeadm.go:407] found existing configuration files, will attempt cluster restart
I0703 22:56:17.741517 22400 kubeadm.go:587] restartPrimaryControlPlane start ...
I0703 22:56:17.741561 22400 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0703 22:56:17.761221 22400 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0703 22:56:17.761720 22400 kubeconfig.go:125] found "functional-377836" server: "https://192.168.39.219:8441"
I0703 22:56:17.762775 22400 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0703 22:56:17.776283 22400 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml
+++ /var/tmp/minikube/kubeadm.yaml.new
@@ -22,7 +22,7 @@
 apiServer:
   certSANs: ["127.0.0.1", "localhost", "192.168.39.219"]
   extraArgs:
-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
+    enable-admission-plugins: "NamespaceAutoProvision"
 controllerManager:
   extraArgs:
     allocate-node-cidrs: "true"
-- /stdout --
I0703 22:56:17.776289 22400 kubeadm.go:1154] stopping kube-system containers ...
I0703 22:56:17.776326 22400 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0703 22:56:17.839868 22400 docker.go:483] Stopping containers: [71c0b16f3679 b406ab73e5d2 d73ef8b96e2c f2953d08dacd fb0a93e6301f 757aac77b242 d19de58b969d 393afc286228 3720e138f218 d1d00023893a 1bf7856e6cc2 c307e26931c5 3180d83316a4 f8797e419a2e 5e029f1d16e3 81f88c33387f 35b39ff49ecf 59d9adb16464 f9cd97d184ab ede261f839ee 043bf1536424 f5f25bace2d8 28ad2448e774 d1dc51ed1398 c02af4c53647 40145ee83aa4 800da21bd3bc a169fa02b113 08541cc36205 a58b3f662ce2]
I0703 22:56:17.839942 22400 ssh_runner.go:195] Run: docker stop 71c0b16f3679 b406ab73e5d2 d73ef8b96e2c f2953d08dacd fb0a93e6301f 757aac77b242 d19de58b969d 393afc286228 3720e138f218 d1d00023893a 1bf7856e6cc2 c307e26931c5 3180d83316a4 f8797e419a2e 5e029f1d16e3 81f88c33387f 35b39ff49ecf 59d9adb16464 f9cd97d184ab ede261f839ee 043bf1536424 f5f25bace2d8 28ad2448e774 d1dc51ed1398 c02af4c53647 40145ee83aa4 800da21bd3bc a169fa02b113 08541cc36205 a58b3f662ce2
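A minimal Go sketch (not minikube's implementation) of the same two-step teardown shown above: list every container whose name matches the kubeadm k8s_*_(kube-system)_ pattern, then stop them all in one docker stop call:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List all containers (running or not) named like kube-system pod containers.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Println("no kube-system containers found")
		return
	}
	// Stop them all with a single docker stop invocation, as in the log.
	args := append([]string{"stop"}, ids...)
	if err := exec.Command("docker", args...).Run(); err != nil {
		panic(err)
	}
	fmt.Printf("stopped %d containers\n", len(ids))
}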
I0703 22:56:18.447345 22400 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0703 22:56:18.491875 22400 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0703 22:56:18.502739 22400 kubeadm.go:156] found existing configuration files:
-rw------- 1 root root 5647 Jul 3 22:54 /etc/kubernetes/admin.conf
-rw------- 1 root root 5658 Jul 3 22:55 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2007 Jul 3 22:54 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5606 Jul 3 22:55 /etc/kubernetes/scheduler.conf
I0703 22:56:18.502786 22400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I0703 22:56:18.513009 22400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I0703 22:56:18.522810 22400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I0703 22:56:18.533077 22400 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0703 22:56:18.533105 22400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0703 22:56:18.546260 22400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I0703 22:56:18.555250 22400 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0703 22:56:18.555284 22400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0703 22:56:18.566055 22400 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0703 22:56:18.579386 22400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0703 22:56:18.633477 22400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0703 22:56:19.554513 22400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0703 22:56:19.760922 22400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0703 22:56:19.850105 22400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0703 22:56:19.976563 22400 api_server.go:52] waiting for apiserver process to appear ...
I0703 22:56:19.976641 22400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0703 22:56:20.477503 22400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0703 22:56:20.977473 22400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0703 22:56:21.477069 22400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0703 22:56:21.491782 22400 api_server.go:72] duration metric: took 1.515222902s to wait for apiserver process to appear ...
I0703 22:56:21.491794 22400 api_server.go:88] waiting for apiserver healthz status ...
I0703 22:56:21.491809 22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
I0703 22:56:24.202043 22400 api_server.go:279] https://192.168.39.219:8441/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0703 22:56:24.202062 22400 api_server.go:103] status: https://192.168.39.219:8441/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0703 22:56:24.202073 22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
I0703 22:56:24.223359 22400 api_server.go:279] https://192.168.39.219:8441/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0703 22:56:24.223376 22400 api_server.go:103] status: https://192.168.39.219:8441/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0703 22:56:24.492737 22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
I0703 22:56:24.500880 22400 api_server.go:279] https://192.168.39.219:8441/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0703 22:56:24.500900 22400 api_server.go:103] status: https://192.168.39.219:8441/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0703 22:56:24.992493 22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
I0703 22:56:25.001533 22400 api_server.go:279] https://192.168.39.219:8441/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0703 22:56:25.001546 22400 api_server.go:103] status: https://192.168.39.219:8441/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0703 22:56:25.492111 22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
I0703 22:56:25.515234 22400 api_server.go:279] https://192.168.39.219:8441/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0703 22:56:25.515249 22400 api_server.go:103] status: https://192.168.39.219:8441/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0703 22:56:25.992563 22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
I0703 22:56:25.997115 22400 api_server.go:279] https://192.168.39.219:8441/healthz returned 200:
ok
I0703 22:56:26.009758 22400 api_server.go:141] control plane version: v1.30.2
I0703 22:56:26.009773 22400 api_server.go:131] duration metric: took 4.51797488s to wait for apiserver health ...
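The healthz sequence above is a plain HTTPS probe: unauthenticated requests are rejected as system:anonymous (403), the endpoint then reports 500 with a per-check breakdown until the rbac/bootstrap-roles and scheduling post-start hooks finish, and finally returns 200. A minimal Go sketch of such a probe (certificate verification is skipped purely for illustration; this is not minikube's own client):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Probe the apiserver health endpoint seen in the log. While the server is
	// not yet healthy it returns the [+]/[-] per-check breakdown in the body.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
	}}
	resp, err := client.Get("https://192.168.39.219:8441/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode)
	fmt.Println(string(body))
}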
I0703 22:56:26.009780 22400 cni.go:84] Creating CNI manager for ""
I0703 22:56:26.009789 22400 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0703 22:56:26.011581 22400 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0703 22:56:26.012905 22400 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0703 22:56:26.029812 22400 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0703 22:56:26.053790 22400 system_pods.go:43] waiting for kube-system pods to appear ...
I0703 22:56:26.062249 22400 system_pods.go:59] 7 kube-system pods found
I0703 22:56:26.062264 22400 system_pods.go:61] "coredns-7db6d8ff4d-4w94w" [f3801bb6-4310-419e-81d4-867823def4ec] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0703 22:56:26.062273 22400 system_pods.go:61] "etcd-functional-377836" [9e11e64f-9978-4fe0-8346-cb2c9a913b63] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0703 22:56:26.062279 22400 system_pods.go:61] "kube-apiserver-functional-377836" [80bc54ed-3e0b-40c2-9e36-5889e4c30b1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0703 22:56:26.062284 22400 system_pods.go:61] "kube-controller-manager-functional-377836" [1d054f92-1573-4ab2-94a9-7e0c7336adbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0703 22:56:26.062287 22400 system_pods.go:61] "kube-proxy-pgfqk" [55d3c679-a05f-4dad-bd04-ab0e0b51d0b1] Running
I0703 22:56:26.062290 22400 system_pods.go:61] "kube-scheduler-functional-377836" [ebd41990-b874-4ee4-a670-21c271b39c4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0703 22:56:26.062293 22400 system_pods.go:61] "storage-provisioner" [041fa0c0-0c71-426a-bffc-b59b57c3b224] Running
I0703 22:56:26.062297 22400 system_pods.go:74] duration metric: took 8.496972ms to wait for pod list to return data ...
I0703 22:56:26.062302 22400 node_conditions.go:102] verifying NodePressure condition ...
I0703 22:56:26.065106 22400 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0703 22:56:26.065125 22400 node_conditions.go:123] node cpu capacity is 2
I0703 22:56:26.065135 22400 node_conditions.go:105] duration metric: took 2.828996ms to run NodePressure ...
I0703 22:56:26.065151 22400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0703 22:56:26.362018 22400 kubeadm.go:718] waiting for restarted kubelet to initialise ...
I0703 22:56:26.373607 22400 kubeadm.go:733] kubelet initialised
I0703 22:56:26.373615 22400 kubeadm.go:734] duration metric: took 11.58403ms waiting for restarted kubelet to initialise ...
I0703 22:56:26.373621 22400 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0703 22:56:26.380076 22400 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-4w94w" in "kube-system" namespace to be "Ready" ...
I0703 22:56:28.385303 22400 pod_ready.go:102] pod "coredns-7db6d8ff4d-4w94w" in "kube-system" namespace has status "Ready":"False"
I0703 22:56:30.386552 22400 pod_ready.go:102] pod "coredns-7db6d8ff4d-4w94w" in "kube-system" namespace has status "Ready":"False"
I0703 22:56:32.885744 22400 pod_ready.go:102] pod "coredns-7db6d8ff4d-4w94w" in "kube-system" namespace has status "Ready":"False"
I0703 22:56:33.387117 22400 pod_ready.go:92] pod "coredns-7db6d8ff4d-4w94w" in "kube-system" namespace has status "Ready":"True"
I0703 22:56:33.387126 22400 pod_ready.go:81] duration metric: took 7.007040826s for pod "coredns-7db6d8ff4d-4w94w" in "kube-system" namespace to be "Ready" ...
I0703 22:56:33.387133 22400 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-377836" in "kube-system" namespace to be "Ready" ...
I0703 22:56:34.893301 22400 pod_ready.go:92] pod "etcd-functional-377836" in "kube-system" namespace has status "Ready":"True"
I0703 22:56:34.893312 22400 pod_ready.go:81] duration metric: took 1.506172571s for pod "etcd-functional-377836" in "kube-system" namespace to be "Ready" ...
I0703 22:56:34.893319 22400 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-377836" in "kube-system" namespace to be "Ready" ...
I0703 22:56:36.899460 22400 pod_ready.go:102] pod "kube-apiserver-functional-377836" in "kube-system" namespace has status "Ready":"False"
I0703 22:56:37.899596 22400 pod_ready.go:92] pod "kube-apiserver-functional-377836" in "kube-system" namespace has status "Ready":"True"
I0703 22:56:37.899609 22400 pod_ready.go:81] duration metric: took 3.006283902s for pod "kube-apiserver-functional-377836" in "kube-system" namespace to be "Ready" ...
I0703 22:56:37.899620 22400 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-377836" in "kube-system" namespace to be "Ready" ...
I0703 22:56:38.406186 22400 pod_ready.go:92] pod "kube-controller-manager-functional-377836" in "kube-system" namespace has status "Ready":"True"
I0703 22:56:38.406198 22400 pod_ready.go:81] duration metric: took 506.571414ms for pod "kube-controller-manager-functional-377836" in "kube-system" namespace to be "Ready" ...
I0703 22:56:38.406205 22400 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pgfqk" in "kube-system" namespace to be "Ready" ...
I0703 22:56:38.410515 22400 pod_ready.go:92] pod "kube-proxy-pgfqk" in "kube-system" namespace has status "Ready":"True"
I0703 22:56:38.410523 22400 pod_ready.go:81] duration metric: took 4.313563ms for pod "kube-proxy-pgfqk" in "kube-system" namespace to be "Ready" ...
I0703 22:56:38.410529 22400 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-377836" in "kube-system" namespace to be "Ready" ...
I0703 22:56:38.414284 22400 pod_ready.go:92] pod "kube-scheduler-functional-377836" in "kube-system" namespace has status "Ready":"True"
I0703 22:56:38.414291 22400 pod_ready.go:81] duration metric: took 3.757908ms for pod "kube-scheduler-functional-377836" in "kube-system" namespace to be "Ready" ...
I0703 22:56:38.414298 22400 pod_ready.go:38] duration metric: took 12.04067037s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
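Each pod_ready.go wait above amounts to polling the pod's Ready condition. A minimal client-go sketch of a single such check; the kubeconfig path is an illustrative assumption, the pod name is taken from the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig location; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
		"kube-apiserver-functional-377836", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// A pod is "Ready" when its PodReady condition has status True.
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			fmt.Printf("Ready=%s\n", c.Status)
		}
	}
}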
I0703 22:56:38.414311 22400 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0703 22:56:38.426236 22400 ops.go:34] apiserver oom_adj: -16
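The oom_adj probe is a direct read of procfs; -16 means the kernel is strongly discouraged from OOM-killing the apiserver. A small Go sketch of the same lookup:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors `cat /proc/$(pgrep kube-apiserver)/oom_adj`: resolve the newest
	// exactly-matching kube-apiserver PID, then read its OOM adjustment value.
	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(out))
	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.TrimSpace(string(data)))
}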
I0703 22:56:38.426243 22400 kubeadm.go:591] duration metric: took 20.684721892s to restartPrimaryControlPlane
I0703 22:56:38.426249 22400 kubeadm.go:393] duration metric: took 20.716542121s to StartCluster
I0703 22:56:38.426267 22400 settings.go:142] acquiring lock: {Name:mka057d561020f5940ef3b848cb3bd46bcf2236f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0703 22:56:38.426329 22400 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/18998-9391/kubeconfig
I0703 22:56:38.427008 22400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9391/kubeconfig: {Name:mk507e40fb0c0700be4af5efbc43c2602bfaff5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0703 22:56:38.427262 22400 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.219 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0703 22:56:38.427310 22400 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0703 22:56:38.427373 22400 addons.go:69] Setting storage-provisioner=true in profile "functional-377836"
I0703 22:56:38.427400 22400 addons.go:234] Setting addon storage-provisioner=true in "functional-377836"
W0703 22:56:38.427406 22400 addons.go:243] addon storage-provisioner should already be in state true
I0703 22:56:38.427404 22400 config.go:182] Loaded profile config "functional-377836": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0703 22:56:38.427411 22400 addons.go:69] Setting default-storageclass=true in profile "functional-377836"
I0703 22:56:38.427432 22400 host.go:66] Checking if "functional-377836" exists ...
I0703 22:56:38.427442 22400 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-377836"
I0703 22:56:38.427696 22400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0703 22:56:38.427716 22400 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 22:56:38.427774 22400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0703 22:56:38.427805 22400 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 22:56:38.429062 22400 out.go:177] * Verifying Kubernetes components...
I0703 22:56:38.430289 22400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0703 22:56:38.442081 22400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35731
I0703 22:56:38.442456 22400 main.go:141] libmachine: () Calling .GetVersion
I0703 22:56:38.442941 22400 main.go:141] libmachine: Using API Version 1
I0703 22:56:38.442957 22400 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 22:56:38.443083 22400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43101
I0703 22:56:38.443308 22400 main.go:141] libmachine: () Calling .GetMachineName
I0703 22:56:38.443412 22400 main.go:141] libmachine: () Calling .GetVersion
I0703 22:56:38.443791 22400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0703 22:56:38.443823 22400 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 22:56:38.443861 22400 main.go:141] libmachine: Using API Version 1
I0703 22:56:38.443875 22400 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 22:56:38.444182 22400 main.go:141] libmachine: () Calling .GetMachineName
I0703 22:56:38.444354 22400 main.go:141] libmachine: (functional-377836) Calling .GetState
I0703 22:56:38.446988 22400 addons.go:234] Setting addon default-storageclass=true in "functional-377836"
W0703 22:56:38.446998 22400 addons.go:243] addon default-storageclass should already be in state true
I0703 22:56:38.447023 22400 host.go:66] Checking if "functional-377836" exists ...
I0703 22:56:38.447366 22400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0703 22:56:38.447403 22400 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 22:56:38.458158 22400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36479
I0703 22:56:38.458472 22400 main.go:141] libmachine: () Calling .GetVersion
I0703 22:56:38.458933 22400 main.go:141] libmachine: Using API Version 1
I0703 22:56:38.458949 22400 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 22:56:38.459232 22400 main.go:141] libmachine: () Calling .GetMachineName
I0703 22:56:38.459403 22400 main.go:141] libmachine: (functional-377836) Calling .GetState
I0703 22:56:38.460775 22400 main.go:141] libmachine: (functional-377836) Calling .DriverName
I0703 22:56:38.462605 22400 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0703 22:56:38.463958 22400 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0703 22:56:38.463968 22400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0703 22:56:38.463983 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
I0703 22:56:38.465217 22400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42569
I0703 22:56:38.465639 22400 main.go:141] libmachine: () Calling .GetVersion
I0703 22:56:38.466093 22400 main.go:141] libmachine: Using API Version 1
I0703 22:56:38.466110 22400 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 22:56:38.466372 22400 main.go:141] libmachine: () Calling .GetMachineName
I0703 22:56:38.466613 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:38.466913 22400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0703 22:56:38.466939 22400 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 22:56:38.466979 22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
I0703 22:56:38.466999 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:38.467134 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
I0703 22:56:38.467287 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
I0703 22:56:38.467425 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
I0703 22:56:38.467558 22400 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9391/.minikube/machines/functional-377836/id_rsa Username:docker}
I0703 22:56:38.481017 22400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37535
I0703 22:56:38.481392 22400 main.go:141] libmachine: () Calling .GetVersion
I0703 22:56:38.481794 22400 main.go:141] libmachine: Using API Version 1
I0703 22:56:38.481801 22400 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 22:56:38.482100 22400 main.go:141] libmachine: () Calling .GetMachineName
I0703 22:56:38.482261 22400 main.go:141] libmachine: (functional-377836) Calling .GetState
I0703 22:56:38.483741 22400 main.go:141] libmachine: (functional-377836) Calling .DriverName
I0703 22:56:38.483918 22400 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0703 22:56:38.483925 22400 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0703 22:56:38.483935 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
I0703 22:56:38.486462 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:38.486889 22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
I0703 22:56:38.486913 22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:56:38.487015 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
I0703 22:56:38.487168 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
I0703 22:56:38.487292 22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
I0703 22:56:38.487458 22400 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9391/.minikube/machines/functional-377836/id_rsa Username:docker}
I0703 22:56:38.623636 22400 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0703 22:56:38.638008 22400 node_ready.go:35] waiting up to 6m0s for node "functional-377836" to be "Ready" ...
I0703 22:56:38.640695 22400 node_ready.go:49] node "functional-377836" has status "Ready":"True"
I0703 22:56:38.640707 22400 node_ready.go:38] duration metric: took 2.678119ms for node "functional-377836" to be "Ready" ...
I0703 22:56:38.640716 22400 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0703 22:56:38.645601 22400 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4w94w" in "kube-system" namespace to be "Ready" ...
I0703 22:56:38.696961 22400 pod_ready.go:92] pod "coredns-7db6d8ff4d-4w94w" in "kube-system" namespace has status "Ready":"True"
I0703 22:56:38.696975 22400 pod_ready.go:81] duration metric: took 51.363862ms for pod "coredns-7db6d8ff4d-4w94w" in "kube-system" namespace to be "Ready" ...
I0703 22:56:38.697000 22400 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-377836" in "kube-system" namespace to be "Ready" ...
I0703 22:56:38.787610 22400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0703 22:56:38.805215 22400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0703 22:56:39.097469 22400 pod_ready.go:92] pod "etcd-functional-377836" in "kube-system" namespace has status "Ready":"True"
I0703 22:56:39.097481 22400 pod_ready.go:81] duration metric: took 400.474207ms for pod "etcd-functional-377836" in "kube-system" namespace to be "Ready" ...
I0703 22:56:39.097489 22400 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-377836" in "kube-system" namespace to be "Ready" ...
I0703 22:56:39.428454 22400 main.go:141] libmachine: Making call to close driver server
I0703 22:56:39.428467 22400 main.go:141] libmachine: (functional-377836) Calling .Close
I0703 22:56:39.428570 22400 main.go:141] libmachine: Making call to close driver server
I0703 22:56:39.428585 22400 main.go:141] libmachine: (functional-377836) Calling .Close
I0703 22:56:39.428769 22400 main.go:141] libmachine: Successfully made call to close driver server
I0703 22:56:39.428780 22400 main.go:141] libmachine: Making call to close connection to plugin binary
I0703 22:56:39.428787 22400 main.go:141] libmachine: Making call to close driver server
I0703 22:56:39.428793 22400 main.go:141] libmachine: (functional-377836) Calling .Close
I0703 22:56:39.428859 22400 main.go:141] libmachine: (functional-377836) DBG | Closing plugin on server side
I0703 22:56:39.428898 22400 main.go:141] libmachine: Successfully made call to close driver server
I0703 22:56:39.428908 22400 main.go:141] libmachine: Making call to close connection to plugin binary
I0703 22:56:39.428920 22400 main.go:141] libmachine: Making call to close driver server
I0703 22:56:39.428926 22400 main.go:141] libmachine: (functional-377836) Calling .Close
I0703 22:56:39.428983 22400 main.go:141] libmachine: (functional-377836) DBG | Closing plugin on server side
I0703 22:56:39.429008 22400 main.go:141] libmachine: Successfully made call to close driver server
I0703 22:56:39.429017 22400 main.go:141] libmachine: Making call to close connection to plugin binary
I0703 22:56:39.429257 22400 main.go:141] libmachine: (functional-377836) DBG | Closing plugin on server side
I0703 22:56:39.429300 22400 main.go:141] libmachine: Successfully made call to close driver server
I0703 22:56:39.429327 22400 main.go:141] libmachine: Making call to close connection to plugin binary
I0703 22:56:39.434894 22400 main.go:141] libmachine: Making call to close driver server
I0703 22:56:39.434902 22400 main.go:141] libmachine: (functional-377836) Calling .Close
I0703 22:56:39.435141 22400 main.go:141] libmachine: Successfully made call to close driver server
I0703 22:56:39.435152 22400 main.go:141] libmachine: Making call to close connection to plugin binary
I0703 22:56:39.435160 22400 main.go:141] libmachine: (functional-377836) DBG | Closing plugin on server side
I0703 22:56:39.437054 22400 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0703 22:56:39.438180 22400 addons.go:510] duration metric: took 1.010874227s for enable addons: enabled=[storage-provisioner default-storageclass]
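For reference, the two addons enabled at this step can be double-checked by hand once start finishes. These commands are illustrative and were not part of the recorded run; the profile name and the storage-provisioner pod name are taken from the log itself:

  out/minikube-linux-amd64 -p functional-377836 addons list
  kubectl --context functional-377836 get storageclass
  kubectl --context functional-377836 -n kube-system get pod storage-provisioner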
I0703 22:56:39.496870 22400 pod_ready.go:92] pod "kube-apiserver-functional-377836" in "kube-system" namespace has status "Ready":"True"
I0703 22:56:39.496882 22400 pod_ready.go:81] duration metric: took 399.386622ms for pod "kube-apiserver-functional-377836" in "kube-system" namespace to be "Ready" ...
I0703 22:56:39.496892 22400 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-377836" in "kube-system" namespace to be "Ready" ...
I0703 22:56:40.900223 22400 pod_ready.go:97] node "functional-377836" hosting pod "kube-controller-manager-functional-377836" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "functional-377836": Get "https://192.168.39.219:8441/api/v1/nodes/functional-377836": dial tcp 192.168.39.219:8441: connect: connection refused
I0703 22:56:40.900240 22400 pod_ready.go:81] duration metric: took 1.403341811s for pod "kube-controller-manager-functional-377836" in "kube-system" namespace to be "Ready" ...
E0703 22:56:40.900251 22400 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-377836" hosting pod "kube-controller-manager-functional-377836" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "functional-377836": Get "https://192.168.39.219:8441/api/v1/nodes/functional-377836": dial tcp 192.168.39.219:8441: connect: connection refused
I0703 22:56:40.900274 22400 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pgfqk" in "kube-system" namespace to be "Ready" ...
I0703 22:56:40.900614 22400 pod_ready.go:97] error getting pod "kube-proxy-pgfqk" in "kube-system" namespace (skipping!): Get "https://192.168.39.219:8441/api/v1/namespaces/kube-system/pods/kube-proxy-pgfqk": dial tcp 192.168.39.219:8441: connect: connection refused
I0703 22:56:40.900625 22400 pod_ready.go:81] duration metric: took 344.603µs for pod "kube-proxy-pgfqk" in "kube-system" namespace to be "Ready" ...
E0703 22:56:40.900634 22400 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-pgfqk" in "kube-system" namespace (skipping!): Get "https://192.168.39.219:8441/api/v1/namespaces/kube-system/pods/kube-proxy-pgfqk": dial tcp 192.168.39.219:8441: connect: connection refused
I0703 22:56:40.900647 22400 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-377836" in "kube-system" namespace to be "Ready" ...
I0703 22:56:40.901019 22400 pod_ready.go:97] error getting pod "kube-scheduler-functional-377836" in "kube-system" namespace (skipping!): Get "https://192.168.39.219:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-377836": dial tcp 192.168.39.219:8441: connect: connection refused
I0703 22:56:40.901029 22400 pod_ready.go:81] duration metric: took 375.908µs for pod "kube-scheduler-functional-377836" in "kube-system" namespace to be "Ready" ...
E0703 22:56:40.901036 22400 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-377836" in "kube-system" namespace (skipping!): Get "https://192.168.39.219:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-377836": dial tcp 192.168.39.219:8441: connect: connection refused
I0703 22:56:40.901049 22400 pod_ready.go:38] duration metric: took 2.260323765s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0703 22:56:40.901063 22400 api_server.go:52] waiting for apiserver process to appear ...
I0703 22:56:40.901101 22400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0703 22:56:40.915384 22400 api_server.go:72] duration metric: took 2.48809742s to wait for apiserver process to appear ...
I0703 22:56:40.915397 22400 api_server.go:88] waiting for apiserver healthz status ...
I0703 22:56:40.915413 22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
I0703 22:56:40.915791 22400 api_server.go:269] stopped: https://192.168.39.219:8441/healthz: Get "https://192.168.39.219:8441/healthz": dial tcp 192.168.39.219:8441: connect: connection refused
I0703 22:56:41.416474 22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
I0703 22:56:41.417020 22400 api_server.go:269] stopped: https://192.168.39.219:8441/healthz: Get "https://192.168.39.219:8441/healthz": dial tcp 192.168.39.219:8441: connect: connection refused
I0703 22:56:41.915614 22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
I0703 22:56:41.916144 22400 api_server.go:269] stopped: https://192.168.39.219:8441/healthz: Get "https://192.168.39.219:8441/healthz": dial tcp 192.168.39.219:8441: connect: connection refused
I0703 22:56:42.415740 22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
I0703 22:56:42.416228 22400 api_server.go:269] stopped: https://192.168.39.219:8441/healthz: Get "https://192.168.39.219:8441/healthz": dial tcp 192.168.39.219:8441: connect: connection refused
I0703 22:56:42.915810 22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
I0703 22:56:42.916339 22400 api_server.go:269] stopped: https://192.168.39.219:8441/healthz: Get "https://192.168.39.219:8441/healthz": dial tcp 192.168.39.219:8441: connect: connection refused
[... identical healthz checks repeated roughly every 500ms from 22:56:43 to 22:57:17, each attempt failing with: Get "https://192.168.39.219:8441/healthz": dial tcp 192.168.39.219:8441: connect: connection refused ...]
I0703 22:57:17.415951 22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
I0703 22:57:17.416483 22400 api_server.go:269] stopped: https://192.168.39.219:8441/healthz: Get "https://192.168.39.219:8441/healthz": dial tcp 192.168.39.219:8441: connect: connection refused
I0703 22:57:17.916165 22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
I0703 22:57:17.916769 22400 api_server.go:269] stopped: https://192.168.39.219:8441/healthz: Get "https://192.168.39.219:8441/healthz": dial tcp 192.168.39.219:8441: connect: connection refused
I0703 22:57:18.416382 22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
I0703 22:57:19.968917 22400 api_server.go:279] https://192.168.39.219:8441/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0703 22:57:19.968935 22400 api_server.go:103] status: https://192.168.39.219:8441/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0703 22:57:19.968946 22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
I0703 22:57:20.059918 22400 api_server.go:279] https://192.168.39.219:8441/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0703 22:57:20.059939 22400 api_server.go:103] status: https://192.168.39.219:8441/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0703 22:57:20.416388 22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
I0703 22:57:20.420821 22400 api_server.go:279] https://192.168.39.219:8441/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0703 22:57:20.420838 22400 api_server.go:103] status: https://192.168.39.219:8441/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
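The two hooks still reported as failed here, rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes, complete once the default RBAC objects and the built-in priority classes have been created. Had the 500s persisted, one way (illustrative, not part of the recorded run) to check for those objects directly would be:

  kubectl --context functional-377836 get clusterrole system:basic-user
  kubectl --context functional-377836 get priorityclass system-node-critical system-cluster-critical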
I0703 22:57:20.915476 22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
I0703 22:57:20.920198 22400 api_server.go:279] https://192.168.39.219:8441/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0703 22:57:20.920215 22400 api_server.go:103] status: https://192.168.39.219:8441/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0703 22:57:21.415768 22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
I0703 22:57:21.420174 22400 api_server.go:279] https://192.168.39.219:8441/healthz returned 200:
ok
I0703 22:57:21.426044 22400 api_server.go:141] control plane version: v1.30.2
I0703 22:57:21.426059 22400 api_server.go:131] duration metric: took 40.510656123s to wait for apiserver health ...
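The ~40s spent waiting here covers the window during which the apiserver at 192.168.39.219:8441 was refusing connections and then recovering, as logged above. Its pod state and restart count can be inspected directly; illustrative commands using the component label and pod name that appear in this log:

  kubectl --context functional-377836 -n kube-system get pods -l component=kube-apiserver -o wide
  kubectl --context functional-377836 -n kube-system describe pod kube-apiserver-functional-377836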
I0703 22:57:21.426066 22400 system_pods.go:43] waiting for kube-system pods to appear ...
I0703 22:57:21.433352 22400 system_pods.go:59] 7 kube-system pods found
I0703 22:57:21.433363 22400 system_pods.go:61] "coredns-7db6d8ff4d-4w94w" [f3801bb6-4310-419e-81d4-867823def4ec] Running
I0703 22:57:21.433368 22400 system_pods.go:61] "etcd-functional-377836" [9e11e64f-9978-4fe0-8346-cb2c9a913b63] Running
I0703 22:57:21.433373 22400 system_pods.go:61] "kube-apiserver-functional-377836" [22b983f4-c7b8-492c-bac2-90d4b68c0da4] Pending
I0703 22:57:21.433380 22400 system_pods.go:61] "kube-controller-manager-functional-377836" [1d054f92-1573-4ab2-94a9-7e0c7336adbc] Running
I0703 22:57:21.433384 22400 system_pods.go:61] "kube-proxy-pgfqk" [55d3c679-a05f-4dad-bd04-ab0e0b51d0b1] Running
I0703 22:57:21.433388 22400 system_pods.go:61] "kube-scheduler-functional-377836" [ebd41990-b874-4ee4-a670-21c271b39c4e] Running
I0703 22:57:21.433392 22400 system_pods.go:61] "storage-provisioner" [041fa0c0-0c71-426a-bffc-b59b57c3b224] Running
I0703 22:57:21.433397 22400 system_pods.go:74] duration metric: took 7.325714ms to wait for pod list to return data ...
I0703 22:57:21.433403 22400 default_sa.go:34] waiting for default service account to be created ...
I0703 22:57:21.435279 22400 default_sa.go:45] found service account: "default"
I0703 22:57:21.435286 22400 default_sa.go:55] duration metric: took 1.87962ms for default service account to be created ...
I0703 22:57:21.435292 22400 system_pods.go:116] waiting for k8s-apps to be running ...
I0703 22:57:21.439968 22400 system_pods.go:86] 7 kube-system pods found
I0703 22:57:21.439976 22400 system_pods.go:89] "coredns-7db6d8ff4d-4w94w" [f3801bb6-4310-419e-81d4-867823def4ec] Running
I0703 22:57:21.439979 22400 system_pods.go:89] "etcd-functional-377836" [9e11e64f-9978-4fe0-8346-cb2c9a913b63] Running
I0703 22:57:21.439982 22400 system_pods.go:89] "kube-apiserver-functional-377836" [22b983f4-c7b8-492c-bac2-90d4b68c0da4] Pending
I0703 22:57:21.439986 22400 system_pods.go:89] "kube-controller-manager-functional-377836" [1d054f92-1573-4ab2-94a9-7e0c7336adbc] Running
I0703 22:57:21.439988 22400 system_pods.go:89] "kube-proxy-pgfqk" [55d3c679-a05f-4dad-bd04-ab0e0b51d0b1] Running
I0703 22:57:21.439992 22400 system_pods.go:89] "kube-scheduler-functional-377836" [ebd41990-b874-4ee4-a670-21c271b39c4e] Running
I0703 22:57:21.439995 22400 system_pods.go:89] "storage-provisioner" [041fa0c0-0c71-426a-bffc-b59b57c3b224] Running
I0703 22:57:21.440006 22400 retry.go:31] will retry after 205.929915ms: missing components: kube-apiserver
[... the same 7-pod listing was polled with increasing backoff from 22:57:21 to 22:57:42, each check ending in "will retry ...: missing components: kube-apiserver" with kube-apiserver-functional-377836 still Pending ...]
I0703 22:57:48.614848 22400 system_pods.go:86] 7 kube-system pods found
I0703 22:57:48.614863 22400 system_pods.go:89] "coredns-7db6d8ff4d-4w94w" [f3801bb6-4310-419e-81d4-867823def4ec] Running
I0703 22:57:48.614867 22400 system_pods.go:89] "etcd-functional-377836" [9e11e64f-9978-4fe0-8346-cb2c9a913b63] Running
I0703 22:57:48.614870 22400 system_pods.go:89] "kube-apiserver-functional-377836" [22b983f4-c7b8-492c-bac2-90d4b68c0da4] Pending
I0703 22:57:48.614872 22400 system_pods.go:89] "kube-controller-manager-functional-377836" [1d054f92-1573-4ab2-94a9-7e0c7336adbc] Running
I0703 22:57:48.614875 22400 system_pods.go:89] "kube-proxy-pgfqk" [55d3c679-a05f-4dad-bd04-ab0e0b51d0b1] Running
I0703 22:57:48.614878 22400 system_pods.go:89] "kube-scheduler-functional-377836" [ebd41990-b874-4ee4-a670-21c271b39c4e] Running
I0703 22:57:48.614881 22400 system_pods.go:89] "storage-provisioner" [041fa0c0-0c71-426a-bffc-b59b57c3b224] Running
I0703 22:57:48.614893 22400 retry.go:31] will retry after 7.232822524s: missing components: kube-apiserver
I0703 22:57:55.853881 22400 system_pods.go:86] 7 kube-system pods found
I0703 22:57:55.853898 22400 system_pods.go:89] "coredns-7db6d8ff4d-4w94w" [f3801bb6-4310-419e-81d4-867823def4ec] Running
I0703 22:57:55.853904 22400 system_pods.go:89] "etcd-functional-377836" [9e11e64f-9978-4fe0-8346-cb2c9a913b63] Running
I0703 22:57:55.853915 22400 system_pods.go:89] "kube-apiserver-functional-377836" [22b983f4-c7b8-492c-bac2-90d4b68c0da4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0703 22:57:55.853922 22400 system_pods.go:89] "kube-controller-manager-functional-377836" [1d054f92-1573-4ab2-94a9-7e0c7336adbc] Running
I0703 22:57:55.853928 22400 system_pods.go:89] "kube-proxy-pgfqk" [55d3c679-a05f-4dad-bd04-ab0e0b51d0b1] Running
I0703 22:57:55.853933 22400 system_pods.go:89] "kube-scheduler-functional-377836" [ebd41990-b874-4ee4-a670-21c271b39c4e] Running
I0703 22:57:55.853938 22400 system_pods.go:89] "storage-provisioner" [041fa0c0-0c71-426a-bffc-b59b57c3b224] Running
I0703 22:57:55.853945 22400 system_pods.go:126] duration metric: took 34.418648577s to wait for k8s-apps to be running ...
I0703 22:57:55.853951 22400 system_svc.go:44] waiting for kubelet service to be running ....
I0703 22:57:55.854000 22400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0703 22:57:55.871280 22400 system_svc.go:56] duration metric: took 17.314505ms WaitForService to wait for kubelet
I0703 22:57:55.871293 22400 kubeadm.go:576] duration metric: took 1m17.444010838s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0703 22:57:55.871310 22400 node_conditions.go:102] verifying NodePressure condition ...
I0703 22:57:55.874516 22400 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0703 22:57:55.874526 22400 node_conditions.go:123] node cpu capacity is 2
I0703 22:57:55.874535 22400 node_conditions.go:105] duration metric: took 3.22147ms to run NodePressure ...
I0703 22:57:55.874544 22400 start.go:240] waiting for startup goroutines ...
I0703 22:57:55.874549 22400 start.go:245] waiting for cluster config update ...
I0703 22:57:55.874558 22400 start.go:254] writing updated cluster config ...
I0703 22:57:55.874806 22400 ssh_runner.go:195] Run: rm -f paused
I0703 22:57:55.926722 22400 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
I0703 22:57:55.928522 22400 out.go:177] * Done! kubectl is now configured to use "functional-377836" cluster and "default" namespace by default
==> Docker <==
Jul 03 22:56:25 functional-377836 dockerd[6155]: time="2024-07-03T22:56:25.641548683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 03 22:56:25 functional-377836 dockerd[6155]: time="2024-07-03T22:56:25.643510933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 03 22:56:25 functional-377836 cri-dockerd[6427]: time="2024-07-03T22:56:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fa142dde95510fec3b78bc7ea9b968256055dcc2f08d0a7f358e091bd954c5ee/resolv.conf as [nameserver 192.168.122.1]"
Jul 03 22:56:25 functional-377836 dockerd[6155]: time="2024-07-03T22:56:25.938106048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 03 22:56:25 functional-377836 dockerd[6155]: time="2024-07-03T22:56:25.938240838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 03 22:56:25 functional-377836 dockerd[6155]: time="2024-07-03T22:56:25.938321323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 03 22:56:25 functional-377836 dockerd[6155]: time="2024-07-03T22:56:25.938559611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 03 22:57:09 functional-377836 dockerd[6149]: time="2024-07-03T22:57:09.845564280Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=129792b10c13bc54890513a3774d03385ca7b10f4078c055ca2fd389dabfb25e
Jul 03 22:57:09 functional-377836 dockerd[6149]: time="2024-07-03T22:57:09.895345942Z" level=info msg="ignoring event" container=129792b10c13bc54890513a3774d03385ca7b10f4078c055ca2fd389dabfb25e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 03 22:57:09 functional-377836 dockerd[6155]: time="2024-07-03T22:57:09.895771635Z" level=info msg="shim disconnected" id=129792b10c13bc54890513a3774d03385ca7b10f4078c055ca2fd389dabfb25e namespace=moby
Jul 03 22:57:09 functional-377836 dockerd[6155]: time="2024-07-03T22:57:09.895915061Z" level=warning msg="cleaning up after shim disconnected" id=129792b10c13bc54890513a3774d03385ca7b10f4078c055ca2fd389dabfb25e namespace=moby
Jul 03 22:57:09 functional-377836 dockerd[6155]: time="2024-07-03T22:57:09.895928991Z" level=info msg="cleaning up dead shim" namespace=moby
Jul 03 22:57:09 functional-377836 dockerd[6149]: time="2024-07-03T22:57:09.969502999Z" level=info msg="ignoring event" container=6e039bed70198a674f9d1014dcf4c4bd6c1474aa1ac8229f4ff884e074fecfe2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 03 22:57:09 functional-377836 dockerd[6155]: time="2024-07-03T22:57:09.969745518Z" level=info msg="shim disconnected" id=6e039bed70198a674f9d1014dcf4c4bd6c1474aa1ac8229f4ff884e074fecfe2 namespace=moby
Jul 03 22:57:09 functional-377836 dockerd[6155]: time="2024-07-03T22:57:09.970815736Z" level=warning msg="cleaning up after shim disconnected" id=6e039bed70198a674f9d1014dcf4c4bd6c1474aa1ac8229f4ff884e074fecfe2 namespace=moby
Jul 03 22:57:09 functional-377836 dockerd[6155]: time="2024-07-03T22:57:09.970846752Z" level=info msg="cleaning up dead shim" namespace=moby
Jul 03 22:57:17 functional-377836 dockerd[6155]: time="2024-07-03T22:57:17.944805774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 03 22:57:17 functional-377836 dockerd[6155]: time="2024-07-03T22:57:17.945014802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 03 22:57:17 functional-377836 dockerd[6155]: time="2024-07-03T22:57:17.945027320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 03 22:57:17 functional-377836 dockerd[6155]: time="2024-07-03T22:57:17.945234344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 03 22:57:18 functional-377836 cri-dockerd[6427]: time="2024-07-03T22:57:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4b240ad69522b33b3e25233635b92d01c1a3b328290d69f6171bddc5924a8344/resolv.conf as [nameserver 192.168.122.1]"
Jul 03 22:57:18 functional-377836 dockerd[6155]: time="2024-07-03T22:57:18.104833953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 03 22:57:18 functional-377836 dockerd[6155]: time="2024-07-03T22:57:18.105215502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 03 22:57:18 functional-377836 dockerd[6155]: time="2024-07-03T22:57:18.105238330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 03 22:57:18 functional-377836 dockerd[6155]: time="2024-07-03T22:57:18.105427227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
0790dd5ddc5ea 56ce0fd9fb532 38 seconds ago Running kube-apiserver 0 4b240ad69522b kube-apiserver-functional-377836
3640a066b0712 cbb01a7bd410d About a minute ago Running coredns 3 fa142dde95510 coredns-7db6d8ff4d-4w94w
ee9b7d68186f2 53c535741fb44 About a minute ago Running kube-proxy 3 b4b24202e8c4a kube-proxy-pgfqk
7917e365b1481 6e38f40d628db About a minute ago Running storage-provisioner 4 e7aa3982a9066 storage-provisioner
f2cde61576666 7820c83aa1394 About a minute ago Running kube-scheduler 3 2ce690ad303e2 kube-scheduler-functional-377836
8991ec818d243 3861cfcd7c04c About a minute ago Running etcd 3 843b20a3a87d9 etcd-functional-377836
2be910c8e295a e874818b3caac About a minute ago Running kube-controller-manager 3 b7387557faedf kube-controller-manager-functional-377836
6abeb2402f6db cbb01a7bd410d About a minute ago Created coredns 2 b406ab73e5d2f coredns-7db6d8ff4d-4w94w
f9863ca2c40f6 e874818b3caac About a minute ago Created kube-controller-manager 2 fb0a93e6301f9 kube-controller-manager-functional-377836
08c3c84948f0a 53c535741fb44 About a minute ago Created kube-proxy 2 d73ef8b96e2cb kube-proxy-pgfqk
71c0b16f3679a 3861cfcd7c04c About a minute ago Created etcd 2 d19de58b969dd etcd-functional-377836
4aa40d2e115b4 7820c83aa1394 About a minute ago Created kube-scheduler 2 757aac77b2425 kube-scheduler-functional-377836
3720e138f218e 6e38f40d628db About a minute ago Exited storage-provisioner 3 3180d83316a48 storage-provisioner
==> coredns [3640a066b071] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
[INFO] 127.0.0.1:45296 - 44991 "HINFO IN 8945907258705290674.6296398039332437337. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.04696491s
==> coredns [6abeb2402f6d] <==
==> describe nodes <==
Name: functional-377836
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-377836
kubernetes.io/os=linux
minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
minikube.k8s.io/name=functional-377836
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_07_03T22_54_20_0700
minikube.k8s.io/version=v1.33.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 03 Jul 2024 22:54:17 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-377836
AcquireTime: <unset>
RenewTime: Wed, 03 Jul 2024 22:57:55 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 03 Jul 2024 22:57:25 +0000 Wed, 03 Jul 2024 22:57:25 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 03 Jul 2024 22:57:25 +0000 Wed, 03 Jul 2024 22:57:25 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 03 Jul 2024 22:57:25 +0000 Wed, 03 Jul 2024 22:57:25 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 03 Jul 2024 22:57:25 +0000 Wed, 03 Jul 2024 22:57:25 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.219
Hostname: functional-377836
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912780Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912780Ki
pods: 110
System Info:
Machine ID: d0f757d88bb549828598c5bc7b79d26e
System UUID: d0f757d8-8bb5-4982-8598-c5bc7b79d26e
Boot ID: fae7a8c5-c2c5-45e8-b79d-40bf2a5ee916
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://27.0.3
Kubelet Version: v1.30.2
Kube-Proxy Version: v1.30.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-7db6d8ff4d-4w94w 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 3m23s
kube-system etcd-functional-377836 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 3m37s
kube-system kube-apiserver-functional-377836 250m (12%) 0 (0%) 0 (0%) 0 (0%) 36s
kube-system kube-controller-manager-functional-377836 200m (10%) 0 (0%) 0 (0%) 0 (0%) 3m37s
kube-system kube-proxy-pgfqk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m23s
kube-system kube-scheduler-functional-377836 100m (5%) 0 (0%) 0 (0%) 0 (0%) 3m37s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m22s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 170Mi (4%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 90s kube-proxy
Normal Starting 2m16s kube-proxy
Normal Starting 3m21s kube-proxy
Normal NodeAllocatableEnforced 3m43s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 3m43s (x8 over 3m43s) kubelet Node functional-377836 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m43s (x8 over 3m43s) kubelet Node functional-377836 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m43s (x7 over 3m43s) kubelet Node functional-377836 status is now: NodeHasSufficientPID
Normal NodeHasSufficientPID 3m37s kubelet Node functional-377836 status is now: NodeHasSufficientPID
Normal Starting 3m37s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 3m37s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 3m37s kubelet Node functional-377836 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m37s kubelet Node functional-377836 status is now: NodeHasNoDiskPressure
Normal NodeReady 3m35s kubelet Node functional-377836 status is now: NodeReady
Normal RegisteredNode 3m24s node-controller Node functional-377836 event: Registered Node functional-377836 in Controller
Normal NodeHasSufficientMemory 2m22s (x8 over 2m22s) kubelet Node functional-377836 status is now: NodeHasSufficientMemory
Normal Starting 2m22s kubelet Starting kubelet.
Normal NodeHasNoDiskPressure 2m22s (x8 over 2m22s) kubelet Node functional-377836 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m22s (x7 over 2m22s) kubelet Node functional-377836 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m22s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 2m5s node-controller Node functional-377836 event: Registered Node functional-377836 in Controller
Normal Starting 97s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 97s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 96s (x8 over 97s) kubelet Node functional-377836 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 96s (x8 over 97s) kubelet Node functional-377836 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 96s (x7 over 97s) kubelet Node functional-377836 status is now: NodeHasSufficientPID
Normal RegisteredNode 80s node-controller Node functional-377836 event: Registered Node functional-377836 in Controller
Normal NodeNotReady 35s node-controller Node functional-377836 status is now: NodeNotReady
==> dmesg <==
[ +0.164742] systemd-fstab-generator[4014]: Ignoring "noauto" option for root device
[ +0.457412] systemd-fstab-generator[4179]: Ignoring "noauto" option for root device
[ +2.012633] systemd-fstab-generator[4301]: Ignoring "noauto" option for root device
[ +0.063428] kauditd_printk_skb: 137 callbacks suppressed
[ +5.477782] kauditd_printk_skb: 52 callbacks suppressed
[ +11.712346] kauditd_printk_skb: 32 callbacks suppressed
[ +1.066440] systemd-fstab-generator[5212]: Ignoring "noauto" option for root device
[ +5.075303] kauditd_printk_skb: 14 callbacks suppressed
[Jul 3 22:56] systemd-fstab-generator[5678]: Ignoring "noauto" option for root device
[ +0.326346] systemd-fstab-generator[5712]: Ignoring "noauto" option for root device
[ +0.170294] systemd-fstab-generator[5724]: Ignoring "noauto" option for root device
[ +0.162508] systemd-fstab-generator[5738]: Ignoring "noauto" option for root device
[ +5.206359] kauditd_printk_skb: 89 callbacks suppressed
[ +7.689430] systemd-fstab-generator[6375]: Ignoring "noauto" option for root device
[ +0.130438] systemd-fstab-generator[6387]: Ignoring "noauto" option for root device
[ +0.131761] systemd-fstab-generator[6399]: Ignoring "noauto" option for root device
[ +0.149993] systemd-fstab-generator[6414]: Ignoring "noauto" option for root device
[ +0.464102] systemd-fstab-generator[6583]: Ignoring "noauto" option for root device
[ +1.509332] kauditd_printk_skb: 185 callbacks suppressed
[ +1.464439] systemd-fstab-generator[7430]: Ignoring "noauto" option for root device
[ +5.534761] kauditd_printk_skb: 61 callbacks suppressed
[ +11.664602] kauditd_printk_skb: 26 callbacks suppressed
[ +1.641291] systemd-fstab-generator[8463]: Ignoring "noauto" option for root device
[Jul 3 22:57] kauditd_printk_skb: 16 callbacks suppressed
[ +35.272072] kauditd_printk_skb: 2 callbacks suppressed
==> etcd [71c0b16f3679] <==
==> etcd [8991ec818d24] <==
{"level":"info","ts":"2024-07-03T22:56:21.517942Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2024-07-03T22:56:21.518068Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2024-07-03T22:56:21.518435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 switched to configuration voters=(2930583753691095924)"}
{"level":"info","ts":"2024-07-03T22:56:21.520954Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"14fc06d09ccfd789","local-member-id":"28ab8665a749e374","added-peer-id":"28ab8665a749e374","added-peer-peer-urls":["https://192.168.39.219:2380"]}
{"level":"info","ts":"2024-07-03T22:56:21.521221Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"14fc06d09ccfd789","local-member-id":"28ab8665a749e374","cluster-version":"3.5"}
{"level":"info","ts":"2024-07-03T22:56:21.523938Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-07-03T22:56:21.522057Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2024-07-03T22:56:21.524683Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"28ab8665a749e374","initial-advertise-peer-urls":["https://192.168.39.219:2380"],"listen-peer-urls":["https://192.168.39.219:2380"],"advertise-client-urls":["https://192.168.39.219:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.219:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2024-07-03T22:56:21.524725Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2024-07-03T22:56:21.522081Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.219:2380"}
{"level":"info","ts":"2024-07-03T22:56:21.525073Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.219:2380"}
{"level":"info","ts":"2024-07-03T22:56:22.961398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 is starting a new election at term 3"}
{"level":"info","ts":"2024-07-03T22:56:22.961523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 became pre-candidate at term 3"}
{"level":"info","ts":"2024-07-03T22:56:22.961643Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 received MsgPreVoteResp from 28ab8665a749e374 at term 3"}
{"level":"info","ts":"2024-07-03T22:56:22.961722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 became candidate at term 4"}
{"level":"info","ts":"2024-07-03T22:56:22.961742Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 received MsgVoteResp from 28ab8665a749e374 at term 4"}
{"level":"info","ts":"2024-07-03T22:56:22.961804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 became leader at term 4"}
{"level":"info","ts":"2024-07-03T22:56:22.961847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 28ab8665a749e374 elected leader 28ab8665a749e374 at term 4"}
{"level":"info","ts":"2024-07-03T22:56:22.967489Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"28ab8665a749e374","local-member-attributes":"{Name:functional-377836 ClientURLs:[https://192.168.39.219:2379]}","request-path":"/0/members/28ab8665a749e374/attributes","cluster-id":"14fc06d09ccfd789","publish-timeout":"7s"}
{"level":"info","ts":"2024-07-03T22:56:22.967501Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-07-03T22:56:22.967523Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-07-03T22:56:22.967784Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-07-03T22:56:22.968473Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-07-03T22:56:22.970485Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.219:2379"}
{"level":"info","ts":"2024-07-03T22:56:22.970547Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
==> kernel <==
22:57:56 up 4 min, 0 users, load average: 1.12, 0.79, 0.34
Linux functional-377836 5.10.207 #1 SMP Tue Jul 2 18:53:17 UTC 2024 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [0790dd5ddc5e] <==
I0703 22:57:19.992224 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0703 22:57:19.992276 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0703 22:57:19.992346 1 crd_finalizer.go:266] Starting CRDFinalizer
I0703 22:57:19.992678 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0703 22:57:19.992981 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0703 22:57:20.088174 1 shared_informer.go:320] Caches are synced for configmaps
I0703 22:57:20.089350 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0703 22:57:20.090290 1 apf_controller.go:379] Running API Priority and Fairness config worker
I0703 22:57:20.090403 1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
I0703 22:57:20.091176 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0703 22:57:20.104075 1 shared_informer.go:320] Caches are synced for node_authorizer
I0703 22:57:20.104349 1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
I0703 22:57:20.104473 1 handler_discovery.go:447] Starting ResourceDiscoveryManager
I0703 22:57:20.104740 1 shared_informer.go:320] Caches are synced for crd-autoregister
I0703 22:57:20.105382 1 aggregator.go:165] initial CRD sync complete...
I0703 22:57:20.105413 1 autoregister_controller.go:141] Starting autoregister controller
I0703 22:57:20.105418 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0703 22:57:20.105424 1 cache.go:39] Caches are synced for autoregister controller
I0703 22:57:20.115591 1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
I0703 22:57:20.115906 1 policy_source.go:224] refreshing policies
I0703 22:57:20.167545 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0703 22:57:20.892319 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
W0703 22:57:21.134719 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.219]
I0703 22:57:21.136319 1 controller.go:615] quota admission added evaluator for: endpoints
I0703 22:57:21.140457 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
==> kube-controller-manager [2be910c8e295] <==
E0703 22:57:20.016164 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodTemplate: unknown (get podtemplates)
E0703 22:57:20.016204 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
E0703 22:57:20.016226 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ResourceQuota: unknown (get resourcequotas)
E0703 22:57:20.016241 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
E0703 22:57:20.016256 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.IngressClass: unknown (get ingressclasses.networking.k8s.io)
E0703 22:57:20.016293 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RoleBinding: unknown (get rolebindings.rbac.authorization.k8s.io)
E0703 22:57:20.016332 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v2.HorizontalPodAutoscaler: unknown (get horizontalpodautoscalers.autoscaling)
E0703 22:57:20.016352 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
E0703 22:57:20.016367 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ControllerRevision: unknown (get controllerrevisions.apps)
E0703 22:57:20.016380 1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: unknown
E0703 22:57:20.016391 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
E0703 22:57:20.035317 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CronJob: unknown (get cronjobs.batch)
E0703 22:57:20.035656 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Ingress: unknown (get ingresses.networking.k8s.io)
E0703 22:57:20.037202 1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: unknown
I0703 22:57:21.644036 1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
I0703 22:57:21.661961 1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/kube-apiserver-functional-377836" err="Operation cannot be fulfilled on pods \"kube-apiserver-functional-377836\": StorageError: invalid object, Code: 4, Key: /registry/pods/kube-system/kube-apiserver-functional-377836, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 80bc54ed-3e0b-40c2-9e36-5889e4c30b1d, UID in object meta: 22b983f4-c7b8-492c-bac2-90d4b68c0da4"
E0703 22:57:21.723835 1 node_lifecycle_controller.go:753] unable to mark all pods NotReady on node functional-377836: Operation cannot be fulfilled on pods "kube-apiserver-functional-377836": StorageError: invalid object, Code: 4, Key: /registry/pods/kube-system/kube-apiserver-functional-377836, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 80bc54ed-3e0b-40c2-9e36-5889e4c30b1d, UID in object meta: 22b983f4-c7b8-492c-bac2-90d4b68c0da4; queuing for retry
I0703 22:57:21.724443 1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
E0703 22:57:26.729941 1 node_lifecycle_controller.go:973] "Error updating node" err="Operation cannot be fulfilled on nodes \"functional-377836\": the object has been modified; please apply your changes to the latest version and try again" logger="node-lifecycle-controller" node="functional-377836"
I0703 22:57:26.752719 1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
I0703 22:57:31.637442 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="169.575µs"
I0703 22:57:45.183469 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="27.768982ms"
I0703 22:57:45.184067 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="205.96µs"
I0703 22:57:49.999979 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="12.278403ms"
I0703 22:57:50.000063 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.551µs"
==> kube-controller-manager [f9863ca2c40f] <==
==> kube-proxy [08c3c84948f0] <==
==> kube-proxy [ee9b7d68186f] <==
I0703 22:56:25.783083 1 server_linux.go:69] "Using iptables proxy"
I0703 22:56:25.816323 1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.219"]
I0703 22:56:25.867693 1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
I0703 22:56:25.867754 1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0703 22:56:25.867828 1 server_linux.go:165] "Using iptables Proxier"
I0703 22:56:25.871028 1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0703 22:56:25.871524 1 server.go:872] "Version info" version="v1.30.2"
I0703 22:56:25.871839 1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0703 22:56:25.873442 1 config.go:192] "Starting service config controller"
I0703 22:56:25.875815 1 shared_informer.go:313] Waiting for caches to sync for service config
I0703 22:56:25.873606 1 config.go:101] "Starting endpoint slice config controller"
I0703 22:56:25.875852 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0703 22:56:25.874138 1 config.go:319] "Starting node config controller"
I0703 22:56:25.875916 1 shared_informer.go:313] Waiting for caches to sync for node config
I0703 22:56:25.976460 1 shared_informer.go:320] Caches are synced for node config
I0703 22:56:25.976509 1 shared_informer.go:320] Caches are synced for service config
I0703 22:56:25.976561 1 shared_informer.go:320] Caches are synced for endpoint slice config
==> kube-scheduler [4aa40d2e115b] <==
==> kube-scheduler [f2cde6157666] <==
I0703 22:56:22.295321 1 serving.go:380] Generated self-signed cert in-memory
W0703 22:56:24.184821 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0703 22:56:24.185128 1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0703 22:56:24.185332 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
W0703 22:56:24.185454 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0703 22:56:24.281556 1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
I0703 22:56:24.281807 1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0703 22:56:24.283780 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I0703 22:56:24.284140 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0703 22:56:24.287951 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0703 22:56:24.284158 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0703 22:56:24.388678 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0703 22:57:19.981106 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
E0703 22:57:19.983405 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
E0703 22:57:19.983693 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
==> kubelet <==
Jul 03 22:57:10 functional-377836 kubelet[7437]: I0703 22:57:10.268552 7437 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"59d9adb16464a253be5b23867f9bc024882c1b7d23cd5a7f0476a54e5cfb47c5"} err="failed to get container status \"59d9adb16464a253be5b23867f9bc024882c1b7d23cd5a7f0476a54e5cfb47c5\": rpc error: code = Unknown desc = Error response from daemon: No such container: 59d9adb16464a253be5b23867f9bc024882c1b7d23cd5a7f0476a54e5cfb47c5"
Jul 03 22:57:11 functional-377836 kubelet[7437]: E0703 22:57:11.198062 7437 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-377836?timeout=10s\": dial tcp 192.168.39.219:8441: connect: connection refused" interval="7s"
Jul 03 22:57:11 functional-377836 kubelet[7437]: I0703 22:57:11.853091 7437 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e880399f9148ced2c133b53d7537abc" path="/var/lib/kubelet/pods/1e880399f9148ced2c133b53d7537abc/volumes"
Jul 03 22:57:15 functional-377836 kubelet[7437]: E0703 22:57:15.300475 7437 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-377836\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-377836?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.219:8441: connect: connection refused"
Jul 03 22:57:15 functional-377836 kubelet[7437]: E0703 22:57:15.301674 7437 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-377836\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-377836?timeout=10s\": dial tcp 192.168.39.219:8441: connect: connection refused"
Jul 03 22:57:15 functional-377836 kubelet[7437]: E0703 22:57:15.302277 7437 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-377836\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-377836?timeout=10s\": dial tcp 192.168.39.219:8441: connect: connection refused"
Jul 03 22:57:15 functional-377836 kubelet[7437]: E0703 22:57:15.302805 7437 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-377836\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-377836?timeout=10s\": dial tcp 192.168.39.219:8441: connect: connection refused"
Jul 03 22:57:15 functional-377836 kubelet[7437]: E0703 22:57:15.303359 7437 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-377836\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-377836?timeout=10s\": dial tcp 192.168.39.219:8441: connect: connection refused"
Jul 03 22:57:15 functional-377836 kubelet[7437]: E0703 22:57:15.303417 7437 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
Jul 03 22:57:17 functional-377836 kubelet[7437]: I0703 22:57:17.848505 7437 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-377836" podUID="80bc54ed-3e0b-40c2-9e36-5889e4c30b1d"
Jul 03 22:57:17 functional-377836 kubelet[7437]: E0703 22:57:17.849800 7437 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-377836\": dial tcp 192.168.39.219:8441: connect: connection refused" pod="kube-system/kube-apiserver-functional-377836"
Jul 03 22:57:18 functional-377836 kubelet[7437]: E0703 22:57:18.023390 7437 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 192.168.39.219:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-377836.17ded6081c17d2d2 kube-system 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-377836,UID:af96a50731406e4b1662571b5822a697,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"registry.k8s.io/kube-apiserver:v1.30.2\" already present on machine,Source:EventSource{Component:kubelet,Host:functional-377836,},FirstTimestamp:2024-07-03 22:57:18.021513938 +0000 UTC m=+58.288217776,LastTimestamp:2024-07-03 22:57:18.021513938 +0000 UTC m=+58.288217776,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-377836,}"
Jul 03 22:57:18 functional-377836 kubelet[7437]: E0703 22:57:18.199716 7437 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-377836?timeout=10s\": dial tcp 192.168.39.219:8441: connect: connection refused" interval="7s"
Jul 03 22:57:18 functional-377836 kubelet[7437]: I0703 22:57:18.311568 7437 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-377836" podUID="80bc54ed-3e0b-40c2-9e36-5889e4c30b1d"
Jul 03 22:57:19 functional-377836 kubelet[7437]: E0703 22:57:19.880047 7437 iptables.go:577] "Could not set up iptables canary" err=<
Jul 03 22:57:19 functional-377836 kubelet[7437]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Jul 03 22:57:19 functional-377836 kubelet[7437]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Jul 03 22:57:19 functional-377836 kubelet[7437]: Perhaps ip6tables or your kernel needs to be upgraded.
Jul 03 22:57:19 functional-377836 kubelet[7437]: > table="nat" chain="KUBE-KUBELET-CANARY"
Jul 03 22:57:19 functional-377836 kubelet[7437]: E0703 22:57:19.980647 7437 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
Jul 03 22:57:20 functional-377836 kubelet[7437]: I0703 22:57:20.158957 7437 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-377836"
Jul 03 22:57:20 functional-377836 kubelet[7437]: I0703 22:57:20.184405 7437 status_manager.go:877] "Failed to update status for pod" pod="kube-system/kube-apiserver-functional-377836" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80bc54ed-3e0b-40c2-9e36-5889e4c30b1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2024-07-03T22:57:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2024-07-03T22:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2024-07-03T22:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"docker://0790dd5ddc5ea977a68ed1752c2402bd2edd431104d0d2889326b8b61e057862\\\",\\\"image\\\":\\\"registry.k8s.io/kube-apiserver:v1.30.2\\\",\\\"imageID\\\":\\\"docker-pullable://registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2024-07-03T22:57:18Z\\\"}}}]}}\" for pod \"kube-system\"/\"kube-apiserver-functional-377836\": Pod \"kube-apiserver-functional-377836\" is invalid: metadata.uid: Invalid value: \"80bc54ed-3e0b-40c2-9e36-5889e4c30b1d\": field is immutable"
Jul 03 22:57:20 functional-377836 kubelet[7437]: I0703 22:57:20.327504 7437 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-377836" podUID="80bc54ed-3e0b-40c2-9e36-5889e4c30b1d"
Jul 03 22:57:27 functional-377836 kubelet[7437]: I0703 22:57:27.853490 7437 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-377836" podUID="80bc54ed-3e0b-40c2-9e36-5889e4c30b1d"
Jul 03 22:57:49 functional-377836 kubelet[7437]: I0703 22:57:49.886832 7437 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-377836" podStartSLOduration=29.886812599 podStartE2EDuration="29.886812599s" podCreationTimestamp="2024-07-03 22:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-03 22:57:48.692934581 +0000 UTC m=+88.959638428" watchObservedRunningTime="2024-07-03 22:57:49.886812599 +0000 UTC m=+90.153516442"
==> storage-provisioner [3720e138f218] <==
I0703 22:55:57.286018 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0703 22:55:57.298345 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0703 22:55:57.300074 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
==> storage-provisioner [7917e365b148] <==
I0703 22:56:25.485591 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0703 22:56:25.521715 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0703 22:56:25.521939 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
E0703 22:56:39.900004 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0703 22:56:42.919827 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0703 22:56:46.569448 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0703 22:56:48.728324 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0703 22:56:51.105471 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0703 22:56:53.338960 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0703 22:56:56.061793 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0703 22:56:59.299057 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0703 22:57:03.253191 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0703 22:57:05.768332 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0703 22:57:08.682577 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0703 22:57:11.447330 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0703 22:57:14.574234 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0703 22:57:17.254538 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0703 22:57:19.959223 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
I0703 22:57:23.604352 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0703 22:57:23.604823 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ad933137-5c62-417f-8f1f-2e28493beebc", APIVersion:"v1", ResourceVersion:"688", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-377836_4acc0783-8e29-4d43-b1fc-96eb83434b04 became leader
I0703 22:57:23.604973 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-377836_4acc0783-8e29-4d43-b1fc-96eb83434b04!
I0703 22:57:23.705324 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-377836_4acc0783-8e29-4d43-b1fc-96eb83434b04!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-377836 -n functional-377836
helpers_test.go:261: (dbg) Run: kubectl --context functional-377836 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/ComponentHealth FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/ComponentHealth (1.58s)