=== RUN TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run: out/minikube-linux-amd64 start -p pause-388106 --alsologtostderr -v=1 --driver=kvm2
pause_test.go:92: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p pause-388106 --alsologtostderr -v=1 --driver=kvm2 : exit status 63 (6.107931621s)
-- stdout --
* [pause-388106] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=19264
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/19264-3824/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3824/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on existing profile
-- /stdout --
** stderr **
I0717 01:14:02.387640 46310 out.go:291] Setting OutFile to fd 1 ...
I0717 01:14:02.387874 46310 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 01:14:02.387902 46310 out.go:304] Setting ErrFile to fd 2...
I0717 01:14:02.387918 46310 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 01:14:02.388381 46310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3824/.minikube/bin
I0717 01:14:02.388980 46310 out.go:298] Setting JSON to false
I0717 01:14:02.390058 46310 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3390,"bootTime":1721175452,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0717 01:14:02.390121 46310 start.go:139] virtualization: kvm guest
I0717 01:14:02.392365 46310 out.go:177] * [pause-388106] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
I0717 01:14:02.393763 46310 out.go:177] - MINIKUBE_LOCATION=19264
I0717 01:14:02.393758 46310 notify.go:220] Checking for updates...
I0717 01:14:02.396184 46310 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0717 01:14:02.397534 46310 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19264-3824/kubeconfig
I0717 01:14:02.398846 46310 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3824/.minikube
I0717 01:14:02.400169 46310 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0717 01:14:02.401417 46310 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0717 01:14:02.403057 46310 config.go:182] Loaded profile config "pause-388106": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 01:14:02.403496 46310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0717 01:14:02.403557 46310 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 01:14:02.419030 46310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37123
I0717 01:14:02.419479 46310 main.go:141] libmachine: () Calling .GetVersion
I0717 01:14:02.420215 46310 main.go:141] libmachine: Using API Version 1
I0717 01:14:02.420238 46310 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 01:14:02.420599 46310 main.go:141] libmachine: () Calling .GetMachineName
I0717 01:14:02.420758 46310 main.go:141] libmachine: (pause-388106) Calling .DriverName
I0717 01:14:02.421013 46310 driver.go:392] Setting default libvirt URI to qemu:///system
I0717 01:14:02.421296 46310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0717 01:14:02.421326 46310 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 01:14:02.435956 46310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33931
I0717 01:14:02.436414 46310 main.go:141] libmachine: () Calling .GetVersion
I0717 01:14:02.436886 46310 main.go:141] libmachine: Using API Version 1
I0717 01:14:02.436908 46310 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 01:14:02.437202 46310 main.go:141] libmachine: () Calling .GetMachineName
I0717 01:14:02.437557 46310 main.go:141] libmachine: (pause-388106) Calling .DriverName
I0717 01:14:08.441388 46310 out.go:177] * Using the kvm2 driver based on existing profile
I0717 01:14:08.443046 46310 start.go:297] selected driver: kvm2
I0717 01:14:08.443075 46310 start.go:901] validating driver "kvm2" against &{Name:pause-388106 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:pause-388106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 01:14:08.443204 46310 start.go:912] status for kvm2: {Installed:true Healthy:false Running:false NeedsImprovement:false Error:/usr/bin/virsh domcapabilities --virttype kvm timed out Reason: Fix:Check that the libvirtd service is running and the socket is ready Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/ Version:}
I0717 01:14:08.444701 46310 out.go:177]
W0717 01:14:08.446099 46310 out.go:239] X Exiting due to PROVIDER_KVM2_NOT_RUNNING: /usr/bin/virsh domcapabilities --virttype kvm timed out
X Exiting due to PROVIDER_KVM2_NOT_RUNNING: /usr/bin/virsh domcapabilities --virttype kvm timed out
W0717 01:14:08.446164 46310 out.go:239] * Suggestion: Check that the libvirtd service is running and the socket is ready
* Suggestion: Check that the libvirtd service is running and the socket is ready
W0717 01:14:08.446211 46310 out.go:239] * Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
* Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
I0717 01:14:08.447502 46310 out.go:177]
** /stderr **
pause_test.go:94: failed to second start a running minikube with args: "out/minikube-linux-amd64 start -p pause-388106 --alsologtostderr -v=1 --driver=kvm2 " : exit status 63
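The driver status in the stderr above shows the root cause: the health probe /usr/bin/virsh domcapabilities --virttype kvm timed out on the host, so minikube rejected the kvm2 driver (PROVIDER_KVM2_NOT_RUNNING, exit status 63) before it could reuse the existing profile. A minimal way to follow the logged suggestion on the CI host, assuming libvirtd is managed by systemd as on this Ubuntu 20.04 agent (illustrative commands, not part of the test output):

    # Check that the libvirtd service is running and its socket is ready
    sudo systemctl status libvirtd --no-pager
    ls -l /var/run/libvirt/libvirt-sock
    # Re-run the probe that timed out; a healthy host returns domain capabilities XML promptly
    sudo virsh -c qemu:///system domcapabilities --virttype kvm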
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p pause-388106 -n pause-388106
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p pause-388106 logs -n 25
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs:
-- stdout --
==> Audit <==
|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| ssh | -p cilium-851161 sudo cat | cilium-851161 | jenkins | v1.33.1 | 17 Jul 24 01:10 UTC | |
| | /lib/systemd/system/containerd.service | | | | | |
| ssh | -p cilium-851161 sudo cat | cilium-851161 | jenkins | v1.33.1 | 17 Jul 24 01:10 UTC | |
| | /etc/containerd/config.toml | | | | | |
| ssh | -p cilium-851161 sudo | cilium-851161 | jenkins | v1.33.1 | 17 Jul 24 01:10 UTC | |
| | containerd config dump | | | | | |
| ssh | -p cilium-851161 sudo | cilium-851161 | jenkins | v1.33.1 | 17 Jul 24 01:10 UTC | |
| | systemctl status crio --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p cilium-851161 sudo | cilium-851161 | jenkins | v1.33.1 | 17 Jul 24 01:10 UTC | |
| | systemctl cat crio --no-pager | | | | | |
| ssh | -p cilium-851161 sudo find | cilium-851161 | jenkins | v1.33.1 | 17 Jul 24 01:10 UTC | |
| | /etc/crio -type f -exec sh -c | | | | | |
| | 'echo {}; cat {}' \; | | | | | |
| ssh | -p cilium-851161 sudo crio | cilium-851161 | jenkins | v1.33.1 | 17 Jul 24 01:10 UTC | |
| | config | | | | | |
| delete | -p cilium-851161 | cilium-851161 | jenkins | v1.33.1 | 17 Jul 24 01:10 UTC | 17 Jul 24 01:10 UTC |
| start | -p kubernetes-upgrade-070449 | kubernetes-upgrade-070449 | jenkins | v1.33.1 | 17 Jul 24 01:10 UTC | 17 Jul 24 01:11 UTC |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=kvm2 | | | | | |
| ssh | force-systemd-flag-642604 | force-systemd-flag-642604 | jenkins | v1.33.1 | 17 Jul 24 01:11 UTC | 17 Jul 24 01:11 UTC |
| | ssh docker info --format | | | | | |
| | {{.CgroupDriver}} | | | | | |
| delete | -p force-systemd-flag-642604 | force-systemd-flag-642604 | jenkins | v1.33.1 | 17 Jul 24 01:11 UTC | 17 Jul 24 01:11 UTC |
| start | -p stopped-upgrade-275812 | minikube | jenkins | v1.26.0 | 17 Jul 24 01:11 UTC | 17 Jul 24 01:12 UTC |
| | --memory=2200 --vm-driver=kvm2 | | | | | |
| | | | | | | |
| delete | -p offline-docker-607835 | offline-docker-607835 | jenkins | v1.33.1 | 17 Jul 24 01:11 UTC | 17 Jul 24 01:11 UTC |
| start | -p pause-388106 --memory=2048 | pause-388106 | jenkins | v1.33.1 | 17 Jul 24 01:11 UTC | 17 Jul 24 01:14 UTC |
| | --install-addons=false | | | | | |
| | --wait=all --driver=kvm2 | | | | | |
| stop | -p kubernetes-upgrade-070449 | kubernetes-upgrade-070449 | jenkins | v1.33.1 | 17 Jul 24 01:11 UTC | 17 Jul 24 01:12 UTC |
| start | -p kubernetes-upgrade-070449 | kubernetes-upgrade-070449 | jenkins | v1.33.1 | 17 Jul 24 01:12 UTC | 17 Jul 24 01:13 UTC |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.31.0-beta.0 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=kvm2 | | | | | |
| start | -p running-upgrade-644614 | running-upgrade-644614 | jenkins | v1.33.1 | 17 Jul 24 01:12 UTC | 17 Jul 24 01:13 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=kvm2 | | | | | |
| stop | stopped-upgrade-275812 stop | minikube | jenkins | v1.26.0 | 17 Jul 24 01:12 UTC | 17 Jul 24 01:12 UTC |
| start | -p stopped-upgrade-275812 | stopped-upgrade-275812 | jenkins | v1.33.1 | 17 Jul 24 01:12 UTC | 17 Jul 24 01:13 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=kvm2 | | | | | |
| start | -p kubernetes-upgrade-070449 | kubernetes-upgrade-070449 | jenkins | v1.33.1 | 17 Jul 24 01:13 UTC | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --driver=kvm2 | | | | | |
| start | -p kubernetes-upgrade-070449 | kubernetes-upgrade-070449 | jenkins | v1.33.1 | 17 Jul 24 01:13 UTC | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.31.0-beta.0 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=kvm2 | | | | | |
| delete | -p stopped-upgrade-275812 | stopped-upgrade-275812 | jenkins | v1.33.1 | 17 Jul 24 01:13 UTC | 17 Jul 24 01:13 UTC |
| start | -p cert-expiration-863133 | cert-expiration-863133 | jenkins | v1.33.1 | 17 Jul 24 01:13 UTC | |
| | --memory=2048 | | | | | |
| | --cert-expiration=3m | | | | | |
| | --driver=kvm2 | | | | | |
| delete | -p running-upgrade-644614 | running-upgrade-644614 | jenkins | v1.33.1 | 17 Jul 24 01:13 UTC | |
| start | -p pause-388106 | pause-388106 | jenkins | v1.33.1 | 17 Jul 24 01:14 UTC | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=kvm2 | | | | | |
|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/07/17 01:14:02
Running on machine: ubuntu-20-agent-7
Binary: Built with gc go1.22.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0717 01:14:02.387640 46310 out.go:291] Setting OutFile to fd 1 ...
I0717 01:14:02.387874 46310 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 01:14:02.387902 46310 out.go:304] Setting ErrFile to fd 2...
I0717 01:14:02.387918 46310 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 01:14:02.388381 46310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3824/.minikube/bin
I0717 01:14:02.388980 46310 out.go:298] Setting JSON to false
I0717 01:14:02.390058 46310 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3390,"bootTime":1721175452,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0717 01:14:02.390121 46310 start.go:139] virtualization: kvm guest
I0717 01:14:02.392365 46310 out.go:177] * [pause-388106] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
I0717 01:14:02.393763 46310 out.go:177] - MINIKUBE_LOCATION=19264
I0717 01:14:02.393758 46310 notify.go:220] Checking for updates...
I0717 01:14:02.396184 46310 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0717 01:14:02.397534 46310 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19264-3824/kubeconfig
I0717 01:14:02.398846 46310 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3824/.minikube
I0717 01:14:02.400169 46310 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0717 01:14:02.401417 46310 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0717 01:14:02.403057 46310 config.go:182] Loaded profile config "pause-388106": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 01:14:02.403496 46310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0717 01:14:02.403557 46310 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 01:14:02.419030 46310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37123
I0717 01:14:02.419479 46310 main.go:141] libmachine: () Calling .GetVersion
I0717 01:14:02.420215 46310 main.go:141] libmachine: Using API Version 1
I0717 01:14:02.420238 46310 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 01:14:02.420599 46310 main.go:141] libmachine: () Calling .GetMachineName
I0717 01:14:02.420758 46310 main.go:141] libmachine: (pause-388106) Calling .DriverName
I0717 01:14:02.421013 46310 driver.go:392] Setting default libvirt URI to qemu:///system
I0717 01:14:02.421296 46310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0717 01:14:02.421326 46310 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 01:14:02.435956 46310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33931
I0717 01:14:02.436414 46310 main.go:141] libmachine: () Calling .GetVersion
I0717 01:14:02.436886 46310 main.go:141] libmachine: Using API Version 1
I0717 01:14:02.436908 46310 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 01:14:02.437202 46310 main.go:141] libmachine: () Calling .GetMachineName
I0717 01:14:02.437557 46310 main.go:141] libmachine: (pause-388106) Calling .DriverName
I0717 01:14:08.441388 46310 out.go:177] * Using the kvm2 driver based on existing profile
I0717 01:14:08.443046 46310 start.go:297] selected driver: kvm2
I0717 01:14:08.443075 46310 start.go:901] validating driver "kvm2" against &{Name:pause-388106 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:pause-388106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 01:14:08.443204 46310 start.go:912] status for kvm2: {Installed:true Healthy:false Running:false NeedsImprovement:false Error:/usr/bin/virsh domcapabilities --virttype kvm timed out Reason: Fix:Check that the libvirtd service is running and the socket is ready Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/ Version:}
I0717 01:14:08.444701 46310 out.go:177]
W0717 01:14:08.446099 46310 out.go:239] X Exiting due to PROVIDER_KVM2_NOT_RUNNING: /usr/bin/virsh domcapabilities --virttype kvm timed out
W0717 01:14:08.446164 46310 out.go:239] * Suggestion: Check that the libvirtd service is running and the socket is ready
W0717 01:14:08.446211 46310 out.go:239] * Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
I0717 01:14:08.447502 46310 out.go:177]
I0717 01:14:08.940790 45687 main.go:141] libmachine: (kubernetes-upgrade-070449) Calling .DriverName
I0717 01:14:08.940967 45687 kapi.go:59] client config for kubernetes-upgrade-070449: &rest.Config{Host:"https://192.168.61.225:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19264-3824/.minikube/profiles/kubernetes-upgrade-070449/client.crt", KeyFile:"/home/jenkins/minikube-integration/19264-3824/.minikube/profiles/kubernetes-upgrade-070449/client.key", CAFile:"/home/jenkins/minikube-integration/19264-3824/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d01f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0717 01:14:08.941387 45687 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-070449"
W0717 01:14:08.941414 45687 addons.go:243] addon default-storageclass should already be in state true
I0717 01:14:08.941446 45687 host.go:66] Checking if "kubernetes-upgrade-070449" exists ...
I0717 01:14:08.941838 45687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0717 01:14:08.941883 45687 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 01:14:08.943764 45687 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
==> Docker <==
Jul 17 01:12:59 pause-388106 dockerd[1197]: time="2024-07-17T01:12:59.639087793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 17 01:12:59 pause-388106 dockerd[1197]: time="2024-07-17T01:12:59.639266327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 17 01:12:59 pause-388106 dockerd[1197]: time="2024-07-17T01:12:59.644643607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 17 01:12:59 pause-388106 dockerd[1197]: time="2024-07-17T01:12:59.644851760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 17 01:12:59 pause-388106 dockerd[1197]: time="2024-07-17T01:12:59.644878589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 17 01:12:59 pause-388106 dockerd[1197]: time="2024-07-17T01:12:59.645084434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 17 01:13:20 pause-388106 dockerd[1197]: time="2024-07-17T01:13:20.696643517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 17 01:13:20 pause-388106 dockerd[1197]: time="2024-07-17T01:13:20.704078999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 17 01:13:20 pause-388106 dockerd[1197]: time="2024-07-17T01:13:20.704148491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 17 01:13:20 pause-388106 dockerd[1197]: time="2024-07-17T01:13:20.704438839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 17 01:13:20 pause-388106 dockerd[1197]: time="2024-07-17T01:13:20.839012903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 17 01:13:20 pause-388106 dockerd[1197]: time="2024-07-17T01:13:20.839330429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 17 01:13:20 pause-388106 dockerd[1197]: time="2024-07-17T01:13:20.839574960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 17 01:13:20 pause-388106 dockerd[1197]: time="2024-07-17T01:13:20.839760495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 17 01:13:20 pause-388106 cri-dockerd[1088]: time="2024-07-17T01:13:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/41393f69a62ae6e8f232ef6e0f09f3836104ce833d0c7ec410bdf87cb79ef093/resolv.conf as [nameserver 192.168.122.1]"
Jul 17 01:13:20 pause-388106 cri-dockerd[1088]: time="2024-07-17T01:13:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f6fad5374ea5a7f4c7d870477ccb6f4b3733ad1cd2354e8f1e27b0bb09b54f04/resolv.conf as [nameserver 192.168.122.1]"
Jul 17 01:13:21 pause-388106 dockerd[1197]: time="2024-07-17T01:13:21.105123595Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 17 01:13:21 pause-388106 dockerd[1197]: time="2024-07-17T01:13:21.105242947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 17 01:13:21 pause-388106 dockerd[1197]: time="2024-07-17T01:13:21.105264283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 17 01:13:21 pause-388106 dockerd[1197]: time="2024-07-17T01:13:21.105442768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 17 01:13:21 pause-388106 dockerd[1197]: time="2024-07-17T01:13:21.158356347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 17 01:13:21 pause-388106 dockerd[1197]: time="2024-07-17T01:13:21.158744297Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 17 01:13:21 pause-388106 dockerd[1197]: time="2024-07-17T01:13:21.158845942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 17 01:13:21 pause-388106 dockerd[1197]: time="2024-07-17T01:13:21.159047764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 17 01:13:25 pause-388106 cri-dockerd[1088]: time="2024-07-17T01:13:25Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
458717a56c6da cbb01a7bd410d 48 seconds ago Running coredns 0 41393f69a62ae coredns-7db6d8ff4d-ht2hc
2f713e9e8c2b9 53c535741fb44 49 seconds ago Running kube-proxy 0 f6fad5374ea5a kube-proxy-tq46p
7a9603e186b69 e874818b3caac About a minute ago Running kube-controller-manager 0 fb54e2dce5d6e kube-controller-manager-pause-388106
066a862fea1fc 3861cfcd7c04c About a minute ago Running etcd 0 c652a70321454 etcd-pause-388106
58a4a7f0579b2 7820c83aa1394 About a minute ago Running kube-scheduler 0 b25b8dc7f6b6a kube-scheduler-pause-388106
9ed9d72540483 56ce0fd9fb532 About a minute ago Running kube-apiserver 0 17c9c003d9ca2 kube-apiserver-pause-388106
==> coredns [458717a56c6d] <==
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
[INFO] 127.0.0.1:57940 - 5123 "HINFO IN 8294968019194409221.7809212883871525722. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.082188685s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: Trace[1992796762]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 01:13:21.434) (total time: 30003ms):
Trace[1992796762]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (01:13:51.437)
Trace[1992796762]: [30.003216334s] [30.003216334s] END
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: Trace[313136446]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 01:13:21.436) (total time: 30001ms):
Trace[313136446]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (01:13:51.437)
Trace[313136446]: [30.001480858s] [30.001480858s] END
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: Trace[207147647]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 01:13:21.434) (total time: 30004ms):
Trace[207147647]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (01:13:51.438)
Trace[207147647]: [30.004701003s] [30.004701003s] END
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
==> describe nodes <==
Name: pause-388106
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-388106
kubernetes.io/os=linux
minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
minikube.k8s.io/name=pause-388106
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_07_17T01_13_06_0700
minikube.k8s.io/version=v1.33.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 17 Jul 2024 01:13:02 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-388106
AcquireTime: <unset>
RenewTime: Wed, 17 Jul 2024 01:14:06 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 17 Jul 2024 01:13:25 +0000 Wed, 17 Jul 2024 01:13:00 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 17 Jul 2024 01:13:25 +0000 Wed, 17 Jul 2024 01:13:00 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 17 Jul 2024 01:13:25 +0000 Wed, 17 Jul 2024 01:13:00 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 17 Jul 2024 01:13:25 +0000 Wed, 17 Jul 2024 01:13:06 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.217
Hostname: pause-388106
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2015704Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2015704Ki
pods: 110
System Info:
Machine ID: fa91b65ddb984a00b3040f590c8bd1fd
System UUID: fa91b65d-db98-4a00-b304-0f590c8bd1fd
Boot ID: d0f90c09-50b3-4f03-afee-e99555dd518a
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://27.0.3
Kubelet Version: v1.30.2
Kube-Proxy Version: v1.30.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-7db6d8ff4d-ht2hc 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 50s
kube-system etcd-pause-388106 100m (5%) 0 (0%) 100Mi (5%) 0 (0%) 64s
kube-system kube-apiserver-pause-388106 250m (12%) 0 (0%) 0 (0%) 0 (0%) 65s
kube-system kube-controller-manager-pause-388106 200m (10%) 0 (0%) 0 (0%) 0 (0%) 66s
kube-system kube-proxy-tq46p 0 (0%) 0 (0%) 0 (0%) 0 (0%) 50s
kube-system kube-scheduler-pause-388106 100m (5%) 0 (0%) 0 (0%) 0 (0%) 64s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 170Mi (8%) 170Mi (8%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 48s kube-proxy
Normal NodeHasSufficientMemory 71s (x8 over 71s) kubelet Node pause-388106 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 71s (x8 over 71s) kubelet Node pause-388106 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 71s (x7 over 71s) kubelet Node pause-388106 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 71s kubelet Updated Node Allocatable limit across pods
Normal Starting 64s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 64s kubelet Node pause-388106 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 64s kubelet Node pause-388106 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 64s kubelet Node pause-388106 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 64s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 63s kubelet Node pause-388106 status is now: NodeReady
Normal RegisteredNode 51s node-controller Node pause-388106 event: Registered Node pause-388106 in Controller
==> dmesg <==
[ +4.678533] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +12.361118] systemd-fstab-generator[502]: Ignoring "noauto" option for root device
[ +0.073074] kauditd_printk_skb: 1 callbacks suppressed
[ +0.076268] systemd-fstab-generator[514]: Ignoring "noauto" option for root device
[ +2.297781] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
[ +0.344724] systemd-fstab-generator[800]: Ignoring "noauto" option for root device
[ +0.183921] systemd-fstab-generator[812]: Ignoring "noauto" option for root device
[ +0.175073] systemd-fstab-generator[826]: Ignoring "noauto" option for root device
[ +2.224410] kauditd_printk_skb: 189 callbacks suppressed
[ +0.363870] systemd-fstab-generator[1041]: Ignoring "noauto" option for root device
[ +0.159342] systemd-fstab-generator[1053]: Ignoring "noauto" option for root device
[ +0.150984] systemd-fstab-generator[1065]: Ignoring "noauto" option for root device
[ +0.179070] systemd-fstab-generator[1080]: Ignoring "noauto" option for root device
[ +4.335851] systemd-fstab-generator[1183]: Ignoring "noauto" option for root device
[ +0.069771] kauditd_printk_skb: 138 callbacks suppressed
[ +2.729038] systemd-fstab-generator[1425]: Ignoring "noauto" option for root device
[ +3.333214] kauditd_printk_skb: 82 callbacks suppressed
[ +1.451848] systemd-fstab-generator[1617]: Ignoring "noauto" option for root device
[Jul17 01:13] systemd-fstab-generator[2023]: Ignoring "noauto" option for root device
[ +0.125197] kauditd_printk_skb: 65 callbacks suppressed
[ +13.952785] systemd-fstab-generator[2252]: Ignoring "noauto" option for root device
[ +0.117458] kauditd_printk_skb: 12 callbacks suppressed
[ +41.382177] kauditd_printk_skb: 69 callbacks suppressed
==> etcd [066a862fea1f] <==
{"level":"info","ts":"2024-07-17T01:13:00.038628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a09c9983ac28f1fd elected leader a09c9983ac28f1fd at term 2"}
{"level":"info","ts":"2024-07-17T01:13:00.043546Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-07-17T01:13:00.045341Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a09c9983ac28f1fd","local-member-attributes":"{Name:pause-388106 ClientURLs:[https://192.168.39.217:2379]}","request-path":"/0/members/a09c9983ac28f1fd/attributes","cluster-id":"8f39477865362797","publish-timeout":"7s"}
{"level":"info","ts":"2024-07-17T01:13:00.046672Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-07-17T01:13:00.050289Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-07-17T01:13:00.057019Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-07-17T01:13:00.057383Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-07-17T01:13:00.057566Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8f39477865362797","local-member-id":"a09c9983ac28f1fd","cluster-version":"3.5"}
{"level":"info","ts":"2024-07-17T01:13:00.058051Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-07-17T01:13:00.058384Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-07-17T01:13:00.061056Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.217:2379"}
{"level":"info","ts":"2024-07-17T01:13:00.072664Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-07-17T01:13:12.724613Z","caller":"traceutil/trace.go:171","msg":"trace[540504511] transaction","detail":"{read_only:false; response_revision:321; number_of_response:1; }","duration":"118.860889ms","start":"2024-07-17T01:13:12.605721Z","end":"2024-07-17T01:13:12.724581Z","steps":["trace[540504511] 'process raft request' (duration: 118.643954ms)"],"step_count":1}
{"level":"info","ts":"2024-07-17T01:13:13.684054Z","caller":"traceutil/trace.go:171","msg":"trace[2102142448] transaction","detail":"{read_only:false; response_revision:322; number_of_response:1; }","duration":"174.396605ms","start":"2024-07-17T01:13:13.509582Z","end":"2024-07-17T01:13:13.683979Z","steps":["trace[2102142448] 'process raft request' (duration: 174.179316ms)"],"step_count":1}
{"level":"warn","ts":"2024-07-17T01:13:33.492552Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.997939ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-ht2hc\" ","response":"range_response_count:1 size:4967"}
{"level":"info","ts":"2024-07-17T01:13:33.493311Z","caller":"traceutil/trace.go:171","msg":"trace[1940101998] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-ht2hc; range_end:; response_count:1; response_revision:410; }","duration":"131.995051ms","start":"2024-07-17T01:13:33.361284Z","end":"2024-07-17T01:13:33.493279Z","steps":["trace[1940101998] 'range keys from in-memory index tree' (duration: 130.892524ms)"],"step_count":1}
{"level":"warn","ts":"2024-07-17T01:13:34.641123Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.488202ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17437252479039383064 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.217\" mod_revision:406 > success:<request_put:<key:\"/registry/masterleases/192.168.39.217\" value_size:67 lease:8213880442184607253 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.217\" > >>","response":"size:16"}
{"level":"info","ts":"2024-07-17T01:13:34.641586Z","caller":"traceutil/trace.go:171","msg":"trace[1553627594] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"241.287403ms","start":"2024-07-17T01:13:34.400283Z","end":"2024-07-17T01:13:34.641571Z","steps":["trace[1553627594] 'process raft request' (duration: 124.702444ms)","trace[1553627594] 'compare' (duration: 115.320861ms)"],"step_count":2}
{"level":"warn","ts":"2024-07-17T01:13:34.64225Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.196458ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-388106\" ","response":"range_response_count:1 size:4346"}
{"level":"info","ts":"2024-07-17T01:13:34.642641Z","caller":"traceutil/trace.go:171","msg":"trace[1432977954] range","detail":"{range_begin:/registry/minions/pause-388106; range_end:; response_count:1; response_revision:411; }","duration":"240.617804ms","start":"2024-07-17T01:13:34.402012Z","end":"2024-07-17T01:13:34.64263Z","steps":["trace[1432977954] 'agreement among raft nodes before linearized reading' (duration: 240.123113ms)"],"step_count":1}
{"level":"info","ts":"2024-07-17T01:13:34.642075Z","caller":"traceutil/trace.go:171","msg":"trace[1829112761] linearizableReadLoop","detail":"{readStateIndex:431; appliedIndex:430; }","duration":"239.470522ms","start":"2024-07-17T01:13:34.402042Z","end":"2024-07-17T01:13:34.641512Z","steps":["trace[1829112761] 'read index received' (duration: 123.006541ms)","trace[1829112761] 'applied index is now lower than readState.Index' (duration: 116.462471ms)"],"step_count":2}
{"level":"warn","ts":"2024-07-17T01:13:35.273948Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"412.523487ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-ht2hc\" ","response":"range_response_count:1 size:4967"}
{"level":"info","ts":"2024-07-17T01:13:35.274016Z","caller":"traceutil/trace.go:171","msg":"trace[1682763205] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-ht2hc; range_end:; response_count:1; response_revision:411; }","duration":"412.641201ms","start":"2024-07-17T01:13:34.861362Z","end":"2024-07-17T01:13:35.274003Z","steps":["trace[1682763205] 'range keys from in-memory index tree' (duration: 412.308733ms)"],"step_count":1}
{"level":"warn","ts":"2024-07-17T01:13:35.27406Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T01:13:34.861343Z","time spent":"412.697489ms","remote":"127.0.0.1:46384","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":4991,"request content":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-ht2hc\" "}
{"level":"info","ts":"2024-07-17T01:13:36.110192Z","caller":"traceutil/trace.go:171","msg":"trace[367546498] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"158.78352ms","start":"2024-07-17T01:13:35.951387Z","end":"2024-07-17T01:13:36.11017Z","steps":["trace[367546498] 'process raft request' (duration: 158.366013ms)"],"step_count":1}
==> kernel <==
01:14:09 up 1 min, 0 users, load average: 0.77, 0.33, 0.12
Linux pause-388106 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [9ed9d7254048] <==
I0717 01:13:02.319980 1 policy_source.go:224] refreshing policies
E0717 01:13:02.337126 1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
I0717 01:13:02.354185 1 shared_informer.go:320] Caches are synced for crd-autoregister
I0717 01:13:02.354483 1 aggregator.go:165] initial CRD sync complete...
I0717 01:13:02.354745 1 autoregister_controller.go:141] Starting autoregister controller
I0717 01:13:02.354895 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0717 01:13:02.354999 1 cache.go:39] Caches are synced for autoregister controller
E0717 01:13:02.363320 1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
I0717 01:13:02.399580 1 controller.go:615] quota admission added evaluator for: namespaces
I0717 01:13:02.546791 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0717 01:13:03.200923 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0717 01:13:03.212697 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0717 01:13:03.212980 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0717 01:13:04.022394 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0717 01:13:04.095307 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0717 01:13:04.215935 1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
W0717 01:13:04.225495 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217]
I0717 01:13:04.227915 1 controller.go:615] quota admission added evaluator for: endpoints
I0717 01:13:04.236625 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0717 01:13:04.280242 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0717 01:13:05.370312 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0717 01:13:05.400870 1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
I0717 01:13:05.439238 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0717 01:13:19.257153 1 controller.go:615] quota admission added evaluator for: replicasets.apps
I0717 01:13:19.298548 1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
==> kube-controller-manager [7a9603e186b6] <==
I0717 01:13:19.023531 1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
I0717 01:13:19.025136 1 shared_informer.go:320] Caches are synced for job
I0717 01:13:19.029491 1 shared_informer.go:320] Caches are synced for ephemeral
I0717 01:13:19.030309 1 shared_informer.go:320] Caches are synced for PVC protection
I0717 01:13:19.040378 1 shared_informer.go:320] Caches are synced for bootstrap_signer
I0717 01:13:19.130592 1 shared_informer.go:320] Caches are synced for daemon sets
I0717 01:13:19.144531 1 shared_informer.go:320] Caches are synced for resource quota
I0717 01:13:19.176745 1 shared_informer.go:320] Caches are synced for stateful set
I0717 01:13:19.182267 1 shared_informer.go:320] Caches are synced for resource quota
I0717 01:13:19.625673 1 shared_informer.go:320] Caches are synced for garbage collector
I0717 01:13:19.625802 1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
I0717 01:13:19.666043 1 shared_informer.go:320] Caches are synced for garbage collector
I0717 01:13:19.668294 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="351.953845ms"
I0717 01:13:19.722541 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.681985ms"
I0717 01:13:19.725758 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.026µs"
I0717 01:13:19.772511 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="124.677µs"
I0717 01:13:19.904361 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="27.581176ms"
I0717 01:13:19.917988 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.364949ms"
I0717 01:13:19.918937 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="873.777µs"
I0717 01:13:21.684465 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="94.259µs"
I0717 01:13:21.699258 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.986µs"
I0717 01:13:21.708252 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="108.842µs"
I0717 01:13:21.726284 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="53.842µs"
I0717 01:14:00.583960 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="13.088591ms"
I0717 01:14:00.585599 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="62.44µs"
==> kube-proxy [2f713e9e8c2b] <==
I0717 01:13:21.309141 1 server_linux.go:69] "Using iptables proxy"
I0717 01:13:21.324506 1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.217"]
I0717 01:13:21.404894 1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
I0717 01:13:21.404944 1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0717 01:13:21.404963 1 server_linux.go:165] "Using iptables Proxier"
I0717 01:13:21.413983 1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0717 01:13:21.414813 1 server.go:872] "Version info" version="v1.30.2"
I0717 01:13:21.414841 1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0717 01:13:21.418675 1 config.go:192] "Starting service config controller"
I0717 01:13:21.418779 1 shared_informer.go:313] Waiting for caches to sync for service config
I0717 01:13:21.419261 1 config.go:319] "Starting node config controller"
I0717 01:13:21.424789 1 shared_informer.go:313] Waiting for caches to sync for node config
I0717 01:13:21.425139 1 shared_informer.go:320] Caches are synced for node config
I0717 01:13:21.419393 1 config.go:101] "Starting endpoint slice config controller"
I0717 01:13:21.425348 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0717 01:13:21.425384 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0717 01:13:21.519182 1 shared_informer.go:320] Caches are synced for service config
==> kube-scheduler [58a4a7f0579b] <==
E0717 01:13:02.305333 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0717 01:13:02.306293 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0717 01:13:03.111378 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0717 01:13:03.111457 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0717 01:13:03.137025 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0717 01:13:03.137071 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0717 01:13:03.144734 1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0717 01:13:03.144850 1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0717 01:13:03.145641 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0717 01:13:03.145790 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0717 01:13:03.153532 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0717 01:13:03.153686 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0717 01:13:03.229864 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0717 01:13:03.229918 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0717 01:13:03.369964 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0717 01:13:03.370035 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0717 01:13:03.526364 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0717 01:13:03.526447 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0717 01:13:03.575974 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0717 01:13:03.576044 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0717 01:13:03.630814 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0717 01:13:03.630870 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0717 01:13:03.727772 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0717 01:13:03.727819 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
I0717 01:13:06.389774 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Jul 17 01:13:19 pause-388106 kubelet[2030]: I0717 01:13:19.549925 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbdn7\" (UniqueName: \"kubernetes.io/projected/cb7f7bb7-b068-40d4-ba02-3857115bc334-kube-api-access-fbdn7\") pod \"kube-proxy-tq46p\" (UID: \"cb7f7bb7-b068-40d4-ba02-3857115bc334\") " pod="kube-system/kube-proxy-tq46p"
Jul 17 01:13:19 pause-388106 kubelet[2030]: I0717 01:13:19.549948 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cb7f7bb7-b068-40d4-ba02-3857115bc334-kube-proxy\") pod \"kube-proxy-tq46p\" (UID: \"cb7f7bb7-b068-40d4-ba02-3857115bc334\") " pod="kube-system/kube-proxy-tq46p"
Jul 17 01:13:19 pause-388106 kubelet[2030]: I0717 01:13:19.604751 2030 topology_manager.go:215] "Topology Admit Handler" podUID="2e39d239-4597-4da6-827e-ea28a60e5c5e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nnt9s"
Jul 17 01:13:19 pause-388106 kubelet[2030]: I0717 01:13:19.646344 2030 topology_manager.go:215] "Topology Admit Handler" podUID="85ca6880-712f-4bd7-9302-013bba5fc11c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ht2hc"
Jul 17 01:13:19 pause-388106 kubelet[2030]: I0717 01:13:19.650587 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e39d239-4597-4da6-827e-ea28a60e5c5e-config-volume\") pod \"coredns-7db6d8ff4d-nnt9s\" (UID: \"2e39d239-4597-4da6-827e-ea28a60e5c5e\") " pod="kube-system/coredns-7db6d8ff4d-nnt9s"
Jul 17 01:13:19 pause-388106 kubelet[2030]: I0717 01:13:19.650794 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4snz\" (UniqueName: \"kubernetes.io/projected/2e39d239-4597-4da6-827e-ea28a60e5c5e-kube-api-access-c4snz\") pod \"coredns-7db6d8ff4d-nnt9s\" (UID: \"2e39d239-4597-4da6-827e-ea28a60e5c5e\") " pod="kube-system/coredns-7db6d8ff4d-nnt9s"
Jul 17 01:13:19 pause-388106 kubelet[2030]: I0717 01:13:19.754052 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx2cl\" (UniqueName: \"kubernetes.io/projected/85ca6880-712f-4bd7-9302-013bba5fc11c-kube-api-access-cx2cl\") pod \"coredns-7db6d8ff4d-ht2hc\" (UID: \"85ca6880-712f-4bd7-9302-013bba5fc11c\") " pod="kube-system/coredns-7db6d8ff4d-ht2hc"
Jul 17 01:13:19 pause-388106 kubelet[2030]: I0717 01:13:19.754265 2030 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85ca6880-712f-4bd7-9302-013bba5fc11c-config-volume\") pod \"coredns-7db6d8ff4d-ht2hc\" (UID: \"85ca6880-712f-4bd7-9302-013bba5fc11c\") " pod="kube-system/coredns-7db6d8ff4d-ht2hc"
Jul 17 01:13:19 pause-388106 kubelet[2030]: E0717 01:13:19.888161 2030 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-c4snz], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/coredns-7db6d8ff4d-nnt9s" podUID="2e39d239-4597-4da6-827e-ea28a60e5c5e"
Jul 17 01:13:20 pause-388106 kubelet[2030]: I0717 01:13:20.763103 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4snz\" (UniqueName: \"kubernetes.io/projected/2e39d239-4597-4da6-827e-ea28a60e5c5e-kube-api-access-c4snz\") pod \"2e39d239-4597-4da6-827e-ea28a60e5c5e\" (UID: \"2e39d239-4597-4da6-827e-ea28a60e5c5e\") "
Jul 17 01:13:20 pause-388106 kubelet[2030]: I0717 01:13:20.763219 2030 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e39d239-4597-4da6-827e-ea28a60e5c5e-config-volume\") pod \"2e39d239-4597-4da6-827e-ea28a60e5c5e\" (UID: \"2e39d239-4597-4da6-827e-ea28a60e5c5e\") "
Jul 17 01:13:20 pause-388106 kubelet[2030]: I0717 01:13:20.764015 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e39d239-4597-4da6-827e-ea28a60e5c5e-config-volume" (OuterVolumeSpecName: "config-volume") pod "2e39d239-4597-4da6-827e-ea28a60e5c5e" (UID: "2e39d239-4597-4da6-827e-ea28a60e5c5e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 17 01:13:20 pause-388106 kubelet[2030]: I0717 01:13:20.765350 2030 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e39d239-4597-4da6-827e-ea28a60e5c5e-kube-api-access-c4snz" (OuterVolumeSpecName: "kube-api-access-c4snz") pod "2e39d239-4597-4da6-827e-ea28a60e5c5e" (UID: "2e39d239-4597-4da6-827e-ea28a60e5c5e"). InnerVolumeSpecName "kube-api-access-c4snz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 17 01:13:20 pause-388106 kubelet[2030]: I0717 01:13:20.864844 2030 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-c4snz\" (UniqueName: \"kubernetes.io/projected/2e39d239-4597-4da6-827e-ea28a60e5c5e-kube-api-access-c4snz\") on node \"pause-388106\" DevicePath \"\""
Jul 17 01:13:20 pause-388106 kubelet[2030]: I0717 01:13:20.866768 2030 reconciler_common.go:289] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e39d239-4597-4da6-827e-ea28a60e5c5e-config-volume\") on node \"pause-388106\" DevicePath \"\""
Jul 17 01:13:21 pause-388106 kubelet[2030]: I0717 01:13:21.682587 2030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tq46p" podStartSLOduration=2.682545123 podStartE2EDuration="2.682545123s" podCreationTimestamp="2024-07-17 01:13:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 01:13:21.651281447 +0000 UTC m=+16.469328148" watchObservedRunningTime="2024-07-17 01:13:21.682545123 +0000 UTC m=+16.500591824"
Jul 17 01:13:23 pause-388106 kubelet[2030]: I0717 01:13:23.334708 2030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e39d239-4597-4da6-827e-ea28a60e5c5e" path="/var/lib/kubelet/pods/2e39d239-4597-4da6-827e-ea28a60e5c5e/volumes"
Jul 17 01:13:25 pause-388106 kubelet[2030]: I0717 01:13:25.601795 2030 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Jul 17 01:13:25 pause-388106 kubelet[2030]: I0717 01:13:25.604150 2030 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Jul 17 01:14:00 pause-388106 kubelet[2030]: I0717 01:14:00.567712 2030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-ht2hc" podStartSLOduration=41.567694072 podStartE2EDuration="41.567694072s" podCreationTimestamp="2024-07-17 01:13:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 01:13:21.724092029 +0000 UTC m=+16.542138730" watchObservedRunningTime="2024-07-17 01:14:00.567694072 +0000 UTC m=+55.385740772"
Jul 17 01:14:05 pause-388106 kubelet[2030]: E0717 01:14:05.386049 2030 iptables.go:577] "Could not set up iptables canary" err=<
Jul 17 01:14:05 pause-388106 kubelet[2030]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Jul 17 01:14:05 pause-388106 kubelet[2030]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Jul 17 01:14:05 pause-388106 kubelet[2030]: Perhaps ip6tables or your kernel needs to be upgraded.
Jul 17 01:14:05 pause-388106 kubelet[2030]: > table="nat" chain="KUBE-KUBELET-CANARY"
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-388106 -n pause-388106
helpers_test.go:261: (dbg) Run: kubectl --context pause-388106 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (8.19s)