=== RUN TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run: out/minikube-linux-amd64 start -p pause-927729 --alsologtostderr -v=1 --driver=kvm2
E0223 05:04:58.132786 10999 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3952/.minikube/profiles/gvisor-019881/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-927729 --alsologtostderr -v=1 --driver=kvm2 : (1m17.254901817s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got:
-- stdout --
* [pause-927729] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=15909
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/15909-3952/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3952/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on existing profile
* Starting control plane node pause-927729 in cluster pause-927729
* Updating the running kvm2 "pause-927729" VM ...
* Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
* Configuring bridge CNI (Container Networking Interface) ...
* Enabled addons:
* Verifying Kubernetes components...
* Done! kubectl is now configured to use "pause-927729" cluster and "default" namespace by default
-- /stdout --
** stderr **
I0223 05:04:40.397675 33397 out.go:296] Setting OutFile to fd 1 ...
I0223 05:04:40.397812 33397 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0223 05:04:40.397821 33397 out.go:309] Setting ErrFile to fd 2...
I0223 05:04:40.397826 33397 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0223 05:04:40.397980 33397 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-3952/.minikube/bin
I0223 05:04:40.398620 33397 out.go:303] Setting JSON to false
I0223 05:04:40.399492 33397 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2834,"bootTime":1677125847,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0223 05:04:40.399545 33397 start.go:135] virtualization: kvm guest
I0223 05:04:40.402056 33397 out.go:177] * [pause-927729] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
I0223 05:04:40.404252 33397 out.go:177] - MINIKUBE_LOCATION=15909
I0223 05:04:40.406018 33397 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0223 05:04:40.404603 33397 notify.go:220] Checking for updates...
I0223 05:04:40.409256 33397 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15909-3952/kubeconfig
I0223 05:04:40.411190 33397 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3952/.minikube
I0223 05:04:40.412951 33397 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0223 05:04:40.414976 33397 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0223 05:04:40.417006 33397 config.go:182] Loaded profile config "pause-927729": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0223 05:04:40.417545 33397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0223 05:04:40.417631 33397 main.go:141] libmachine: Launching plugin server for driver kvm2
I0223 05:04:40.434209 33397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34473
I0223 05:04:40.434838 33397 main.go:141] libmachine: () Calling .GetVersion
I0223 05:04:40.435370 33397 main.go:141] libmachine: Using API Version 1
I0223 05:04:40.435393 33397 main.go:141] libmachine: () Calling .SetConfigRaw
I0223 05:04:40.435782 33397 main.go:141] libmachine: () Calling .GetMachineName
I0223 05:04:40.435986 33397 main.go:141] libmachine: (pause-927729) Calling .DriverName
I0223 05:04:40.436251 33397 driver.go:365] Setting default libvirt URI to qemu:///system
I0223 05:04:40.436622 33397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0223 05:04:40.436662 33397 main.go:141] libmachine: Launching plugin server for driver kvm2
I0223 05:04:40.453969 33397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39747
I0223 05:04:40.455836 33397 main.go:141] libmachine: () Calling .GetVersion
I0223 05:04:40.456882 33397 main.go:141] libmachine: Using API Version 1
I0223 05:04:40.456904 33397 main.go:141] libmachine: () Calling .SetConfigRaw
I0223 05:04:40.457327 33397 main.go:141] libmachine: () Calling .GetMachineName
I0223 05:04:40.457537 33397 main.go:141] libmachine: (pause-927729) Calling .DriverName
I0223 05:04:40.495654 33397 out.go:177] * Using the kvm2 driver based on existing profile
I0223 05:04:40.497125 33397 start.go:296] selected driver: kvm2
I0223 05:04:40.497148 33397 start.go:857] validating driver "kvm2" against &{Name:pause-927729 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:pause-927729 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.54 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0223 05:04:40.497305 33397 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0223 05:04:40.497638 33397 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0223 05:04:40.497736 33397 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15909-3952/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0223 05:04:40.513212 33397 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.29.0
I0223 05:04:40.513873 33397 cni.go:84] Creating CNI manager for ""
I0223 05:04:40.513897 33397 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0223 05:04:40.513905 33397 start_flags.go:319] config:
{Name:pause-927729 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:pause-927729 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.54 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0223 05:04:40.514028 33397 iso.go:125] acquiring lock: {Name:mkaa0353ce7f481d2e27b6d0b7fef8218290f843 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0223 05:04:40.516681 33397 out.go:177] * Starting control plane node pause-927729 in cluster pause-927729
I0223 05:04:40.518166 33397 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0223 05:04:40.518208 33397 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15909-3952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
I0223 05:04:40.518228 33397 cache.go:57] Caching tarball of preloaded images
I0223 05:04:40.518309 33397 preload.go:174] Found /home/jenkins/minikube-integration/15909-3952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0223 05:04:40.518318 33397 cache.go:60] Finished verifying existence of preloaded tar for v1.26.1 on docker
I0223 05:04:40.518435 33397 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3952/.minikube/profiles/pause-927729/config.json ...
I0223 05:04:40.518597 33397 cache.go:193] Successfully downloaded all kic artifacts
I0223 05:04:40.518616 33397 start.go:364] acquiring machines lock for pause-927729: {Name:mk80232e5ac6be7873ac7f01ae80ef9193e4980e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0223 05:04:42.563528 33397 start.go:368] acquired machines lock for "pause-927729" in 2.044892146s
I0223 05:04:42.563583 33397 start.go:96] Skipping create...Using existing machine configuration
I0223 05:04:42.563589 33397 fix.go:55] fixHost starting:
I0223 05:04:42.563975 33397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0223 05:04:42.564011 33397 main.go:141] libmachine: Launching plugin server for driver kvm2
I0223 05:04:42.582365 33397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34791
I0223 05:04:42.582770 33397 main.go:141] libmachine: () Calling .GetVersion
I0223 05:04:42.583254 33397 main.go:141] libmachine: Using API Version 1
I0223 05:04:42.583274 33397 main.go:141] libmachine: () Calling .SetConfigRaw
I0223 05:04:42.583546 33397 main.go:141] libmachine: () Calling .GetMachineName
I0223 05:04:42.583890 33397 main.go:141] libmachine: (pause-927729) Calling .DriverName
I0223 05:04:42.584008 33397 main.go:141] libmachine: (pause-927729) Calling .GetState
I0223 05:04:42.585733 33397 fix.go:103] recreateIfNeeded on pause-927729: state=Running err=<nil>
W0223 05:04:42.585768 33397 fix.go:129] unexpected machine state, will restart: <nil>
I0223 05:04:42.587959 33397 out.go:177] * Updating the running kvm2 "pause-927729" VM ...
I0223 05:04:42.589474 33397 machine.go:88] provisioning docker machine ...
I0223 05:04:42.589494 33397 main.go:141] libmachine: (pause-927729) Calling .DriverName
I0223 05:04:42.589713 33397 main.go:141] libmachine: (pause-927729) Calling .GetMachineName
I0223 05:04:42.589834 33397 buildroot.go:166] provisioning hostname "pause-927729"
I0223 05:04:42.589850 33397 main.go:141] libmachine: (pause-927729) Calling .GetMachineName
I0223 05:04:42.589966 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHHostname
I0223 05:04:42.592576 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:04:42.593056 33397 main.go:141] libmachine: (pause-927729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:c7:cd", ip: ""} in network mk-pause-927729: {Iface:virbr2 ExpiryTime:2023-02-23 06:03:34 +0000 UTC Type:0 Mac:52:54:00:33:c7:cd Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-927729 Clientid:01:52:54:00:33:c7:cd}
I0223 05:04:42.593079 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined IP address 192.168.50.54 and MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:04:42.593207 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHPort
I0223 05:04:42.593394 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHKeyPath
I0223 05:04:42.593706 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHKeyPath
I0223 05:04:42.593871 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHUsername
I0223 05:04:42.594043 33397 main.go:141] libmachine: Using SSH client type: native
I0223 05:04:42.594649 33397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.50.54 22 <nil> <nil>}
I0223 05:04:42.594663 33397 main.go:141] libmachine: About to run SSH command:
sudo hostname pause-927729 && echo "pause-927729" | sudo tee /etc/hostname
I0223 05:04:42.776228 33397 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-927729
I0223 05:04:42.776262 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHHostname
I0223 05:04:42.779438 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:04:42.779801 33397 main.go:141] libmachine: (pause-927729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:c7:cd", ip: ""} in network mk-pause-927729: {Iface:virbr2 ExpiryTime:2023-02-23 06:03:34 +0000 UTC Type:0 Mac:52:54:00:33:c7:cd Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-927729 Clientid:01:52:54:00:33:c7:cd}
I0223 05:04:42.779833 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined IP address 192.168.50.54 and MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:04:42.780139 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHPort
I0223 05:04:42.780311 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHKeyPath
I0223 05:04:42.780496 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHKeyPath
I0223 05:04:42.780665 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHUsername
I0223 05:04:42.780877 33397 main.go:141] libmachine: Using SSH client type: native
I0223 05:04:42.781469 33397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.50.54 22 <nil> <nil>}
I0223 05:04:42.781500 33397 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\spause-927729' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-927729/g' /etc/hosts;
else
echo '127.0.1.1 pause-927729' | sudo tee -a /etc/hosts;
fi
fi
I0223 05:04:42.938440 33397 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0223 05:04:42.938464 33397 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15909-3952/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-3952/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-3952/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-3952/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-3952/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-3952/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-3952/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-3952/.minikube}
I0223 05:04:42.938480 33397 buildroot.go:174] setting up certificates
I0223 05:04:42.938487 33397 provision.go:83] configureAuth start
I0223 05:04:42.938502 33397 main.go:141] libmachine: (pause-927729) Calling .GetMachineName
I0223 05:04:42.938772 33397 main.go:141] libmachine: (pause-927729) Calling .GetIP
I0223 05:04:42.942256 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:04:42.942672 33397 main.go:141] libmachine: (pause-927729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:c7:cd", ip: ""} in network mk-pause-927729: {Iface:virbr2 ExpiryTime:2023-02-23 06:03:34 +0000 UTC Type:0 Mac:52:54:00:33:c7:cd Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-927729 Clientid:01:52:54:00:33:c7:cd}
I0223 05:04:42.942700 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined IP address 192.168.50.54 and MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:04:42.942995 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHHostname
I0223 05:04:42.945545 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:04:42.945951 33397 main.go:141] libmachine: (pause-927729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:c7:cd", ip: ""} in network mk-pause-927729: {Iface:virbr2 ExpiryTime:2023-02-23 06:03:34 +0000 UTC Type:0 Mac:52:54:00:33:c7:cd Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-927729 Clientid:01:52:54:00:33:c7:cd}
I0223 05:04:42.945980 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined IP address 192.168.50.54 and MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:04:42.946131 33397 provision.go:138] copyHostCerts
I0223 05:04:42.946209 33397 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3952/.minikube/ca.pem, removing ...
I0223 05:04:42.946221 33397 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3952/.minikube/ca.pem
I0223 05:04:42.946287 33397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3952/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-3952/.minikube/ca.pem (1082 bytes)
I0223 05:04:42.946388 33397 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3952/.minikube/cert.pem, removing ...
I0223 05:04:42.946398 33397 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3952/.minikube/cert.pem
I0223 05:04:42.946426 33397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3952/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-3952/.minikube/cert.pem (1123 bytes)
I0223 05:04:42.946495 33397 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3952/.minikube/key.pem, removing ...
I0223 05:04:42.946510 33397 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3952/.minikube/key.pem
I0223 05:04:42.946537 33397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3952/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-3952/.minikube/key.pem (1679 bytes)
I0223 05:04:42.946592 33397 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-3952/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-3952/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-3952/.minikube/certs/ca-key.pem org=jenkins.pause-927729 san=[192.168.50.54 192.168.50.54 localhost 127.0.0.1 minikube pause-927729]
I0223 05:04:43.353209 33397 provision.go:172] copyRemoteCerts
I0223 05:04:43.353290 33397 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0223 05:04:43.353318 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHHostname
I0223 05:04:43.356784 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:04:43.357420 33397 main.go:141] libmachine: (pause-927729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:c7:cd", ip: ""} in network mk-pause-927729: {Iface:virbr2 ExpiryTime:2023-02-23 06:03:34 +0000 UTC Type:0 Mac:52:54:00:33:c7:cd Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-927729 Clientid:01:52:54:00:33:c7:cd}
I0223 05:04:43.357458 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined IP address 192.168.50.54 and MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:04:43.357844 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHPort
I0223 05:04:43.358040 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHKeyPath
I0223 05:04:43.358160 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHUsername
I0223 05:04:43.358292 33397 sshutil.go:53] new ssh client: &{IP:192.168.50.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-3952/.minikube/machines/pause-927729/id_rsa Username:docker}
I0223 05:04:43.469203 33397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3952/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0223 05:04:43.516725 33397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3952/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
I0223 05:04:43.550548 33397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3952/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0223 05:04:43.603829 33397 provision.go:86] duration metric: configureAuth took 665.332319ms
I0223 05:04:43.603855 33397 buildroot.go:189] setting minikube options for container-runtime
I0223 05:04:43.604027 33397 config.go:182] Loaded profile config "pause-927729": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0223 05:04:43.604048 33397 main.go:141] libmachine: (pause-927729) Calling .DriverName
I0223 05:04:43.604402 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHHostname
I0223 05:04:43.607746 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:04:43.608444 33397 main.go:141] libmachine: (pause-927729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:c7:cd", ip: ""} in network mk-pause-927729: {Iface:virbr2 ExpiryTime:2023-02-23 06:03:34 +0000 UTC Type:0 Mac:52:54:00:33:c7:cd Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-927729 Clientid:01:52:54:00:33:c7:cd}
I0223 05:04:43.608477 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined IP address 192.168.50.54 and MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:04:43.608696 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHPort
I0223 05:04:43.608958 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHKeyPath
I0223 05:04:43.609130 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHKeyPath
I0223 05:04:43.609270 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHUsername
I0223 05:04:43.609431 33397 main.go:141] libmachine: Using SSH client type: native
I0223 05:04:43.609867 33397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.50.54 22 <nil> <nil>}
I0223 05:04:43.609883 33397 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0223 05:04:43.752361 33397 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0223 05:04:43.752477 33397 buildroot.go:70] root file system type: tmpfs
I0223 05:04:43.752632 33397 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0223 05:04:43.752658 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHHostname
I0223 05:04:43.758318 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:04:43.758980 33397 main.go:141] libmachine: (pause-927729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:c7:cd", ip: ""} in network mk-pause-927729: {Iface:virbr2 ExpiryTime:2023-02-23 06:03:34 +0000 UTC Type:0 Mac:52:54:00:33:c7:cd Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-927729 Clientid:01:52:54:00:33:c7:cd}
I0223 05:04:43.759011 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined IP address 192.168.50.54 and MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:04:43.759174 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHPort
I0223 05:04:43.759323 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHKeyPath
I0223 05:04:43.759451 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHKeyPath
I0223 05:04:43.759558 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHUsername
I0223 05:04:43.759691 33397 main.go:141] libmachine: Using SSH client type: native
I0223 05:04:43.760357 33397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.50.54 22 <nil> <nil>}
I0223 05:04:43.760462 33397 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0223 05:04:43.917414 33397 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0223 05:04:43.917480 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHHostname
I0223 05:04:43.921215 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:04:43.921886 33397 main.go:141] libmachine: (pause-927729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:c7:cd", ip: ""} in network mk-pause-927729: {Iface:virbr2 ExpiryTime:2023-02-23 06:03:34 +0000 UTC Type:0 Mac:52:54:00:33:c7:cd Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-927729 Clientid:01:52:54:00:33:c7:cd}
I0223 05:04:43.921925 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined IP address 192.168.50.54 and MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:04:43.922435 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHPort
I0223 05:04:43.922694 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHKeyPath
I0223 05:04:43.922851 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHKeyPath
I0223 05:04:43.923033 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHUsername
I0223 05:04:43.923203 33397 main.go:141] libmachine: Using SSH client type: native
I0223 05:04:43.923807 33397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.50.54 22 <nil> <nil>}
I0223 05:04:43.923836 33397 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0223 05:04:44.117853 33397 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0223 05:04:44.117931 33397 machine.go:91] provisioned docker machine in 1.52844204s
I0223 05:04:44.117954 33397 start.go:300] post-start starting for "pause-927729" (driver="kvm2")
I0223 05:04:44.117970 33397 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0223 05:04:44.118030 33397 main.go:141] libmachine: (pause-927729) Calling .DriverName
I0223 05:04:44.118412 33397 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0223 05:04:44.118480 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHHostname
I0223 05:04:44.122346 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:04:44.122424 33397 main.go:141] libmachine: (pause-927729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:c7:cd", ip: ""} in network mk-pause-927729: {Iface:virbr2 ExpiryTime:2023-02-23 06:03:34 +0000 UTC Type:0 Mac:52:54:00:33:c7:cd Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-927729 Clientid:01:52:54:00:33:c7:cd}
I0223 05:04:44.122454 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined IP address 192.168.50.54 and MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:04:44.122824 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHPort
I0223 05:04:44.123046 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHKeyPath
I0223 05:04:44.123224 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHUsername
I0223 05:04:44.123370 33397 sshutil.go:53] new ssh client: &{IP:192.168.50.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-3952/.minikube/machines/pause-927729/id_rsa Username:docker}
I0223 05:04:44.244093 33397 ssh_runner.go:195] Run: cat /etc/os-release
I0223 05:04:44.250822 33397 info.go:137] Remote host: Buildroot 2021.02.12
I0223 05:04:44.250848 33397 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-3952/.minikube/addons for local assets ...
I0223 05:04:44.250929 33397 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-3952/.minikube/files for local assets ...
I0223 05:04:44.251024 33397 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-3952/.minikube/files/etc/ssl/certs/109992.pem -> 109992.pem in /etc/ssl/certs
I0223 05:04:44.251134 33397 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0223 05:04:44.266072 33397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3952/.minikube/files/etc/ssl/certs/109992.pem --> /etc/ssl/certs/109992.pem (1708 bytes)
I0223 05:04:44.317821 33397 start.go:303] post-start completed in 199.846122ms
I0223 05:04:44.317893 33397 fix.go:57] fixHost completed within 1.754304367s
I0223 05:04:44.317918 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHHostname
I0223 05:04:44.321554 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:04:44.321961 33397 main.go:141] libmachine: (pause-927729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:c7:cd", ip: ""} in network mk-pause-927729: {Iface:virbr2 ExpiryTime:2023-02-23 06:03:34 +0000 UTC Type:0 Mac:52:54:00:33:c7:cd Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-927729 Clientid:01:52:54:00:33:c7:cd}
I0223 05:04:44.322017 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined IP address 192.168.50.54 and MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:04:44.322420 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHPort
I0223 05:04:44.322620 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHKeyPath
I0223 05:04:44.322773 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHKeyPath
I0223 05:04:44.322902 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHUsername
I0223 05:04:44.323046 33397 main.go:141] libmachine: Using SSH client type: native
I0223 05:04:44.323603 33397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil> [] 0s} 192.168.50.54 22 <nil> <nil>}
I0223 05:04:44.323634 33397 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0223 05:04:44.457752 33397 main.go:141] libmachine: SSH cmd err, output: <nil>: 1677128684.454279848
I0223 05:04:44.457772 33397 fix.go:207] guest clock: 1677128684.454279848
I0223 05:04:44.457781 33397 fix.go:220] Guest: 2023-02-23 05:04:44.454279848 +0000 UTC Remote: 2023-02-23 05:04:44.317903592 +0000 UTC m=+3.981541524 (delta=136.376256ms)
I0223 05:04:44.457800 33397 fix.go:191] guest clock delta is within tolerance: 136.376256ms
I0223 05:04:44.457805 33397 start.go:83] releasing machines lock for "pause-927729", held for 1.894245965s
I0223 05:04:44.457822 33397 main.go:141] libmachine: (pause-927729) Calling .DriverName
I0223 05:04:44.458082 33397 main.go:141] libmachine: (pause-927729) Calling .GetIP
I0223 05:04:44.461101 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:04:44.461692 33397 main.go:141] libmachine: (pause-927729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:c7:cd", ip: ""} in network mk-pause-927729: {Iface:virbr2 ExpiryTime:2023-02-23 06:03:34 +0000 UTC Type:0 Mac:52:54:00:33:c7:cd Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-927729 Clientid:01:52:54:00:33:c7:cd}
I0223 05:04:44.461722 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined IP address 192.168.50.54 and MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:04:44.462152 33397 main.go:141] libmachine: (pause-927729) Calling .DriverName
I0223 05:04:44.463358 33397 main.go:141] libmachine: (pause-927729) Calling .DriverName
I0223 05:04:44.463572 33397 main.go:141] libmachine: (pause-927729) Calling .DriverName
I0223 05:04:44.463671 33397 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0223 05:04:44.463717 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHHostname
I0223 05:04:44.463809 33397 ssh_runner.go:195] Run: cat /version.json
I0223 05:04:44.463834 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHHostname
I0223 05:04:44.468257 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:04:44.468995 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:04:44.469691 33397 main.go:141] libmachine: (pause-927729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:c7:cd", ip: ""} in network mk-pause-927729: {Iface:virbr2 ExpiryTime:2023-02-23 06:03:34 +0000 UTC Type:0 Mac:52:54:00:33:c7:cd Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-927729 Clientid:01:52:54:00:33:c7:cd}
I0223 05:04:44.469719 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined IP address 192.168.50.54 and MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:04:44.469939 33397 main.go:141] libmachine: (pause-927729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:c7:cd", ip: ""} in network mk-pause-927729: {Iface:virbr2 ExpiryTime:2023-02-23 06:03:34 +0000 UTC Type:0 Mac:52:54:00:33:c7:cd Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-927729 Clientid:01:52:54:00:33:c7:cd}
I0223 05:04:44.469958 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined IP address 192.168.50.54 and MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:04:44.470264 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHPort
I0223 05:04:44.470449 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHKeyPath
I0223 05:04:44.470721 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHUsername
I0223 05:04:44.470881 33397 sshutil.go:53] new ssh client: &{IP:192.168.50.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-3952/.minikube/machines/pause-927729/id_rsa Username:docker}
I0223 05:04:44.471825 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHPort
I0223 05:04:44.471993 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHKeyPath
I0223 05:04:44.472155 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHUsername
I0223 05:04:44.472299 33397 sshutil.go:53] new ssh client: &{IP:192.168.50.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-3952/.minikube/machines/pause-927729/id_rsa Username:docker}
I0223 05:04:44.622256 33397 ssh_runner.go:195] Run: systemctl --version
I0223 05:04:44.643285 33397 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0223 05:04:44.651513 33397 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0223 05:04:44.651656 33397 ssh_runner.go:195] Run: which cri-dockerd
I0223 05:04:44.659979 33397 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0223 05:04:44.674119 33397 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0223 05:04:44.724201 33397 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0223 05:04:44.739150 33397 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0223 05:04:44.739179 33397 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0223 05:04:44.739293 33397 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0223 05:04:44.800919 33397 docker.go:630] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0223 05:04:44.800942 33397 docker.go:560] Images already preloaded, skipping extraction
I0223 05:04:44.800953 33397 start.go:485] detecting cgroup driver to use...
I0223 05:04:44.801081 33397 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0223 05:04:44.828855 33397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0223 05:04:44.844529 33397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0223 05:04:44.859620 33397 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0223 05:04:44.859705 33397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0223 05:04:44.882965 33397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0223 05:04:44.903508 33397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0223 05:04:44.918228 33397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0223 05:04:44.933650 33397 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0223 05:04:44.951515 33397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0223 05:04:44.968974 33397 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0223 05:04:44.981990 33397 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0223 05:04:44.994035 33397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0223 05:04:45.223663 33397 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0223 05:04:45.247580 33397 start.go:485] detecting cgroup driver to use...
I0223 05:04:45.247661 33397 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0223 05:04:45.271459 33397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0223 05:04:45.293353 33397 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0223 05:04:45.328859 33397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0223 05:04:45.350205 33397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0223 05:04:45.364019 33397 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0223 05:04:45.385809 33397 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0223 05:04:45.564685 33397 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0223 05:04:45.762863 33397 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0223 05:04:45.762898 33397 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0223 05:04:45.786530 33397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0223 05:04:45.956874 33397 ssh_runner.go:195] Run: sudo systemctl restart docker
I0223 05:04:54.147908 33397 ssh_runner.go:235] Completed: sudo systemctl restart docker: (8.190992184s)
I0223 05:04:54.147979 33397 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0223 05:04:54.348460 33397 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0223 05:04:54.642282 33397 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0223 05:04:54.909644 33397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0223 05:04:55.102957 33397 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0223 05:04:55.178564 33397 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0223 05:04:55.178639 33397 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0223 05:04:55.189438 33397 start.go:553] Will wait 60s for crictl version
I0223 05:04:55.189509 33397 ssh_runner.go:195] Run: which crictl
I0223 05:04:55.195953 33397 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0223 05:04:55.938520 33397 start.go:569] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.23
RuntimeApiVersion: v1alpha2
I0223 05:04:55.938591 33397 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0223 05:04:56.030556 33397 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0223 05:04:56.097911 33397 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
I0223 05:04:56.097970 33397 main.go:141] libmachine: (pause-927729) Calling .GetIP
I0223 05:04:56.101242 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:04:56.101879 33397 main.go:141] libmachine: (pause-927729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:c7:cd", ip: ""} in network mk-pause-927729: {Iface:virbr2 ExpiryTime:2023-02-23 06:03:34 +0000 UTC Type:0 Mac:52:54:00:33:c7:cd Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-927729 Clientid:01:52:54:00:33:c7:cd}
I0223 05:04:56.101911 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined IP address 192.168.50.54 and MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:04:56.102455 33397 ssh_runner.go:195] Run: grep 192.168.50.1 host.minikube.internal$ /etc/hosts
I0223 05:04:56.108605 33397 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0223 05:04:56.108683 33397 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0223 05:04:56.181503 33397 docker.go:630] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0223 05:04:56.181534 33397 docker.go:560] Images already preloaded, skipping extraction
I0223 05:04:56.181599 33397 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0223 05:04:56.256304 33397 docker.go:630] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0223 05:04:56.256334 33397 cache_images.go:84] Images are preloaded, skipping loading
I0223 05:04:56.256403 33397 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0223 05:04:56.418625 33397 cni.go:84] Creating CNI manager for ""
I0223 05:04:56.418720 33397 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0223 05:04:56.418753 33397 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0223 05:04:56.418797 33397 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.54 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-927729 NodeName:pause-927729 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0223 05:04:56.419004 33397 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.50.54
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "pause-927729"
kubeletExtraArgs:
node-ip: 192.168.50.54
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.50.54"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0223 05:04:56.419168 33397 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-927729 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.54
[Install]
config:
{KubernetesVersion:v1.26.1 ClusterName:pause-927729 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0223 05:04:56.419262 33397 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
I0223 05:04:56.458323 33397 binaries.go:44] Found k8s binaries, skipping transfer
I0223 05:04:56.458492 33397 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0223 05:04:56.500367 33397 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (445 bytes)
I0223 05:04:56.546322 33397 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0223 05:04:56.587535 33397 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2091 bytes)
I0223 05:04:56.623609 33397 ssh_runner.go:195] Run: grep 192.168.50.54 control-plane.minikube.internal$ /etc/hosts
I0223 05:04:56.651962 33397 certs.go:56] Setting up /home/jenkins/minikube-integration/15909-3952/.minikube/profiles/pause-927729 for IP: 192.168.50.54
I0223 05:04:56.651996 33397 certs.go:186] acquiring lock for shared ca certs: {Name:mk7362bfc600d1c025406f326f2e7612c8991e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 05:04:56.652180 33397 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15909-3952/.minikube/ca.key
I0223 05:04:56.652241 33397 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15909-3952/.minikube/proxy-client-ca.key
I0223 05:04:56.652329 33397 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15909-3952/.minikube/profiles/pause-927729/client.key
I0223 05:04:56.652424 33397 certs.go:311] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15909-3952/.minikube/profiles/pause-927729/apiserver.key.8f08e58c
I0223 05:04:56.652482 33397 certs.go:311] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15909-3952/.minikube/profiles/pause-927729/proxy-client.key
I0223 05:04:56.652656 33397 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3952/.minikube/certs/home/jenkins/minikube-integration/15909-3952/.minikube/certs/10999.pem (1338 bytes)
W0223 05:04:56.652706 33397 certs.go:397] ignoring /home/jenkins/minikube-integration/15909-3952/.minikube/certs/home/jenkins/minikube-integration/15909-3952/.minikube/certs/10999_empty.pem, impossibly tiny 0 bytes
I0223 05:04:56.652722 33397 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3952/.minikube/certs/home/jenkins/minikube-integration/15909-3952/.minikube/certs/ca-key.pem (1675 bytes)
I0223 05:04:56.652782 33397 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3952/.minikube/certs/home/jenkins/minikube-integration/15909-3952/.minikube/certs/ca.pem (1082 bytes)
I0223 05:04:56.652817 33397 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3952/.minikube/certs/home/jenkins/minikube-integration/15909-3952/.minikube/certs/cert.pem (1123 bytes)
I0223 05:04:56.652850 33397 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3952/.minikube/certs/home/jenkins/minikube-integration/15909-3952/.minikube/certs/key.pem (1679 bytes)
I0223 05:04:56.652906 33397 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3952/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15909-3952/.minikube/files/etc/ssl/certs/109992.pem (1708 bytes)
I0223 05:04:56.653485 33397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3952/.minikube/profiles/pause-927729/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0223 05:04:56.764406 33397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3952/.minikube/profiles/pause-927729/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0223 05:04:56.824702 33397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3952/.minikube/profiles/pause-927729/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0223 05:04:56.885673 33397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3952/.minikube/profiles/pause-927729/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0223 05:04:56.935415 33397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3952/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0223 05:04:56.967051 33397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3952/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0223 05:04:56.995106 33397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3952/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0223 05:04:57.018404 33397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3952/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0223 05:04:57.063752 33397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3952/.minikube/certs/10999.pem --> /usr/share/ca-certificates/10999.pem (1338 bytes)
I0223 05:04:57.110724 33397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3952/.minikube/files/etc/ssl/certs/109992.pem --> /usr/share/ca-certificates/109992.pem (1708 bytes)
I0223 05:04:57.157449 33397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3952/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0223 05:04:57.232920 33397 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0223 05:04:57.282651 33397 ssh_runner.go:195] Run: openssl version
I0223 05:04:57.291388 33397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109992.pem && ln -fs /usr/share/ca-certificates/109992.pem /etc/ssl/certs/109992.pem"
I0223 05:04:57.324662 33397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109992.pem
I0223 05:04:57.337417 33397 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 04:27 /usr/share/ca-certificates/109992.pem
I0223 05:04:57.337489 33397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109992.pem
I0223 05:04:57.344939 33397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109992.pem /etc/ssl/certs/3ec20f2e.0"
I0223 05:04:57.354963 33397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0223 05:04:57.368512 33397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0223 05:04:57.373773 33397 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 04:23 /usr/share/ca-certificates/minikubeCA.pem
I0223 05:04:57.373828 33397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0223 05:04:57.382494 33397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0223 05:04:57.393833 33397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10999.pem && ln -fs /usr/share/ca-certificates/10999.pem /etc/ssl/certs/10999.pem"
I0223 05:04:57.417881 33397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10999.pem
I0223 05:04:57.424500 33397 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 04:27 /usr/share/ca-certificates/10999.pem
I0223 05:04:57.424562 33397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10999.pem
I0223 05:04:57.436115 33397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10999.pem /etc/ssl/certs/51391683.0"
I0223 05:04:57.451788 33397 kubeadm.go:401] StartCluster: {Name:pause-927729 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:pause-927729 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.54 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0223 05:04:57.451981 33397 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0223 05:04:57.502432 33397 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0223 05:04:57.533274 33397 kubeadm.go:416] found existing configuration files, will attempt cluster restart
I0223 05:04:57.533351 33397 kubeadm.go:633] restartCluster start
I0223 05:04:57.533437 33397 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0223 05:04:57.554708 33397 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0223 05:04:57.555740 33397 kubeconfig.go:92] found "pause-927729" server: "https://192.168.50.54:8443"
I0223 05:04:57.557454 33397 kapi.go:59] client config for pause-927729: &rest.Config{Host:"https://192.168.50.54:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3952/.minikube/profiles/pause-927729/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3952/.minikube/profiles/pause-927729/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3952/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0223 05:04:57.558544 33397 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0223 05:04:57.613560 33397 api_server.go:165] Checking apiserver status ...
I0223 05:04:57.613631 33397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0223 05:04:57.653962 33397 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0223 05:04:58.154605 33397 api_server.go:165] Checking apiserver status ...
I0223 05:04:58.154681 33397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0223 05:04:58.190560 33397 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5640/cgroup
I0223 05:04:58.245068 33397 api_server.go:181] apiserver freezer: "9:freezer:/kubepods/burstable/podbd0bfb5506ac279b7edaa233ac164070/c8fa27ad88bdb9841461fa2caa4b47344739f297c6f721179b7ea417d24bc9f6"
I0223 05:04:58.245150 33397 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podbd0bfb5506ac279b7edaa233ac164070/c8fa27ad88bdb9841461fa2caa4b47344739f297c6f721179b7ea417d24bc9f6/freezer.state
I0223 05:04:58.286530 33397 api_server.go:203] freezer state: "THAWED"
I0223 05:04:58.286554 33397 api_server.go:252] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
I0223 05:04:58.287211 33397 api_server.go:268] stopped: https://192.168.50.54:8443/healthz: Get "https://192.168.50.54:8443/healthz": dial tcp 192.168.50.54:8443: connect: connection refused
I0223 05:04:58.287282 33397 retry.go:31] will retry after 238.13924ms: state is "Stopped"
I0223 05:04:58.525485 33397 api_server.go:252] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
I0223 05:04:58.526242 33397 api_server.go:268] stopped: https://192.168.50.54:8443/healthz: Get "https://192.168.50.54:8443/healthz": dial tcp 192.168.50.54:8443: connect: connection refused
I0223 05:04:58.526284 33397 retry.go:31] will retry after 265.900036ms: state is "Stopped"
I0223 05:04:58.792771 33397 api_server.go:252] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
I0223 05:05:03.793729 33397 api_server.go:268] stopped: https://192.168.50.54:8443/healthz: Get "https://192.168.50.54:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0223 05:05:03.793779 33397 retry.go:31] will retry after 374.888373ms: state is "Stopped"
I0223 05:05:04.169272 33397 api_server.go:252] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
I0223 05:05:09.170400 33397 api_server.go:268] stopped: https://192.168.50.54:8443/healthz: Get "https://192.168.50.54:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0223 05:05:09.170437 33397 retry.go:31] will retry after 569.153764ms: state is "Stopped"
I0223 05:05:09.739762 33397 api_server.go:252] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
I0223 05:05:14.741430 33397 api_server.go:268] stopped: https://192.168.50.54:8443/healthz: Get "https://192.168.50.54:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0223 05:05:14.741473 33397 api_server.go:165] Checking apiserver status ...
I0223 05:05:14.741531 33397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0223 05:05:14.778639 33397 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5640/cgroup
I0223 05:05:14.796166 33397 api_server.go:181] apiserver freezer: "9:freezer:/kubepods/burstable/podbd0bfb5506ac279b7edaa233ac164070/c8fa27ad88bdb9841461fa2caa4b47344739f297c6f721179b7ea417d24bc9f6"
I0223 05:05:14.796248 33397 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podbd0bfb5506ac279b7edaa233ac164070/c8fa27ad88bdb9841461fa2caa4b47344739f297c6f721179b7ea417d24bc9f6/freezer.state
I0223 05:05:14.816066 33397 api_server.go:203] freezer state: "THAWED"
I0223 05:05:14.816095 33397 api_server.go:252] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
I0223 05:05:18.859812 33397 api_server.go:268] stopped: https://192.168.50.54:8443/healthz: Get "https://192.168.50.54:8443/healthz": read tcp 192.168.50.1:35072->192.168.50.54:8443: read: connection reset by peer
I0223 05:05:18.859867 33397 retry.go:31] will retry after 278.618014ms: state is "Stopped"
I0223 05:05:19.139386 33397 api_server.go:252] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
I0223 05:05:19.140066 33397 api_server.go:268] stopped: https://192.168.50.54:8443/healthz: Get "https://192.168.50.54:8443/healthz": dial tcp 192.168.50.54:8443: connect: connection refused
I0223 05:05:19.140105 33397 retry.go:31] will retry after 275.852881ms: state is "Stopped"
I0223 05:05:19.416346 33397 api_server.go:252] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
I0223 05:05:19.417000 33397 api_server.go:268] stopped: https://192.168.50.54:8443/healthz: Get "https://192.168.50.54:8443/healthz": dial tcp 192.168.50.54:8443: connect: connection refused
I0223 05:05:19.417038 33397 retry.go:31] will retry after 467.421732ms: state is "Stopped"
I0223 05:05:19.884641 33397 api_server.go:252] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
I0223 05:05:19.885506 33397 api_server.go:268] stopped: https://192.168.50.54:8443/healthz: Get "https://192.168.50.54:8443/healthz": dial tcp 192.168.50.54:8443: connect: connection refused
I0223 05:05:19.885551 33397 retry.go:31] will retry after 449.529639ms: state is "Stopped"
I0223 05:05:20.336176 33397 api_server.go:252] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
I0223 05:05:20.336735 33397 api_server.go:268] stopped: https://192.168.50.54:8443/healthz: Get "https://192.168.50.54:8443/healthz": dial tcp 192.168.50.54:8443: connect: connection refused
I0223 05:05:20.336790 33397 retry.go:31] will retry after 634.0502ms: state is "Stopped"
I0223 05:05:20.971890 33397 api_server.go:252] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
I0223 05:05:20.972621 33397 api_server.go:268] stopped: https://192.168.50.54:8443/healthz: Get "https://192.168.50.54:8443/healthz": dial tcp 192.168.50.54:8443: connect: connection refused
I0223 05:05:20.972658 33397 retry.go:31] will retry after 866.267752ms: state is "Stopped"
I0223 05:05:21.839112 33397 api_server.go:252] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
I0223 05:05:21.839716 33397 api_server.go:268] stopped: https://192.168.50.54:8443/healthz: Get "https://192.168.50.54:8443/healthz": dial tcp 192.168.50.54:8443: connect: connection refused
I0223 05:05:21.839758 33397 retry.go:31] will retry after 1.062423745s: state is "Stopped"
I0223 05:05:22.902348 33397 api_server.go:252] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
I0223 05:05:22.903040 33397 api_server.go:268] stopped: https://192.168.50.54:8443/healthz: Get "https://192.168.50.54:8443/healthz": dial tcp 192.168.50.54:8443: connect: connection refused
I0223 05:05:22.903073 33397 retry.go:31] will retry after 1.106887179s: state is "Stopped"
I0223 05:05:24.010296 33397 api_server.go:252] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
I0223 05:05:24.010988 33397 api_server.go:268] stopped: https://192.168.50.54:8443/healthz: Get "https://192.168.50.54:8443/healthz": dial tcp 192.168.50.54:8443: connect: connection refused
I0223 05:05:24.011031 33397 retry.go:31] will retry after 1.476037087s: state is "Stopped"
I0223 05:05:25.487643 33397 api_server.go:252] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
I0223 05:05:25.488221 33397 api_server.go:268] stopped: https://192.168.50.54:8443/healthz: Get "https://192.168.50.54:8443/healthz": dial tcp 192.168.50.54:8443: connect: connection refused
I0223 05:05:25.488264 33397 retry.go:31] will retry after 2.017038836s: state is "Stopped"
I0223 05:05:27.507398 33397 api_server.go:252] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
I0223 05:05:27.508076 33397 api_server.go:268] stopped: https://192.168.50.54:8443/healthz: Get "https://192.168.50.54:8443/healthz": dial tcp 192.168.50.54:8443: connect: connection refused
I0223 05:05:27.508118 33397 retry.go:31] will retry after 2.19961127s: state is "Stopped"
I0223 05:05:29.708945 33397 api_server.go:252] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
I0223 05:05:29.709553 33397 api_server.go:268] stopped: https://192.168.50.54:8443/healthz: Get "https://192.168.50.54:8443/healthz": dial tcp 192.168.50.54:8443: connect: connection refused
I0223 05:05:29.709594 33397 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
I0223 05:05:29.709601 33397 kubeadm.go:1120] stopping kube-system containers ...
I0223 05:05:29.709648 33397 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0223 05:05:29.749892 33397 docker.go:456] Stopping containers: [fe41c99a30b3 12af3126a2cb 7e5efcad3eed 33018ba60b47 522be95290b4 c8fa27ad88bd 6817b46b014b 4991904c98c5 bb6c25f55c66 f863d7090eaa bc49644ebb60 148e1d336144 72332726c0d9 51ef5361ce89 6fde43f43141 78c62a013a00 e55e3767e886 71991be019dd e1caa976331c d7217e88b443 87da7e264eb8 6da67672b5bd]
I0223 05:05:29.749979 33397 ssh_runner.go:195] Run: docker stop fe41c99a30b3 12af3126a2cb 7e5efcad3eed 33018ba60b47 522be95290b4 c8fa27ad88bd 6817b46b014b 4991904c98c5 bb6c25f55c66 f863d7090eaa bc49644ebb60 148e1d336144 72332726c0d9 51ef5361ce89 6fde43f43141 78c62a013a00 e55e3767e886 71991be019dd e1caa976331c d7217e88b443 87da7e264eb8 6da67672b5bd
I0223 05:05:35.018686 33397 ssh_runner.go:235] Completed: docker stop fe41c99a30b3 12af3126a2cb 7e5efcad3eed 33018ba60b47 522be95290b4 c8fa27ad88bd 6817b46b014b 4991904c98c5 bb6c25f55c66 f863d7090eaa bc49644ebb60 148e1d336144 72332726c0d9 51ef5361ce89 6fde43f43141 78c62a013a00 e55e3767e886 71991be019dd e1caa976331c d7217e88b443 87da7e264eb8 6da67672b5bd: (5.268675893s)
I0223 05:05:35.018746 33397 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0223 05:05:35.068354 33397 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0223 05:05:35.085191 33397 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5643 Feb 23 05:03 /etc/kubernetes/admin.conf
-rw------- 1 root root 5657 Feb 23 05:03 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 1987 Feb 23 05:04 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5601 Feb 23 05:03 /etc/kubernetes/scheduler.conf
I0223 05:05:35.085260 33397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0223 05:05:35.098033 33397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0223 05:05:35.107015 33397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0223 05:05:35.117150 33397 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0223 05:05:35.117206 33397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0223 05:05:35.127682 33397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0223 05:05:35.139188 33397 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0223 05:05:35.139246 33397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0223 05:05:35.152909 33397 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0223 05:05:35.166744 33397 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0223 05:05:35.166772 33397 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0223 05:05:35.339435 33397 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0223 05:05:36.171485 33397 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0223 05:05:36.467281 33397 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0223 05:05:36.594610 33397 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0223 05:05:36.749742 33397 api_server.go:51] waiting for apiserver process to appear ...
I0223 05:05:36.749804 33397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0223 05:05:36.772981 33397 api_server.go:71] duration metric: took 23.245473ms to wait for apiserver process to appear ...
I0223 05:05:36.773009 33397 api_server.go:87] waiting for apiserver healthz status ...
I0223 05:05:36.773021 33397 api_server.go:252] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
I0223 05:05:41.541684 33397 api_server.go:278] https://192.168.50.54:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0223 05:05:41.541721 33397 api_server.go:102] status: https://192.168.50.54:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0223 05:05:42.042130 33397 api_server.go:252] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
I0223 05:05:42.047995 33397 api_server.go:278] https://192.168.50.54:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0223 05:05:42.048022 33397 api_server.go:102] status: https://192.168.50.54:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0223 05:05:42.542606 33397 api_server.go:252] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
I0223 05:05:42.571372 33397 api_server.go:278] https://192.168.50.54:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0223 05:05:42.571401 33397 api_server.go:102] status: https://192.168.50.54:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0223 05:05:43.042348 33397 api_server.go:252] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
I0223 05:05:43.049359 33397 api_server.go:278] https://192.168.50.54:8443/healthz returned 200:
ok
I0223 05:05:43.059834 33397 api_server.go:140] control plane version: v1.26.1
I0223 05:05:43.059859 33397 api_server.go:130] duration metric: took 6.286843114s to wait for apiserver health ...
I0223 05:05:43.059869 33397 cni.go:84] Creating CNI manager for ""
I0223 05:05:43.059878 33397 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0223 05:05:43.061584 33397 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0223 05:05:43.062999 33397 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0223 05:05:43.085613 33397 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
I0223 05:05:43.112432 33397 system_pods.go:43] waiting for kube-system pods to appear ...
I0223 05:05:43.124703 33397 system_pods.go:59] 6 kube-system pods found
I0223 05:05:43.124732 33397 system_pods.go:61] "coredns-787d4945fb-cglqd" [1fa88fe5-60f1-431a-86be-e51eef3d0ad2] Running
I0223 05:05:43.124739 33397 system_pods.go:61] "etcd-pause-927729" [8eeb6d86-cfd5-4044-bbea-7cda3b2805c7] Running
I0223 05:05:43.124746 33397 system_pods.go:61] "kube-apiserver-pause-927729" [0659a506-2258-4f0c-a614-1ad5e31b6dd0] Running
I0223 05:05:43.124790 33397 system_pods.go:61] "kube-controller-manager-pause-927729" [8666d42f-2670-4612-b31e-1a090f2b49f3] Running
I0223 05:05:43.124800 33397 system_pods.go:61] "kube-proxy-bxfpq" [e88a33f8-6ea2-4841-8c3f-da34239da2ff] Running
I0223 05:05:43.124808 33397 system_pods.go:61] "kube-scheduler-pause-927729" [72fb1f70-51ea-4744-bae5-73655dd83967] Running
I0223 05:05:43.124819 33397 system_pods.go:74] duration metric: took 12.364026ms to wait for pod list to return data ...
I0223 05:05:43.124832 33397 node_conditions.go:102] verifying NodePressure condition ...
I0223 05:05:43.128521 33397 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0223 05:05:43.128550 33397 node_conditions.go:123] node cpu capacity is 2
I0223 05:05:43.128564 33397 node_conditions.go:105] duration metric: took 3.724856ms to run NodePressure ...
I0223 05:05:43.128582 33397 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0223 05:05:43.516218 33397 kubeadm.go:769] waiting for restarted kubelet to initialise ...
I0223 05:05:43.521768 33397 kubeadm.go:784] kubelet initialised
I0223 05:05:43.521789 33397 kubeadm.go:785] duration metric: took 5.547197ms waiting for restarted kubelet to initialise ...
I0223 05:05:43.521799 33397 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0223 05:05:43.535197 33397 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-cglqd" in "kube-system" namespace to be "Ready" ...
I0223 05:05:43.543795 33397 pod_ready.go:92] pod "coredns-787d4945fb-cglqd" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:43.543813 33397 pod_ready.go:81] duration metric: took 8.596089ms waiting for pod "coredns-787d4945fb-cglqd" in "kube-system" namespace to be "Ready" ...
I0223 05:05:43.543823 33397 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:43.552255 33397 pod_ready.go:92] pod "etcd-pause-927729" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:43.552270 33397 pod_ready.go:81] duration metric: took 8.439831ms waiting for pod "etcd-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:43.552280 33397 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:43.564507 33397 pod_ready.go:92] pod "kube-apiserver-pause-927729" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:43.564522 33397 pod_ready.go:81] duration metric: took 12.236134ms waiting for pod "kube-apiserver-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:43.564530 33397 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:45.611581 33397 pod_ready.go:102] pod "kube-controller-manager-pause-927729" in "kube-system" namespace has status "Ready":"False"
I0223 05:05:47.595550 33397 pod_ready.go:92] pod "kube-controller-manager-pause-927729" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:47.595579 33397 pod_ready.go:81] duration metric: took 4.031042069s waiting for pod "kube-controller-manager-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:47.595591 33397 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bxfpq" in "kube-system" namespace to be "Ready" ...
I0223 05:05:47.602134 33397 pod_ready.go:92] pod "kube-proxy-bxfpq" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:47.602149 33397 pod_ready.go:81] duration metric: took 6.551675ms waiting for pod "kube-proxy-bxfpq" in "kube-system" namespace to be "Ready" ...
I0223 05:05:47.602156 33397 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:49.616137 33397 pod_ready.go:102] pod "kube-scheduler-pause-927729" in "kube-system" namespace has status "Ready":"False"
I0223 05:05:52.115925 33397 pod_ready.go:102] pod "kube-scheduler-pause-927729" in "kube-system" namespace has status "Ready":"False"
I0223 05:05:53.115015 33397 pod_ready.go:92] pod "kube-scheduler-pause-927729" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:53.115044 33397 pod_ready.go:81] duration metric: took 5.512881096s waiting for pod "kube-scheduler-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:53.115053 33397 pod_ready.go:38] duration metric: took 9.593244507s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0223 05:05:53.115070 33397 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0223 05:05:53.127617 33397 ops.go:34] apiserver oom_adj: -16
I0223 05:05:53.127637 33397 kubeadm.go:637] restartCluster took 55.594270394s
I0223 05:05:53.127648 33397 kubeadm.go:403] StartCluster complete in 55.675868722s
I0223 05:05:53.127666 33397 settings.go:142] acquiring lock: {Name:mkdbfbf025d851ad41e5906da8e3f60b2fca69fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 05:05:53.127748 33397 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/15909-3952/kubeconfig
I0223 05:05:53.128595 33397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3952/kubeconfig: {Name:mk020a20943d07a23d370631a6a005cb93b2bfc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 05:05:53.128853 33397 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0223 05:05:53.129026 33397 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
I0223 05:05:53.129139 33397 config.go:182] Loaded profile config "pause-927729": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0223 05:05:53.130886 33397 out.go:177] * Enabled addons:
I0223 05:05:53.129204 33397 cache.go:107] acquiring lock: {Name:mk54ad0d75bf3dcb90076f913664fe0061ef6c1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0223 05:05:53.129659 33397 kapi.go:59] client config for pause-927729: &rest.Config{Host:"https://192.168.50.54:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3952/.minikube/profiles/pause-927729/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3952/.minikube/profiles/pause-927729/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3952/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0223 05:05:53.132283 33397 addons.go:492] enable addons completed in 3.264211ms: enabled=[]
I0223 05:05:53.132375 33397 cache.go:115] /home/jenkins/minikube-integration/15909-3952/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
I0223 05:05:53.132394 33397 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/15909-3952/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 3.191533ms
I0223 05:05:53.132403 33397 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/15909-3952/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
I0223 05:05:53.132412 33397 cache.go:87] Successfully saved all images to host disk.
I0223 05:05:53.132550 33397 config.go:182] Loaded profile config "pause-927729": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0223 05:05:53.132917 33397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0223 05:05:53.132947 33397 main.go:141] libmachine: Launching plugin server for driver kvm2
I0223 05:05:53.135383 33397 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-927729" context rescaled to 1 replicas
I0223 05:05:53.135409 33397 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.54 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0223 05:05:53.137318 33397 out.go:177] * Verifying Kubernetes components...
I0223 05:05:53.138747 33397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0223 05:05:53.149478 33397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43491
I0223 05:05:53.149842 33397 main.go:141] libmachine: () Calling .GetVersion
I0223 05:05:53.150347 33397 main.go:141] libmachine: Using API Version 1
I0223 05:05:53.150369 33397 main.go:141] libmachine: () Calling .SetConfigRaw
I0223 05:05:53.150686 33397 main.go:141] libmachine: () Calling .GetMachineName
I0223 05:05:53.150871 33397 main.go:141] libmachine: (pause-927729) Calling .GetState
I0223 05:05:53.152453 33397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0223 05:05:53.152478 33397 main.go:141] libmachine: Launching plugin server for driver kvm2
I0223 05:05:53.166115 33397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41183
I0223 05:05:53.166452 33397 main.go:141] libmachine: () Calling .GetVersion
I0223 05:05:53.166892 33397 main.go:141] libmachine: Using API Version 1
I0223 05:05:53.166912 33397 main.go:141] libmachine: () Calling .SetConfigRaw
I0223 05:05:53.167196 33397 main.go:141] libmachine: () Calling .GetMachineName
I0223 05:05:53.167355 33397 main.go:141] libmachine: (pause-927729) Calling .DriverName
I0223 05:05:53.167521 33397 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0223 05:05:53.167540 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHHostname
I0223 05:05:53.170343 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:05:53.170748 33397 main.go:141] libmachine: (pause-927729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:c7:cd", ip: ""} in network mk-pause-927729: {Iface:virbr2 ExpiryTime:2023-02-23 06:03:34 +0000 UTC Type:0 Mac:52:54:00:33:c7:cd Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-927729 Clientid:01:52:54:00:33:c7:cd}
I0223 05:05:53.170773 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined IP address 192.168.50.54 and MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:05:53.170914 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHPort
I0223 05:05:53.171059 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHKeyPath
I0223 05:05:53.171196 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHUsername
I0223 05:05:53.171298 33397 sshutil.go:53] new ssh client: &{IP:192.168.50.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-3952/.minikube/machines/pause-927729/id_rsa Username:docker}
I0223 05:05:53.254613 33397 node_ready.go:35] waiting up to 6m0s for node "pause-927729" to be "Ready" ...
I0223 05:05:53.255025 33397 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0223 05:05:53.257660 33397 node_ready.go:49] node "pause-927729" has status "Ready":"True"
I0223 05:05:53.257679 33397 node_ready.go:38] duration metric: took 3.027096ms waiting for node "pause-927729" to be "Ready" ...
I0223 05:05:53.257689 33397 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0223 05:05:53.262482 33397 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-cglqd" in "kube-system" namespace to be "Ready" ...
I0223 05:05:53.267534 33397 pod_ready.go:92] pod "coredns-787d4945fb-cglqd" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:53.267550 33397 pod_ready.go:81] duration metric: took 5.047301ms waiting for pod "coredns-787d4945fb-cglqd" in "kube-system" namespace to be "Ready" ...
I0223 05:05:53.267561 33397 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:53.307130 33397 docker.go:630] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0223 05:05:53.307151 33397 cache_images.go:84] Images are preloaded, skipping loading
I0223 05:05:53.307160 33397 cache_images.go:262] succeeded pushing to: pause-927729
I0223 05:05:53.307164 33397 cache_images.go:263] failed pushing to:
I0223 05:05:53.307190 33397 main.go:141] libmachine: Making call to close driver server
I0223 05:05:53.307210 33397 main.go:141] libmachine: (pause-927729) Calling .Close
I0223 05:05:53.307480 33397 main.go:141] libmachine: Successfully made call to close driver server
I0223 05:05:53.307503 33397 main.go:141] libmachine: Making call to close connection to plugin binary
I0223 05:05:53.307509 33397 main.go:141] libmachine: (pause-927729) DBG | Closing plugin on server side
I0223 05:05:53.307517 33397 main.go:141] libmachine: Making call to close driver server
I0223 05:05:53.307528 33397 main.go:141] libmachine: (pause-927729) Calling .Close
I0223 05:05:53.307799 33397 main.go:141] libmachine: (pause-927729) DBG | Closing plugin on server side
I0223 05:05:53.307840 33397 main.go:141] libmachine: Successfully made call to close driver server
I0223 05:05:53.307850 33397 main.go:141] libmachine: Making call to close connection to plugin binary
I0223 05:05:55.278402 33397 pod_ready.go:102] pod "etcd-pause-927729" in "kube-system" namespace has status "Ready":"False"
I0223 05:05:55.778729 33397 pod_ready.go:92] pod "etcd-pause-927729" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:55.778753 33397 pod_ready.go:81] duration metric: took 2.511184901s waiting for pod "etcd-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:55.778764 33397 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:55.783678 33397 pod_ready.go:92] pod "kube-apiserver-pause-927729" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:55.783698 33397 pod_ready.go:81] duration metric: took 4.924356ms waiting for pod "kube-apiserver-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:55.783705 33397 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:55.920175 33397 pod_ready.go:92] pod "kube-controller-manager-pause-927729" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:55.920200 33397 pod_ready.go:81] duration metric: took 136.488237ms waiting for pod "kube-controller-manager-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:55.920212 33397 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bxfpq" in "kube-system" namespace to be "Ready" ...
I0223 05:05:56.319947 33397 pod_ready.go:92] pod "kube-proxy-bxfpq" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:56.319965 33397 pod_ready.go:81] duration metric: took 399.745807ms waiting for pod "kube-proxy-bxfpq" in "kube-system" namespace to be "Ready" ...
I0223 05:05:56.319974 33397 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:56.720085 33397 pod_ready.go:92] pod "kube-scheduler-pause-927729" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:56.720110 33397 pod_ready.go:81] duration metric: took 400.129893ms waiting for pod "kube-scheduler-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:56.720120 33397 pod_ready.go:38] duration metric: took 3.462416553s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0223 05:05:56.720142 33397 api_server.go:51] waiting for apiserver process to appear ...
I0223 05:05:56.720187 33397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0223 05:05:56.737268 33397 api_server.go:71] duration metric: took 3.601831215s to wait for apiserver process to appear ...
I0223 05:05:56.737294 33397 api_server.go:87] waiting for apiserver healthz status ...
I0223 05:05:56.737305 33397 api_server.go:252] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
I0223 05:05:56.744668 33397 api_server.go:278] https://192.168.50.54:8443/healthz returned 200:
ok
I0223 05:05:56.746405 33397 api_server.go:140] control plane version: v1.26.1
I0223 05:05:56.746425 33397 api_server.go:130] duration metric: took 9.125813ms to wait for apiserver health ...
I0223 05:05:56.746435 33397 system_pods.go:43] waiting for kube-system pods to appear ...
I0223 05:05:56.923345 33397 system_pods.go:59] 6 kube-system pods found
I0223 05:05:56.923371 33397 system_pods.go:61] "coredns-787d4945fb-cglqd" [1fa88fe5-60f1-431a-86be-e51eef3d0ad2] Running
I0223 05:05:56.923377 33397 system_pods.go:61] "etcd-pause-927729" [8eeb6d86-cfd5-4044-bbea-7cda3b2805c7] Running
I0223 05:05:56.923381 33397 system_pods.go:61] "kube-apiserver-pause-927729" [0659a506-2258-4f0c-a614-1ad5e31b6dd0] Running
I0223 05:05:56.923385 33397 system_pods.go:61] "kube-controller-manager-pause-927729" [8666d42f-2670-4612-b31e-1a090f2b49f3] Running
I0223 05:05:56.923389 33397 system_pods.go:61] "kube-proxy-bxfpq" [e88a33f8-6ea2-4841-8c3f-da34239da2ff] Running
I0223 05:05:56.923393 33397 system_pods.go:61] "kube-scheduler-pause-927729" [72fb1f70-51ea-4744-bae5-73655dd83967] Running
I0223 05:05:56.923398 33397 system_pods.go:74] duration metric: took 176.958568ms to wait for pod list to return data ...
I0223 05:05:56.923405 33397 default_sa.go:34] waiting for default service account to be created ...
I0223 05:05:57.119680 33397 default_sa.go:45] found service account: "default"
I0223 05:05:57.119703 33397 default_sa.go:55] duration metric: took 196.290064ms for default service account to be created ...
I0223 05:05:57.119712 33397 system_pods.go:116] waiting for k8s-apps to be running ...
I0223 05:05:57.322536 33397 system_pods.go:86] 6 kube-system pods found
I0223 05:05:57.322560 33397 system_pods.go:89] "coredns-787d4945fb-cglqd" [1fa88fe5-60f1-431a-86be-e51eef3d0ad2] Running
I0223 05:05:57.322565 33397 system_pods.go:89] "etcd-pause-927729" [8eeb6d86-cfd5-4044-bbea-7cda3b2805c7] Running
I0223 05:05:57.322569 33397 system_pods.go:89] "kube-apiserver-pause-927729" [0659a506-2258-4f0c-a614-1ad5e31b6dd0] Running
I0223 05:05:57.322573 33397 system_pods.go:89] "kube-controller-manager-pause-927729" [8666d42f-2670-4612-b31e-1a090f2b49f3] Running
I0223 05:05:57.322578 33397 system_pods.go:89] "kube-proxy-bxfpq" [e88a33f8-6ea2-4841-8c3f-da34239da2ff] Running
I0223 05:05:57.322582 33397 system_pods.go:89] "kube-scheduler-pause-927729" [72fb1f70-51ea-4744-bae5-73655dd83967] Running
I0223 05:05:57.322588 33397 system_pods.go:126] duration metric: took 202.871272ms to wait for k8s-apps to be running ...
I0223 05:05:57.322594 33397 system_svc.go:44] waiting for kubelet service to be running ....
I0223 05:05:57.322631 33397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0223 05:05:57.339233 33397 system_svc.go:56] duration metric: took 16.630061ms WaitForService to wait for kubelet.
I0223 05:05:57.339259 33397 kubeadm.go:578] duration metric: took 4.203826911s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0223 05:05:57.339282 33397 node_conditions.go:102] verifying NodePressure condition ...
I0223 05:05:57.520481 33397 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0223 05:05:57.520502 33397 node_conditions.go:123] node cpu capacity is 2
I0223 05:05:57.520512 33397 node_conditions.go:105] duration metric: took 181.223805ms to run NodePressure ...
I0223 05:05:57.520521 33397 start.go:228] waiting for startup goroutines ...
I0223 05:05:57.520527 33397 start.go:233] waiting for cluster config update ...
I0223 05:05:57.520534 33397 start.go:242] writing updated cluster config ...
I0223 05:05:57.520820 33397 ssh_runner.go:195] Run: rm -f paused
I0223 05:05:57.572198 33397 start.go:555] kubectl: 1.26.1, cluster: 1.26.1 (minor skew: 0)
I0223 05:05:57.574315 33397 out.go:177] * Done! kubectl is now configured to use "pause-927729" cluster and "default" namespace by default
** /stderr **
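For context, the second-start stderr above walks through minikube's readiness checks against the pause-927729 profile: waiting for each control-plane pod to report Ready, probing the apiserver /healthz endpoint at 192.168.50.54:8443, and confirming the kubelet service is active. The commands below are only a hedged sketch of roughly equivalent manual checks (not part of the test output); they assume the context name, node IP, and profile written by this run, and the healthz probe may need credentials depending on the cluster's RBAC defaults.
    # rough manual equivalents of the readiness checks logged above (assumed profile/IP from this run)
    kubectl --context pause-927729 -n kube-system wait --for=condition=Ready pod --all --timeout=4m
    curl -k https://192.168.50.54:8443/healthz
    out/minikube-linux-amd64 -p pause-927729 ssh "sudo systemctl is-active kubelet"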
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p pause-927729 -n pause-927729
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p pause-927729 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-927729 logs -n 25: (1.269138661s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs:
-- stdout --
*
* ==> Audit <==
* |---------|------------------------------------------------------|---------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------------------------------------------------------|---------------------|---------|---------|---------------------|---------------------|
| ssh | -p NoKubernetes-302103 sudo | NoKubernetes-302103 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | |
| | systemctl is-active --quiet | | | | | |
| | service kubelet | | | | | |
| stop | -p NoKubernetes-302103 | NoKubernetes-302103 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| start | -p NoKubernetes-302103 | NoKubernetes-302103 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | |
| | --driver=kvm2 | | | | | |
| ssh | -p auto-993481 sudo cat | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | /etc/nsswitch.conf | | | | | |
| ssh | -p auto-993481 sudo cat | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | /etc/hosts | | | | | |
| ssh | -p auto-993481 sudo cat | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | /etc/resolv.conf | | | | | |
| ssh | -p auto-993481 sudo crictl | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | pods | | | | | |
| ssh | -p auto-993481 sudo crictl ps | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | --all | | | | | |
| ssh | -p auto-993481 sudo find | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | /etc/cni -type f -exec sh -c | | | | | |
| | 'echo {}; cat {}' \; | | | | | |
| ssh | -p auto-993481 sudo ip a s | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| ssh | -p auto-993481 sudo ip r s | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| ssh | -p auto-993481 sudo | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | iptables-save | | | | | |
| ssh | -p auto-993481 sudo iptables | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | -t nat -L -n -v | | | | | |
| ssh | -p auto-993481 sudo systemctl | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | status kubelet --all --full | | | | | |
| | --no-pager | | | | | |
| ssh | -p auto-993481 sudo systemctl | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | cat kubelet --no-pager | | | | | |
| ssh | -p auto-993481 sudo journalctl | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | -xeu kubelet --all --full | | | | | |
| | --no-pager | | | | | |
| ssh | -p auto-993481 sudo cat | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | /etc/kubernetes/kubelet.conf | | | | | |
| ssh | -p auto-993481 sudo cat | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | /var/lib/kubelet/config.yaml | | | | | |
| ssh | -p auto-993481 sudo systemctl | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | status docker --all --full | | | | | |
| | --no-pager | | | | | |
| ssh | -p auto-993481 sudo systemctl | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | cat docker --no-pager | | | | | |
| ssh | -p auto-993481 sudo cat | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | /etc/docker/daemon.json | | | | | |
| ssh | -p auto-993481 sudo docker | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | system info | | | | | |
| ssh | -p auto-993481 sudo systemctl | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | |
| | status cri-docker --all --full | | | | | |
| | --no-pager | | | | | |
| ssh | -p auto-993481 sudo systemctl | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | cat cri-docker --no-pager | | | | | |
| ssh | -p auto-993481 sudo cat | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | |
| | /etc/systemd/system/cri-docker.service.d/10-cni.conf | | | | | |
|---------|------------------------------------------------------|---------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/02/23 05:05:45
Running on machine: ubuntu-20-agent-3
Binary: Built with gc go1.20.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0223 05:05:45.987795 34516 out.go:296] Setting OutFile to fd 1 ...
I0223 05:05:45.987929 34516 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0223 05:05:45.987934 34516 out.go:309] Setting ErrFile to fd 2...
I0223 05:05:45.987940 34516 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0223 05:05:45.988095 34516 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-3952/.minikube/bin
I0223 05:05:45.988970 34516 out.go:303] Setting JSON to false
I0223 05:05:45.990459 34516 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2899,"bootTime":1677125847,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0223 05:05:45.990561 34516 start.go:135] virtualization: kvm guest
I0223 05:05:45.993030 34516 out.go:177] * [NoKubernetes-302103] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
I0223 05:05:45.994890 34516 out.go:177] - MINIKUBE_LOCATION=15909
I0223 05:05:45.994838 34516 notify.go:220] Checking for updates...
I0223 05:05:45.998354 34516 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0223 05:05:46.000795 34516 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15909-3952/kubeconfig
I0223 05:05:46.003159 34516 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3952/.minikube
I0223 05:05:46.004596 34516 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0223 05:05:46.005923 34516 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0223 05:05:46.007516 34516 config.go:182] Loaded profile config "NoKubernetes-302103": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
I0223 05:05:46.007962 34516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0223 05:05:46.008044 34516 main.go:141] libmachine: Launching plugin server for driver kvm2
I0223 05:05:46.024974 34516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43255
I0223 05:05:46.025461 34516 main.go:141] libmachine: () Calling .GetVersion
I0223 05:05:46.026264 34516 main.go:141] libmachine: Using API Version 1
I0223 05:05:46.026283 34516 main.go:141] libmachine: () Calling .SetConfigRaw
I0223 05:05:46.026705 34516 main.go:141] libmachine: () Calling .GetMachineName
I0223 05:05:46.026912 34516 main.go:141] libmachine: (NoKubernetes-302103) Calling .DriverName
I0223 05:05:46.027074 34516 start.go:1652] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
I0223 05:05:46.027100 34516 driver.go:365] Setting default libvirt URI to qemu:///system
I0223 05:05:46.027479 34516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0223 05:05:46.027514 34516 main.go:141] libmachine: Launching plugin server for driver kvm2
I0223 05:05:46.046672 34516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36759
I0223 05:05:46.047307 34516 main.go:141] libmachine: () Calling .GetVersion
I0223 05:05:46.047934 34516 main.go:141] libmachine: Using API Version 1
I0223 05:05:46.047944 34516 main.go:141] libmachine: () Calling .SetConfigRaw
I0223 05:05:46.048281 34516 main.go:141] libmachine: () Calling .GetMachineName
I0223 05:05:46.048497 34516 main.go:141] libmachine: (NoKubernetes-302103) Calling .DriverName
I0223 05:05:46.090677 34516 out.go:177] * Using the kvm2 driver based on existing profile
I0223 05:05:46.092357 34516 start.go:296] selected driver: kvm2
I0223 05:05:46.092366 34516 start.go:857] validating driver "kvm2" against &{Name:NoKubernetes-302103 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-302103 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.236 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0223 05:05:46.092476 34516 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0223 05:05:46.092836 34516 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0223 05:05:46.092917 34516 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15909-3952/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0223 05:05:46.109770 34516 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.29.0
I0223 05:05:46.110403 34516 cni.go:84] Creating CNI manager for ""
I0223 05:05:46.110416 34516 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0223 05:05:46.110421 34516 start_flags.go:319] config:
{Name:NoKubernetes-302103 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-302103 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.236 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0223 05:05:46.110527 34516 iso.go:125] acquiring lock: {Name:mkaa0353ce7f481d2e27b6d0b7fef8218290f843 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0223 05:05:46.112455 34516 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-302103
I0223 05:05:45.611581 33397 pod_ready.go:102] pod "kube-controller-manager-pause-927729" in "kube-system" namespace has status "Ready":"False"
I0223 05:05:47.595550 33397 pod_ready.go:92] pod "kube-controller-manager-pause-927729" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:47.595579 33397 pod_ready.go:81] duration metric: took 4.031042069s waiting for pod "kube-controller-manager-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:47.595591 33397 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bxfpq" in "kube-system" namespace to be "Ready" ...
I0223 05:05:47.602134 33397 pod_ready.go:92] pod "kube-proxy-bxfpq" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:47.602149 33397 pod_ready.go:81] duration metric: took 6.551675ms waiting for pod "kube-proxy-bxfpq" in "kube-system" namespace to be "Ready" ...
I0223 05:05:47.602156 33397 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:49.616137 33397 pod_ready.go:102] pod "kube-scheduler-pause-927729" in "kube-system" namespace has status "Ready":"False"
I0223 05:05:46.953923 34009 main.go:141] libmachine: (kindnet-993481) DBG | domain kindnet-993481 has defined MAC address 52:54:00:58:13:f8 in network mk-kindnet-993481
I0223 05:05:46.954405 34009 main.go:141] libmachine: (kindnet-993481) DBG | unable to find current IP address of domain kindnet-993481 in network mk-kindnet-993481
I0223 05:05:46.954445 34009 main.go:141] libmachine: (kindnet-993481) DBG | I0223 05:05:46.954368 34211 retry.go:31] will retry after 3.496329206s: waiting for machine to come up
I0223 05:05:50.452749 34009 main.go:141] libmachine: (kindnet-993481) DBG | domain kindnet-993481 has defined MAC address 52:54:00:58:13:f8 in network mk-kindnet-993481
I0223 05:05:50.453158 34009 main.go:141] libmachine: (kindnet-993481) DBG | unable to find current IP address of domain kindnet-993481 in network mk-kindnet-993481
I0223 05:05:50.453182 34009 main.go:141] libmachine: (kindnet-993481) DBG | I0223 05:05:50.453133 34211 retry.go:31] will retry after 2.74437136s: waiting for machine to come up
I0223 05:05:46.113983 34516 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime docker
W0223 05:05:46.144319 34516 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-docker-overlay2-amd64.tar.lz4 status code: 404
I0223 05:05:46.144478 34516 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3952/.minikube/profiles/NoKubernetes-302103/config.json ...
I0223 05:05:46.144694 34516 cache.go:193] Successfully downloaded all kic artifacts
I0223 05:05:46.144714 34516 start.go:364] acquiring machines lock for NoKubernetes-302103: {Name:mk80232e5ac6be7873ac7f01ae80ef9193e4980e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0223 05:05:52.115925 33397 pod_ready.go:102] pod "kube-scheduler-pause-927729" in "kube-system" namespace has status "Ready":"False"
I0223 05:05:53.115015 33397 pod_ready.go:92] pod "kube-scheduler-pause-927729" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:53.115044 33397 pod_ready.go:81] duration metric: took 5.512881096s waiting for pod "kube-scheduler-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:53.115053 33397 pod_ready.go:38] duration metric: took 9.593244507s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0223 05:05:53.115070 33397 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0223 05:05:53.127617 33397 ops.go:34] apiserver oom_adj: -16
I0223 05:05:53.127637 33397 kubeadm.go:637] restartCluster took 55.594270394s
I0223 05:05:53.127648 33397 kubeadm.go:403] StartCluster complete in 55.675868722s
I0223 05:05:53.127666 33397 settings.go:142] acquiring lock: {Name:mkdbfbf025d851ad41e5906da8e3f60b2fca69fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 05:05:53.127748 33397 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/15909-3952/kubeconfig
I0223 05:05:53.128595 33397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3952/kubeconfig: {Name:mk020a20943d07a23d370631a6a005cb93b2bfc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 05:05:53.128853 33397 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0223 05:05:53.129026 33397 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
I0223 05:05:53.129139 33397 config.go:182] Loaded profile config "pause-927729": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0223 05:05:53.130886 33397 out.go:177] * Enabled addons:
I0223 05:05:53.129204 33397 cache.go:107] acquiring lock: {Name:mk54ad0d75bf3dcb90076f913664fe0061ef6c1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0223 05:05:53.129659 33397 kapi.go:59] client config for pause-927729: &rest.Config{Host:"https://192.168.50.54:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3952/.minikube/profiles/pause-927729/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3952/.minikube/profiles/pause-927729/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3952/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string
(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0223 05:05:53.132283 33397 addons.go:492] enable addons completed in 3.264211ms: enabled=[]
I0223 05:05:53.132375 33397 cache.go:115] /home/jenkins/minikube-integration/15909-3952/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
I0223 05:05:53.132394 33397 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/15909-3952/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 3.191533ms
I0223 05:05:53.132403 33397 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/15909-3952/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
I0223 05:05:53.132412 33397 cache.go:87] Successfully saved all images to host disk.
I0223 05:05:53.132550 33397 config.go:182] Loaded profile config "pause-927729": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0223 05:05:53.132917 33397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0223 05:05:53.132947 33397 main.go:141] libmachine: Launching plugin server for driver kvm2
I0223 05:05:53.135383 33397 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-927729" context rescaled to 1 replicas
I0223 05:05:53.135409 33397 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.54 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0223 05:05:53.137318 33397 out.go:177] * Verifying Kubernetes components...
I0223 05:05:53.138747 33397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0223 05:05:53.149478 33397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43491
I0223 05:05:53.149842 33397 main.go:141] libmachine: () Calling .GetVersion
I0223 05:05:53.150347 33397 main.go:141] libmachine: Using API Version 1
I0223 05:05:53.150369 33397 main.go:141] libmachine: () Calling .SetConfigRaw
I0223 05:05:53.150686 33397 main.go:141] libmachine: () Calling .GetMachineName
I0223 05:05:53.150871 33397 main.go:141] libmachine: (pause-927729) Calling .GetState
I0223 05:05:53.152453 33397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0223 05:05:53.152478 33397 main.go:141] libmachine: Launching plugin server for driver kvm2
I0223 05:05:53.166115 33397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41183
I0223 05:05:53.166452 33397 main.go:141] libmachine: () Calling .GetVersion
I0223 05:05:53.166892 33397 main.go:141] libmachine: Using API Version 1
I0223 05:05:53.166912 33397 main.go:141] libmachine: () Calling .SetConfigRaw
I0223 05:05:53.167196 33397 main.go:141] libmachine: () Calling .GetMachineName
I0223 05:05:53.167355 33397 main.go:141] libmachine: (pause-927729) Calling .DriverName
I0223 05:05:53.167521 33397 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0223 05:05:53.167540 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHHostname
I0223 05:05:53.170343 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:05:53.170748 33397 main.go:141] libmachine: (pause-927729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:c7:cd", ip: ""} in network mk-pause-927729: {Iface:virbr2 ExpiryTime:2023-02-23 06:03:34 +0000 UTC Type:0 Mac:52:54:00:33:c7:cd Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-927729 Clientid:01:52:54:00:33:c7:cd}
I0223 05:05:53.170773 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined IP address 192.168.50.54 and MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:05:53.170914 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHPort
I0223 05:05:53.171059 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHKeyPath
I0223 05:05:53.171196 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHUsername
I0223 05:05:53.171298 33397 sshutil.go:53] new ssh client: &{IP:192.168.50.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-3952/.minikube/machines/pause-927729/id_rsa Username:docker}
I0223 05:05:53.254613 33397 node_ready.go:35] waiting up to 6m0s for node "pause-927729" to be "Ready" ...
I0223 05:05:53.255025 33397 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0223 05:05:53.257660 33397 node_ready.go:49] node "pause-927729" has status "Ready":"True"
I0223 05:05:53.257679 33397 node_ready.go:38] duration metric: took 3.027096ms waiting for node "pause-927729" to be "Ready" ...
I0223 05:05:53.257689 33397 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0223 05:05:53.262482 33397 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-cglqd" in "kube-system" namespace to be "Ready" ...
I0223 05:05:53.267534 33397 pod_ready.go:92] pod "coredns-787d4945fb-cglqd" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:53.267550 33397 pod_ready.go:81] duration metric: took 5.047301ms waiting for pod "coredns-787d4945fb-cglqd" in "kube-system" namespace to be "Ready" ...
I0223 05:05:53.267561 33397 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:53.307130 33397 docker.go:630] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0223 05:05:53.307151 33397 cache_images.go:84] Images are preloaded, skipping loading
I0223 05:05:53.307160 33397 cache_images.go:262] succeeded pushing to: pause-927729
I0223 05:05:53.307164 33397 cache_images.go:263] failed pushing to:
I0223 05:05:53.307190 33397 main.go:141] libmachine: Making call to close driver server
I0223 05:05:53.307210 33397 main.go:141] libmachine: (pause-927729) Calling .Close
I0223 05:05:53.307480 33397 main.go:141] libmachine: Successfully made call to close driver server
I0223 05:05:53.307503 33397 main.go:141] libmachine: Making call to close connection to plugin binary
I0223 05:05:53.307509 33397 main.go:141] libmachine: (pause-927729) DBG | Closing plugin on server side
I0223 05:05:53.307517 33397 main.go:141] libmachine: Making call to close driver server
I0223 05:05:53.307528 33397 main.go:141] libmachine: (pause-927729) Calling .Close
I0223 05:05:53.307799 33397 main.go:141] libmachine: (pause-927729) DBG | Closing plugin on server side
I0223 05:05:53.307840 33397 main.go:141] libmachine: Successfully made call to close driver server
I0223 05:05:53.307850 33397 main.go:141] libmachine: Making call to close connection to plugin binary
I0223 05:05:55.278402 33397 pod_ready.go:102] pod "etcd-pause-927729" in "kube-system" namespace has status "Ready":"False"
I0223 05:05:53.199515 34009 main.go:141] libmachine: (kindnet-993481) DBG | domain kindnet-993481 has defined MAC address 52:54:00:58:13:f8 in network mk-kindnet-993481
I0223 05:05:53.199962 34009 main.go:141] libmachine: (kindnet-993481) DBG | unable to find current IP address of domain kindnet-993481 in network mk-kindnet-993481
I0223 05:05:53.199989 34009 main.go:141] libmachine: (kindnet-993481) DBG | I0223 05:05:53.199906 34211 retry.go:31] will retry after 4.617549218s: waiting for machine to come up
I0223 05:05:55.778729 33397 pod_ready.go:92] pod "etcd-pause-927729" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:55.778753 33397 pod_ready.go:81] duration metric: took 2.511184901s waiting for pod "etcd-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:55.778764 33397 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:55.783678 33397 pod_ready.go:92] pod "kube-apiserver-pause-927729" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:55.783698 33397 pod_ready.go:81] duration metric: took 4.924356ms waiting for pod "kube-apiserver-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:55.783705 33397 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:55.920175 33397 pod_ready.go:92] pod "kube-controller-manager-pause-927729" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:55.920200 33397 pod_ready.go:81] duration metric: took 136.488237ms waiting for pod "kube-controller-manager-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:55.920212 33397 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bxfpq" in "kube-system" namespace to be "Ready" ...
I0223 05:05:56.319947 33397 pod_ready.go:92] pod "kube-proxy-bxfpq" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:56.319965 33397 pod_ready.go:81] duration metric: took 399.745807ms waiting for pod "kube-proxy-bxfpq" in "kube-system" namespace to be "Ready" ...
I0223 05:05:56.319974 33397 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:56.720085 33397 pod_ready.go:92] pod "kube-scheduler-pause-927729" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:56.720110 33397 pod_ready.go:81] duration metric: took 400.129893ms waiting for pod "kube-scheduler-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:56.720120 33397 pod_ready.go:38] duration metric: took 3.462416553s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0223 05:05:56.720142 33397 api_server.go:51] waiting for apiserver process to appear ...
I0223 05:05:56.720187 33397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0223 05:05:56.737268 33397 api_server.go:71] duration metric: took 3.601831215s to wait for apiserver process to appear ...
I0223 05:05:56.737294 33397 api_server.go:87] waiting for apiserver healthz status ...
I0223 05:05:56.737305 33397 api_server.go:252] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
I0223 05:05:56.744668 33397 api_server.go:278] https://192.168.50.54:8443/healthz returned 200:
ok
I0223 05:05:56.746405 33397 api_server.go:140] control plane version: v1.26.1
I0223 05:05:56.746425 33397 api_server.go:130] duration metric: took 9.125813ms to wait for apiserver health ...
I0223 05:05:56.746435 33397 system_pods.go:43] waiting for kube-system pods to appear ...
I0223 05:05:56.923345 33397 system_pods.go:59] 6 kube-system pods found
I0223 05:05:56.923371 33397 system_pods.go:61] "coredns-787d4945fb-cglqd" [1fa88fe5-60f1-431a-86be-e51eef3d0ad2] Running
I0223 05:05:56.923377 33397 system_pods.go:61] "etcd-pause-927729" [8eeb6d86-cfd5-4044-bbea-7cda3b2805c7] Running
I0223 05:05:56.923381 33397 system_pods.go:61] "kube-apiserver-pause-927729" [0659a506-2258-4f0c-a614-1ad5e31b6dd0] Running
I0223 05:05:56.923385 33397 system_pods.go:61] "kube-controller-manager-pause-927729" [8666d42f-2670-4612-b31e-1a090f2b49f3] Running
I0223 05:05:56.923389 33397 system_pods.go:61] "kube-proxy-bxfpq" [e88a33f8-6ea2-4841-8c3f-da34239da2ff] Running
I0223 05:05:56.923393 33397 system_pods.go:61] "kube-scheduler-pause-927729" [72fb1f70-51ea-4744-bae5-73655dd83967] Running
I0223 05:05:56.923398 33397 system_pods.go:74] duration metric: took 176.958568ms to wait for pod list to return data ...
I0223 05:05:56.923405 33397 default_sa.go:34] waiting for default service account to be created ...
I0223 05:05:57.119680 33397 default_sa.go:45] found service account: "default"
I0223 05:05:57.119703 33397 default_sa.go:55] duration metric: took 196.290064ms for default service account to be created ...
I0223 05:05:57.119712 33397 system_pods.go:116] waiting for k8s-apps to be running ...
I0223 05:05:57.322536 33397 system_pods.go:86] 6 kube-system pods found
I0223 05:05:57.322560 33397 system_pods.go:89] "coredns-787d4945fb-cglqd" [1fa88fe5-60f1-431a-86be-e51eef3d0ad2] Running
I0223 05:05:57.322565 33397 system_pods.go:89] "etcd-pause-927729" [8eeb6d86-cfd5-4044-bbea-7cda3b2805c7] Running
I0223 05:05:57.322569 33397 system_pods.go:89] "kube-apiserver-pause-927729" [0659a506-2258-4f0c-a614-1ad5e31b6dd0] Running
I0223 05:05:57.322573 33397 system_pods.go:89] "kube-controller-manager-pause-927729" [8666d42f-2670-4612-b31e-1a090f2b49f3] Running
I0223 05:05:57.322578 33397 system_pods.go:89] "kube-proxy-bxfpq" [e88a33f8-6ea2-4841-8c3f-da34239da2ff] Running
I0223 05:05:57.322582 33397 system_pods.go:89] "kube-scheduler-pause-927729" [72fb1f70-51ea-4744-bae5-73655dd83967] Running
I0223 05:05:57.322588 33397 system_pods.go:126] duration metric: took 202.871272ms to wait for k8s-apps to be running ...
I0223 05:05:57.322594 33397 system_svc.go:44] waiting for kubelet service to be running ....
I0223 05:05:57.322631 33397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0223 05:05:57.339233 33397 system_svc.go:56] duration metric: took 16.630061ms WaitForService to wait for kubelet.
I0223 05:05:57.339259 33397 kubeadm.go:578] duration metric: took 4.203826911s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0223 05:05:57.339282 33397 node_conditions.go:102] verifying NodePressure condition ...
I0223 05:05:57.520481 33397 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0223 05:05:57.520502 33397 node_conditions.go:123] node cpu capacity is 2
I0223 05:05:57.520512 33397 node_conditions.go:105] duration metric: took 181.223805ms to run NodePressure ...
I0223 05:05:57.520521 33397 start.go:228] waiting for startup goroutines ...
I0223 05:05:57.520527 33397 start.go:233] waiting for cluster config update ...
I0223 05:05:57.520534 33397 start.go:242] writing updated cluster config ...
I0223 05:05:57.520820 33397 ssh_runner.go:195] Run: rm -f paused
I0223 05:05:57.572198 33397 start.go:555] kubectl: 1.26.1, cluster: 1.26.1 (minor skew: 0)
I0223 05:05:57.574315 33397 out.go:177] * Done! kubectl is now configured to use "pause-927729" cluster and "default" namespace by default
*
* ==> Docker <==
* -- Journal begins at Thu 2023-02-23 05:03:31 UTC, ends at Thu 2023-02-23 05:05:58 UTC. --
Feb 23 05:05:34 pause-927729 dockerd[4634]: time="2023-02-23T05:05:34.954771696Z" level=info msg="ignoring event" container=33018ba60b473e190348ad0328a8273c998744966000205fadc023b43956693d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 23 05:05:34 pause-927729 dockerd[4640]: time="2023-02-23T05:05:34.957602475Z" level=info msg="shim disconnected" id=33018ba60b473e190348ad0328a8273c998744966000205fadc023b43956693d
Feb 23 05:05:34 pause-927729 dockerd[4640]: time="2023-02-23T05:05:34.957887139Z" level=warning msg="cleaning up after shim disconnected" id=33018ba60b473e190348ad0328a8273c998744966000205fadc023b43956693d namespace=moby
Feb 23 05:05:34 pause-927729 dockerd[4640]: time="2023-02-23T05:05:34.958028131Z" level=info msg="cleaning up dead shim"
Feb 23 05:05:34 pause-927729 dockerd[4640]: time="2023-02-23T05:05:34.973630400Z" level=warning msg="cleanup warnings time=\"2023-02-23T05:05:34Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6997 runtime=io.containerd.runc.v2\n"
Feb 23 05:05:37 pause-927729 dockerd[4640]: time="2023-02-23T05:05:37.505265034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 23 05:05:37 pause-927729 dockerd[4640]: time="2023-02-23T05:05:37.505672887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 23 05:05:37 pause-927729 dockerd[4640]: time="2023-02-23T05:05:37.505688154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 23 05:05:37 pause-927729 dockerd[4640]: time="2023-02-23T05:05:37.506711952Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/67052abc75baa2b6d06b8426f25ed9c733953b314fa336073605b2fca79c327a pid=7248 runtime=io.containerd.runc.v2
Feb 23 05:05:37 pause-927729 dockerd[4640]: time="2023-02-23T05:05:37.531424124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 23 05:05:37 pause-927729 dockerd[4640]: time="2023-02-23T05:05:37.531547968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 23 05:05:37 pause-927729 dockerd[4640]: time="2023-02-23T05:05:37.531564985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 23 05:05:37 pause-927729 dockerd[4640]: time="2023-02-23T05:05:37.531962274Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f1219eafb40d327312248676dd4191409b0daa7c50d561878b2ce7fa5a9df8c8 pid=7270 runtime=io.containerd.runc.v2
Feb 23 05:05:43 pause-927729 dockerd[4640]: time="2023-02-23T05:05:43.108977194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 23 05:05:43 pause-927729 dockerd[4640]: time="2023-02-23T05:05:43.109506575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 23 05:05:43 pause-927729 dockerd[4640]: time="2023-02-23T05:05:43.110034523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 23 05:05:43 pause-927729 dockerd[4640]: time="2023-02-23T05:05:43.119197199Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/be8ca09cb627a998bb3e931cf43c35f3a5ec94ad415e951851fcb8a1cf248aa7 pid=7433 runtime=io.containerd.runc.v2
Feb 23 05:05:43 pause-927729 dockerd[4640]: time="2023-02-23T05:05:43.481795486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 23 05:05:43 pause-927729 dockerd[4640]: time="2023-02-23T05:05:43.486274467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 23 05:05:43 pause-927729 dockerd[4640]: time="2023-02-23T05:05:43.486297328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 23 05:05:43 pause-927729 dockerd[4640]: time="2023-02-23T05:05:43.486601077Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/0665b7e62cd32d10fe5978db9ba2820375362e4d2588dbe4429175d881f65e65 pid=7494 runtime=io.containerd.runc.v2
Feb 23 05:05:44 pause-927729 dockerd[4640]: time="2023-02-23T05:05:44.210534837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 23 05:05:44 pause-927729 dockerd[4640]: time="2023-02-23T05:05:44.211065295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 23 05:05:44 pause-927729 dockerd[4640]: time="2023-02-23T05:05:44.211321375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 23 05:05:44 pause-927729 dockerd[4640]: time="2023-02-23T05:05:44.211862036Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/cea3c27a3a1fb90077cc1d176f586da18fe5d8552149e3d92bee8ffe8a99009f pid=7646 runtime=io.containerd.runc.v2
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
cea3c27a3a1fb 5185b96f0becf 14 seconds ago Running coredns 2 be8ca09cb627a
0665b7e62cd32 46a6bb3c77ce0 15 seconds ago Running kube-proxy 3 e241360610a43
f1219eafb40d3 655493523f607 21 seconds ago Running kube-scheduler 3 13a7cf699ccb2
67052abc75baa fce326961ae2d 21 seconds ago Running etcd 3 63fd8f1d363bb
32c0fdace4cd4 e9c08e11b07f6 26 seconds ago Running kube-controller-manager 2 a34774c9bd4fd
4ad65d61e105d deb04688c4a35 27 seconds ago Running kube-apiserver 2 f3859b77311b0
fe41c99a30b34 fce326961ae2d 43 seconds ago Exited etcd 2 148e1d336144f
12af3126a2cbb 46a6bb3c77ce0 46 seconds ago Exited kube-proxy 2 f863d7090eaa5
7e5efcad3eed3 655493523f607 55 seconds ago Exited kube-scheduler 2 bb6c25f55c667
33018ba60b473 5185b96f0becf 59 seconds ago Exited coredns 1 522be95290b4a
c8fa27ad88bdb deb04688c4a35 About a minute ago Exited kube-apiserver 1 bc49644ebb60f
6817b46b014b0 e9c08e11b07f6 About a minute ago Exited kube-controller-manager 1 4991904c98c51
*
* ==> coredns [33018ba60b47] <==
* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] 127.0.0.1:51103 - 40969 "HINFO IN 1009213339599387188.5718121639688052542. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030904561s
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:52204->10.96.0.1:443: read: connection reset by peer
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
*
* ==> coredns [cea3c27a3a1f] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] 127.0.0.1:50895 - 335 "HINFO IN 2992301098082177045.4030291826637785790. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.066325133s
*
* ==> describe nodes <==
* Name: pause-927729
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-927729
kubernetes.io/os=linux
minikube.k8s.io/commit=66d56dc3ac28a702789778ac47e90f12526a0321
minikube.k8s.io/name=pause-927729
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_02_23T05_04_12_0700
minikube.k8s.io/version=v1.29.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 23 Feb 2023 05:04:08 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-927729
AcquireTime: <unset>
RenewTime: Thu, 23 Feb 2023 05:05:51 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 23 Feb 2023 05:05:41 +0000 Thu, 23 Feb 2023 05:04:06 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 23 Feb 2023 05:05:41 +0000 Thu, 23 Feb 2023 05:04:06 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 23 Feb 2023 05:05:41 +0000 Thu, 23 Feb 2023 05:04:06 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 23 Feb 2023 05:05:41 +0000 Thu, 23 Feb 2023 05:04:15 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.50.54
Hostname: pause-927729
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017420Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017420Ki
pods: 110
System Info:
Machine ID: 0c8ab0825265425d8f78ffb269f0d4f2
System UUID: 0c8ab082-5265-425d-8f78-ffb269f0d4f2
Boot ID: b462c4ff-59d8-4639-a38e-5243334f9339
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.23
Kubelet Version: v1.26.1
Kube-Proxy Version: v1.26.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
  kube-system                 coredns-787d4945fb-cglqd                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     94s
  kube-system                 etcd-pause-927729                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         105s
  kube-system                 kube-apiserver-pause-927729             250m (12%)    0 (0%)      0 (0%)           0 (0%)         105s
  kube-system                 kube-controller-manager-pause-927729    200m (10%)    0 (0%)      0 (0%)           0 (0%)         105s
  kube-system                 kube-proxy-bxfpq                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
  kube-system                 kube-scheduler-pause-927729             100m (5%)     0 (0%)      0 (0%)           0 (0%)         105s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
  cpu                750m (37%)  0 (0%)
  memory             170Mi (8%)  170Mi (8%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 91s kube-proxy
Normal Starting 14s kube-proxy
Normal Starting 118s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 118s (x4 over 118s) kubelet Node pause-927729 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 118s (x4 over 118s) kubelet Node pause-927729 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 118s (x3 over 118s) kubelet Node pause-927729 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 118s kubelet Updated Node Allocatable limit across pods
Normal Starting 106s kubelet Starting kubelet.
Normal NodeHasSufficientPID 105s kubelet Node pause-927729 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 105s kubelet Node pause-927729 status is now: NodeHasNoDiskPressure
Normal NodeAllocatableEnforced 105s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 105s kubelet Node pause-927729 status is now: NodeHasSufficientMemory
Normal NodeReady 103s kubelet Node pause-927729 status is now: NodeReady
Normal RegisteredNode 95s node-controller Node pause-927729 event: Registered Node pause-927729 in Controller
Normal Starting 22s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 22s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 21s (x8 over 22s) kubelet Node pause-927729 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 21s (x8 over 22s) kubelet Node pause-927729 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 21s (x7 over 22s) kubelet Node pause-927729 status is now: NodeHasSufficientPID
Normal RegisteredNode 4s node-controller Node pause-927729 event: Registered Node pause-927729 in Controller
*
* ==> dmesg <==
* [ +1.968178] kauditd_printk_skb: 14 callbacks suppressed
[ +0.350592] systemd-fstab-generator[896]: Ignoring "noauto" for root device
[ +0.691114] systemd-fstab-generator[933]: Ignoring "noauto" for root device
[ +0.096727] systemd-fstab-generator[944]: Ignoring "noauto" for root device
[ +0.115312] systemd-fstab-generator[957]: Ignoring "noauto" for root device
[ +1.495616] systemd-fstab-generator[1105]: Ignoring "noauto" for root device
[ +0.106416] systemd-fstab-generator[1116]: Ignoring "noauto" for root device
[ +0.123044] systemd-fstab-generator[1127]: Ignoring "noauto" for root device
[ +0.107960] systemd-fstab-generator[1138]: Ignoring "noauto" for root device
[ +4.338395] systemd-fstab-generator[1390]: Ignoring "noauto" for root device
[Feb23 05:04] kauditd_printk_skb: 68 callbacks suppressed
[ +11.840835] systemd-fstab-generator[2307]: Ignoring "noauto" for root device
[ +14.285510] kauditd_printk_skb: 8 callbacks suppressed
[ +11.120445] kauditd_printk_skb: 26 callbacks suppressed
[ +7.294560] systemd-fstab-generator[3852]: Ignoring "noauto" for root device
[ +0.378753] systemd-fstab-generator[3883]: Ignoring "noauto" for root device
[ +0.181704] systemd-fstab-generator[3894]: Ignoring "noauto" for root device
[ +0.225128] systemd-fstab-generator[3907]: Ignoring "noauto" for root device
[ +8.362713] systemd-fstab-generator[5007]: Ignoring "noauto" for root device
[ +0.271036] systemd-fstab-generator[5025]: Ignoring "noauto" for root device
[ +0.248756] systemd-fstab-generator[5076]: Ignoring "noauto" for root device
[ +0.251288] systemd-fstab-generator[5123]: Ignoring "noauto" for root device
[ +2.178342] kauditd_printk_skb: 34 callbacks suppressed
[Feb23 05:05] kauditd_printk_skb: 3 callbacks suppressed
[ +23.233390] systemd-fstab-generator[7087]: Ignoring "noauto" for root device
*
* ==> etcd [67052abc75ba] <==
* {"level":"info","ts":"2023-02-23T05:05:38.102Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b7dc4198fc8444d0","local-member-id":"b0a6bbe4c9ddfbc1","cluster-version":"3.5"}
{"level":"info","ts":"2023-02-23T05:05:38.102Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-02-23T05:05:38.104Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.50.54:2380"}
{"level":"info","ts":"2023-02-23T05:05:38.104Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.50.54:2380"}
{"level":"info","ts":"2023-02-23T05:05:39.675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 is starting a new election at term 4"}
{"level":"info","ts":"2023-02-23T05:05:39.675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became pre-candidate at term 4"}
{"level":"info","ts":"2023-02-23T05:05:39.675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 received MsgPreVoteResp from b0a6bbe4c9ddfbc1 at term 4"}
{"level":"info","ts":"2023-02-23T05:05:39.675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became candidate at term 5"}
{"level":"info","ts":"2023-02-23T05:05:39.675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 received MsgVoteResp from b0a6bbe4c9ddfbc1 at term 5"}
{"level":"info","ts":"2023-02-23T05:05:39.675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became leader at term 5"}
{"level":"info","ts":"2023-02-23T05:05:39.675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b0a6bbe4c9ddfbc1 elected leader b0a6bbe4c9ddfbc1 at term 5"}
{"level":"info","ts":"2023-02-23T05:05:39.678Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-02-23T05:05:39.678Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"b0a6bbe4c9ddfbc1","local-member-attributes":"{Name:pause-927729 ClientURLs:[https://192.168.50.54:2379]}","request-path":"/0/members/b0a6bbe4c9ddfbc1/attributes","cluster-id":"b7dc4198fc8444d0","publish-timeout":"7s"}
{"level":"info","ts":"2023-02-23T05:05:39.679Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-02-23T05:05:39.680Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.50.54:2379"}
{"level":"info","ts":"2023-02-23T05:05:39.681Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-02-23T05:05:39.682Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-02-23T05:05:39.682Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-02-23T05:05:45.596Z","caller":"traceutil/trace.go:171","msg":"trace[2124695069] linearizableReadLoop","detail":"{readStateIndex:475; appliedIndex:474; }","duration":"231.544809ms","start":"2023-02-23T05:05:45.364Z","end":"2023-02-23T05:05:45.596Z","steps":["trace[2124695069] 'read index received' (duration: 230.678871ms)","trace[2124695069] 'applied index is now lower than readState.Index' (duration: 864.72µs)"],"step_count":2}
{"level":"info","ts":"2023-02-23T05:05:45.596Z","caller":"traceutil/trace.go:171","msg":"trace[1413819236] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"334.377116ms","start":"2023-02-23T05:05:45.262Z","end":"2023-02-23T05:05:45.596Z","steps":["trace[1413819236] 'process raft request' (duration: 333.088717ms)"],"step_count":1}
{"level":"warn","ts":"2023-02-23T05:05:45.597Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-23T05:05:45.262Z","time spent":"335.149192ms","remote":"127.0.0.1:39256","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":676,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-scheduler-pause-927729.17465bb370527039\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-scheduler-pause-927729.17465bb370527039\" value_size:585 lease:8917556607150529160 >> failure:<>"}
{"level":"warn","ts":"2023-02-23T05:05:45.596Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"232.268878ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-bxfpq\" ","response":"range_response_count:1 size:4712"}
{"level":"info","ts":"2023-02-23T05:05:45.598Z","caller":"traceutil/trace.go:171","msg":"trace[335847640] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-bxfpq; range_end:; response_count:1; response_revision:437; }","duration":"233.839425ms","start":"2023-02-23T05:05:45.364Z","end":"2023-02-23T05:05:45.598Z","steps":["trace[335847640] 'agreement among raft nodes before linearized reading' (duration: 232.142656ms)"],"step_count":1}
{"level":"info","ts":"2023-02-23T05:05:45.894Z","caller":"traceutil/trace.go:171","msg":"trace[454170152] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"285.179489ms","start":"2023-02-23T05:05:45.609Z","end":"2023-02-23T05:05:45.894Z","steps":["trace[454170152] 'process raft request' (duration: 284.94493ms)"],"step_count":1}
{"level":"info","ts":"2023-02-23T05:05:45.894Z","caller":"traceutil/trace.go:171","msg":"trace[1506822538] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"287.32678ms","start":"2023-02-23T05:05:45.607Z","end":"2023-02-23T05:05:45.894Z","steps":["trace[1506822538] 'process raft request' (duration: 198.332979ms)","trace[1506822538] 'compare' (duration: 87.7327ms)"],"step_count":2}
*
* ==> etcd [fe41c99a30b3] <==
* {"level":"info","ts":"2023-02-23T05:05:15.663Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-02-23T05:05:15.663Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b0a6bbe4c9ddfbc1","initial-advertise-peer-urls":["https://192.168.50.54:2380"],"listen-peer-urls":["https://192.168.50.54:2380"],"advertise-client-urls":["https://192.168.50.54:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.54:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-02-23T05:05:15.663Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-02-23T05:05:15.664Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.50.54:2380"}
{"level":"info","ts":"2023-02-23T05:05:15.664Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.50.54:2380"}
{"level":"info","ts":"2023-02-23T05:05:17.045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 is starting a new election at term 3"}
{"level":"info","ts":"2023-02-23T05:05:17.046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became pre-candidate at term 3"}
{"level":"info","ts":"2023-02-23T05:05:17.046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 received MsgPreVoteResp from b0a6bbe4c9ddfbc1 at term 3"}
{"level":"info","ts":"2023-02-23T05:05:17.046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became candidate at term 4"}
{"level":"info","ts":"2023-02-23T05:05:17.046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 received MsgVoteResp from b0a6bbe4c9ddfbc1 at term 4"}
{"level":"info","ts":"2023-02-23T05:05:17.046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became leader at term 4"}
{"level":"info","ts":"2023-02-23T05:05:17.046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b0a6bbe4c9ddfbc1 elected leader b0a6bbe4c9ddfbc1 at term 4"}
{"level":"info","ts":"2023-02-23T05:05:17.052Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"b0a6bbe4c9ddfbc1","local-member-attributes":"{Name:pause-927729 ClientURLs:[https://192.168.50.54:2379]}","request-path":"/0/members/b0a6bbe4c9ddfbc1/attributes","cluster-id":"b7dc4198fc8444d0","publish-timeout":"7s"}
{"level":"info","ts":"2023-02-23T05:05:17.052Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-02-23T05:05:17.052Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-02-23T05:05:17.054Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-02-23T05:05:17.054Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-02-23T05:05:17.054Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-02-23T05:05:17.054Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.50.54:2379"}
{"level":"info","ts":"2023-02-23T05:05:29.909Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2023-02-23T05:05:29.909Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-927729","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.54:2380"],"advertise-client-urls":["https://192.168.50.54:2379"]}
{"level":"info","ts":"2023-02-23T05:05:29.912Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b0a6bbe4c9ddfbc1","current-leader-member-id":"b0a6bbe4c9ddfbc1"}
{"level":"info","ts":"2023-02-23T05:05:29.916Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.50.54:2380"}
{"level":"info","ts":"2023-02-23T05:05:29.917Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.50.54:2380"}
{"level":"info","ts":"2023-02-23T05:05:29.917Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-927729","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.54:2380"],"advertise-client-urls":["https://192.168.50.54:2379"]}
*
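Of the two etcd sections above, fe41c99a30b3 is the instance that was shut down during the restart (up at 05:05:15, leader at term 4, terminated at 05:05:29); the section before it comes from the replacement container, which re-elected the single member b0a6bbe4c9ddfbc1 as leader at term 5 at 05:05:39, and its "apply request took too long" / trace entries are slow-apply latency warnings rather than failures. If etcd health ever needs to be confirmed by hand, a sketch along these lines could be run over the same ssh path the harness uses, assuming etcdctl is present in the guest image and reusing the certificate paths from the "starting with client TLS" line (a dedicated client certificate may be needed instead of the serving pair):
  out/minikube-linux-amd64 ssh -p pause-927729 sudo ETCDCTL_API=3 etcdctl \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
    --cert=/var/lib/minikube/certs/etcd/server.crt \
    --key=/var/lib/minikube/certs/etcd/server.key \
    endpoint status -w table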
* ==> kernel <==
* 05:05:58 up 2 min, 0 users, load average: 3.01, 1.36, 0.52
Linux pause-927729 5.10.57 #1 SMP Thu Feb 16 22:09:52 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [4ad65d61e105] <==
* I0223 05:05:41.551937 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0223 05:05:41.551992 1 crd_finalizer.go:266] Starting CRDFinalizer
I0223 05:05:41.452237 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0223 05:05:41.552256 1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
I0223 05:05:41.553259 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0223 05:05:41.553665 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0223 05:05:41.640727 1 shared_informer.go:280] Caches are synced for node_authorizer
I0223 05:05:41.646379 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0223 05:05:41.647196 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0223 05:05:41.652306 1 shared_informer.go:280] Caches are synced for crd-autoregister
I0223 05:05:41.652651 1 cache.go:39] Caches are synced for autoregister controller
I0223 05:05:41.657243 1 apf_controller.go:366] Running API Priority and Fairness config worker
I0223 05:05:41.657376 1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
I0223 05:05:41.657479 1 shared_informer.go:280] Caches are synced for configmaps
I0223 05:05:41.665620 1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
I0223 05:05:41.673057 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0223 05:05:42.179182 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0223 05:05:42.463276 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0223 05:05:43.343033 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0223 05:05:43.374901 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0223 05:05:43.442041 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0223 05:05:43.491772 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0223 05:05:43.502480 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0223 05:05:54.645861 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0223 05:05:54.840623 1 controller.go:615] quota admission added evaluator for: endpoints
*
* ==> kube-apiserver [c8fa27ad88bd] <==
* W0223 05:05:09.074259 1 logging.go:59] [core] [Channel #4 SubChannel #5] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0223 05:05:09.320226 1 logging.go:59] [core] [Channel #3 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0223 05:05:13.976866 1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
E0223 05:05:18.851261 1 run.go:74] "command failed" err="context deadline exceeded"
*
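Here c8fa27ad88bd is the apiserver that was running before the restart: with etcd down it could only retry gRPC dials to 127.0.0.1:2379 until it exited at 05:05:18 with "context deadline exceeded", which is also the root of the connection-refused errors in the controller-manager, kube-proxy and kube-scheduler sections below. Its replacement, 4ad65d61e105, had all caches synced by 05:05:41. A quick hand check of the current apiserver can use the standard health endpoints, for example:
  kubectl --context pause-927729 get --raw='/livez'
  kubectl --context pause-927729 get --raw='/readyz?verbose'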
* ==> kube-controller-manager [32c0fdace4cd] <==
* I0223 05:05:54.639310 1 shared_informer.go:280] Caches are synced for deployment
I0223 05:05:54.640606 1 shared_informer.go:280] Caches are synced for HPA
I0223 05:05:54.641297 1 shared_informer.go:280] Caches are synced for persistent volume
I0223 05:05:54.641408 1 shared_informer.go:280] Caches are synced for certificate-csrsigning-kubelet-serving
I0223 05:05:54.641476 1 shared_informer.go:280] Caches are synced for certificate-csrsigning-kubelet-client
I0223 05:05:54.641491 1 shared_informer.go:280] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0223 05:05:54.641517 1 shared_informer.go:280] Caches are synced for certificate-csrsigning-legacy-unknown
I0223 05:05:54.642618 1 shared_informer.go:280] Caches are synced for ClusterRoleAggregator
I0223 05:05:54.646064 1 shared_informer.go:280] Caches are synced for endpoint
I0223 05:05:54.676555 1 shared_informer.go:280] Caches are synced for ReplicaSet
I0223 05:05:54.730461 1 shared_informer.go:280] Caches are synced for taint
I0223 05:05:54.730870 1 node_lifecycle_controller.go:1438] Initializing eviction metric for zone:
W0223 05:05:54.731197 1 node_lifecycle_controller.go:1053] Missing timestamp for Node pause-927729. Assuming now as a timestamp.
I0223 05:05:54.731350 1 event.go:294] "Event occurred" object="pause-927729" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-927729 event: Registered Node pause-927729 in Controller"
I0223 05:05:54.730935 1 taint_manager.go:206] "Starting NoExecuteTaintManager"
I0223 05:05:54.731414 1 taint_manager.go:211] "Sending events to api server"
I0223 05:05:54.731501 1 node_lifecycle_controller.go:1254] Controller detected that zone is now in state Normal.
I0223 05:05:54.741021 1 shared_informer.go:280] Caches are synced for daemon sets
I0223 05:05:54.745822 1 shared_informer.go:280] Caches are synced for resource quota
I0223 05:05:54.768159 1 shared_informer.go:280] Caches are synced for service account
I0223 05:05:54.772630 1 shared_informer.go:280] Caches are synced for resource quota
I0223 05:05:54.817938 1 shared_informer.go:280] Caches are synced for namespace
I0223 05:05:55.187820 1 shared_informer.go:280] Caches are synced for garbage collector
I0223 05:05:55.189587 1 shared_informer.go:280] Caches are synced for garbage collector
I0223 05:05:55.189607 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-controller-manager [6817b46b014b] <==
* I0223 05:04:58.701655 1 serving.go:348] Generated self-signed cert in-memory
I0223 05:04:59.247798 1 controllermanager.go:182] Version: v1.26.1
I0223 05:04:59.247932 1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0223 05:04:59.249410 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0223 05:04:59.249715 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0223 05:04:59.250362 1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I0223 05:04:59.250537 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
F0223 05:05:19.859873 1 controllermanager.go:228] error building controller context: failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get "https://192.168.50.54:8443/healthz": dial tcp 192.168.50.54:8443: connect: connection refused
*
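The same pattern repeats for the controller-manager: 6817b46b014b timed out waiting for the apiserver at 192.168.50.54:8443 and exited with the fatal "error building controller context" above, while 32c0fdace4cd finished syncing its informer caches by 05:05:55. Since it serves its health endpoint on 127.0.0.1:10257 (per the secure_serving line above), it can be probed from inside the VM, assuming curl is available in the guest:
  out/minikube-linux-amd64 ssh -p pause-927729 curl -sk https://127.0.0.1:10257/healthz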
* ==> kube-proxy [0665b7e62cd3] <==
* I0223 05:05:43.774665 1 node.go:163] Successfully retrieved node IP: 192.168.50.54
I0223 05:05:43.774787 1 server_others.go:109] "Detected node IP" address="192.168.50.54"
I0223 05:05:43.774843 1 server_others.go:535] "Using iptables proxy"
I0223 05:05:43.834360 1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0223 05:05:43.834658 1 server_others.go:176] "Using iptables Proxier"
I0223 05:05:43.834993 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0223 05:05:43.837666 1 server.go:655] "Version info" version="v1.26.1"
I0223 05:05:43.837931 1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0223 05:05:43.839389 1 config.go:317] "Starting service config controller"
I0223 05:05:43.842151 1 shared_informer.go:273] Waiting for caches to sync for service config
I0223 05:05:43.842587 1 config.go:444] "Starting node config controller"
I0223 05:05:43.842802 1 shared_informer.go:273] Waiting for caches to sync for node config
I0223 05:05:43.842334 1 config.go:226] "Starting endpoint slice config controller"
I0223 05:05:43.843389 1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
I0223 05:05:43.943020 1 shared_informer.go:280] Caches are synced for service config
I0223 05:05:43.943162 1 shared_informer.go:280] Caches are synced for node config
I0223 05:05:43.944065 1 shared_informer.go:280] Caches are synced for endpoint slice config
*
* ==> kube-proxy [12af3126a2cb] <==
* E0223 05:05:19.864338 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-927729": dial tcp 192.168.50.54:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.50.54:60060->192.168.50.54:8443: read: connection reset by peer
E0223 05:05:21.037339 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-927729": dial tcp 192.168.50.54:8443: connect: connection refused
E0223 05:05:23.276901 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-927729": dial tcp 192.168.50.54:8443: connect: connection refused
E0223 05:05:27.908067 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-927729": dial tcp 192.168.50.54:8443: connect: connection refused
*
* ==> kube-scheduler [7e5efcad3eed] <==
* W0223 05:05:28.006725 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.50.54:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
E0223 05:05:28.006908 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.50.54:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
W0223 05:05:28.051685 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.50.54:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
E0223 05:05:28.051727 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.50.54:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
W0223 05:05:28.162815 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.50.54:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
E0223 05:05:28.162922 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.50.54:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
W0223 05:05:28.184282 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.50.54:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
E0223 05:05:28.184492 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.50.54:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
W0223 05:05:28.190305 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.50.54:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
E0223 05:05:28.190362 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.50.54:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
W0223 05:05:28.479442 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.50.54:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
E0223 05:05:28.479679 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.50.54:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
W0223 05:05:28.675030 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.50.54:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
E0223 05:05:28.675357 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.50.54:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
W0223 05:05:29.113302 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.50.54:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
E0223 05:05:29.113374 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.50.54:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
W0223 05:05:29.377356 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.50.54:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
E0223 05:05:29.377460 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.50.54:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
W0223 05:05:29.614263 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.50.54:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
E0223 05:05:29.614361 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.50.54:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
E0223 05:05:29.874813 1 shared_informer.go:276] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0223 05:05:29.875020 1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0223 05:05:29.875041 1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
I0223 05:05:29.875178 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
E0223 05:05:29.875619 1 run.go:74] "command failed" err="finished without leader elect"
*
* ==> kube-scheduler [f1219eafb40d] <==
* I0223 05:05:38.432427 1 serving.go:348] Generated self-signed cert in-memory
W0223 05:05:41.536038 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0223 05:05:41.536208 1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0223 05:05:41.536305 1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
W0223 05:05:41.536318 1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0223 05:05:41.588649 1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.1"
I0223 05:05:41.588923 1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0223 05:05:41.590425 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0223 05:05:41.590635 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0223 05:05:41.593177 1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0223 05:05:41.590721 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0223 05:05:41.694296 1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
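The restarted scheduler's warnings about the extension-apiserver-authentication ConfigMap are usually transient during a control-plane restart and clear once RBAC caches sync, as the final "Caches are synced" line suggests. If the warning were to persist, the fix the message itself points at is a rolebinding of the reader role (the names below are the upstream message's placeholders, to be substituted):
  kubectl --context pause-927729 -n kube-system create rolebinding ROLEBINDING_NAME \
    --role=extension-apiserver-authentication-reader \
    --serviceaccount=YOUR_NS:YOUR_SA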
* ==> kubelet <==
* -- Journal begins at Thu 2023-02-23 05:03:31 UTC, ends at Thu 2023-02-23 05:05:59 UTC. --
Feb 23 05:05:37 pause-927729 kubelet[7093]: I0223 05:05:37.112265 7093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b4537c914b45096bfec5d8188475986-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-927729\" (UID: \"1b4537c914b45096bfec5d8188475986\") " pod="kube-system/kube-controller-manager-pause-927729"
Feb 23 05:05:37 pause-927729 kubelet[7093]: I0223 05:05:37.112284 7093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/be6c04c3fb0b04f0c852689a8916af55-kubeconfig\") pod \"kube-scheduler-pause-927729\" (UID: \"be6c04c3fb0b04f0c852689a8916af55\") " pod="kube-system/kube-scheduler-pause-927729"
Feb 23 05:05:37 pause-927729 kubelet[7093]: I0223 05:05:37.358376 7093 scope.go:115] "RemoveContainer" containerID="fe41c99a30b348948eb347779ee709645cd627e4c497e0c26c603db96d371aed"
Feb 23 05:05:37 pause-927729 kubelet[7093]: I0223 05:05:37.405330 7093 scope.go:115] "RemoveContainer" containerID="7e5efcad3eed3918a5a13f1d264b9c19e4319eebbb86dae541c9c04851d859a3"
Feb 23 05:05:38 pause-927729 kubelet[7093]: I0223 05:05:38.181784 7093 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="522be95290b4a6985b0a8bbfea0310765aeff79bd30689bcf0ebb3155611fa79"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.660214 7093 kubelet_node_status.go:108] "Node was previously registered" node="pause-927729"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.660542 7093 kubelet_node_status.go:73] "Successfully registered node" node="pause-927729"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.663883 7093 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.665506 7093 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.671847 7093 apiserver.go:52] "Watching apiserver"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.765771 7093 topology_manager.go:210] "Topology Admit Handler"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.766070 7093 topology_manager.go:210] "Topology Admit Handler"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.807387 7093 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.844206 7093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrsjr\" (UniqueName: \"kubernetes.io/projected/1fa88fe5-60f1-431a-86be-e51eef3d0ad2-kube-api-access-jrsjr\") pod \"coredns-787d4945fb-cglqd\" (UID: \"1fa88fe5-60f1-431a-86be-e51eef3d0ad2\") " pod="kube-system/coredns-787d4945fb-cglqd"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.844386 7093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e88a33f8-6ea2-4841-8c3f-da34239da2ff-kube-proxy\") pod \"kube-proxy-bxfpq\" (UID: \"e88a33f8-6ea2-4841-8c3f-da34239da2ff\") " pod="kube-system/kube-proxy-bxfpq"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.844544 7093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e88a33f8-6ea2-4841-8c3f-da34239da2ff-lib-modules\") pod \"kube-proxy-bxfpq\" (UID: \"e88a33f8-6ea2-4841-8c3f-da34239da2ff\") " pod="kube-system/kube-proxy-bxfpq"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.844764 7093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7c8f\" (UniqueName: \"kubernetes.io/projected/e88a33f8-6ea2-4841-8c3f-da34239da2ff-kube-api-access-m7c8f\") pod \"kube-proxy-bxfpq\" (UID: \"e88a33f8-6ea2-4841-8c3f-da34239da2ff\") " pod="kube-system/kube-proxy-bxfpq"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.844833 7093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1fa88fe5-60f1-431a-86be-e51eef3d0ad2-config-volume\") pod \"coredns-787d4945fb-cglqd\" (UID: \"1fa88fe5-60f1-431a-86be-e51eef3d0ad2\") " pod="kube-system/coredns-787d4945fb-cglqd"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.844982 7093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e88a33f8-6ea2-4841-8c3f-da34239da2ff-xtables-lock\") pod \"kube-proxy-bxfpq\" (UID: \"e88a33f8-6ea2-4841-8c3f-da34239da2ff\") " pod="kube-system/kube-proxy-bxfpq"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.845066 7093 reconciler.go:41] "Reconciler: start to sync state"
Feb 23 05:05:42 pause-927729 kubelet[7093]: I0223 05:05:42.962817 7093 request.go:690] Waited for 1.014336598s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token
Feb 23 05:05:43 pause-927729 kubelet[7093]: I0223 05:05:43.267542 7093 scope.go:115] "RemoveContainer" containerID="12af3126a2cbbe02774da3479e9bc53e57505e80ee6c6605aa4f4e5ee3b48527"
Feb 23 05:05:44 pause-927729 kubelet[7093]: I0223 05:05:44.057639 7093 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be8ca09cb627a998bb3e931cf43c35f3a5ec94ad415e951851fcb8a1cf248aa7"
Feb 23 05:05:46 pause-927729 kubelet[7093]: I0223 05:05:46.125291 7093 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Feb 23 05:05:49 pause-927729 kubelet[7093]: I0223 05:05:49.400349 7093 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
-- /stdout --
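The kubelet section above shows only routine restart housekeeping: the node re-registers, the superseded etcd, kube-scheduler and kube-proxy containers are removed, and volumes for kube-proxy-bxfpq and coredns-787d4945fb-cglqd are reconciled. The readiness waits the harness performs in the "Last Start" section further down can be approximated by hand with kubectl, using the same label selectors it logs (a rough equivalent, not the harness's own mechanism):
  for c in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    kubectl --context pause-927729 -n kube-system wait pod -l component=$c --for=condition=Ready --timeout=4m
  done
  kubectl --context pause-927729 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
  kubectl --context pause-927729 -n kube-system wait pod -l k8s-app=kube-proxy --for=condition=Ready --timeout=4m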
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-927729 -n pause-927729
helpers_test.go:261: (dbg) Run: kubectl --context pause-927729 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p pause-927729 -n pause-927729
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p pause-927729 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-927729 logs -n 25: (1.161902324s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs:
-- stdout --
*
* ==> Audit <==
* |---------|------------------------------------------------------|-------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------------------------------------------------------|-------------|---------|---------|---------------------|---------------------|
| ssh | -p auto-993481 sudo find | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | /etc/cni -type f -exec sh -c | | | | | |
| | 'echo {}; cat {}' \; | | | | | |
| ssh | -p auto-993481 sudo ip a s | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| ssh | -p auto-993481 sudo ip r s | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| ssh | -p auto-993481 sudo | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | iptables-save | | | | | |
| ssh | -p auto-993481 sudo iptables | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | -t nat -L -n -v | | | | | |
| ssh | -p auto-993481 sudo systemctl | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | status kubelet --all --full | | | | | |
| | --no-pager | | | | | |
| ssh | -p auto-993481 sudo systemctl | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | cat kubelet --no-pager | | | | | |
| ssh | -p auto-993481 sudo journalctl | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | -xeu kubelet --all --full | | | | | |
| | --no-pager | | | | | |
| ssh | -p auto-993481 sudo cat | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | /etc/kubernetes/kubelet.conf | | | | | |
| ssh | -p auto-993481 sudo cat | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | /var/lib/kubelet/config.yaml | | | | | |
| ssh | -p auto-993481 sudo systemctl | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | status docker --all --full | | | | | |
| | --no-pager | | | | | |
| ssh | -p auto-993481 sudo systemctl | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | cat docker --no-pager | | | | | |
| ssh | -p auto-993481 sudo cat | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | /etc/docker/daemon.json | | | | | |
| ssh | -p auto-993481 sudo docker | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | system info | | | | | |
| ssh | -p auto-993481 sudo systemctl | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | |
| | status cri-docker --all --full | | | | | |
| | --no-pager | | | | | |
| ssh | -p auto-993481 sudo systemctl | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | cat cri-docker --no-pager | | | | | |
| ssh | -p auto-993481 sudo cat | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | /etc/systemd/system/cri-docker.service.d/10-cni.conf | | | | | |
| ssh | -p auto-993481 sudo cat | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | /usr/lib/systemd/system/cri-docker.service | | | | | |
| ssh | -p auto-993481 sudo | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | cri-dockerd --version | | | | | |
| ssh | -p auto-993481 sudo systemctl | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | |
| | status containerd --all --full | | | | | |
| | --no-pager | | | | | |
| ssh | -p auto-993481 sudo systemctl | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | cat containerd --no-pager | | | | | |
| ssh | -p auto-993481 sudo cat | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | /lib/systemd/system/containerd.service | | | | | |
| ssh | -p auto-993481 sudo cat | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | /etc/containerd/config.toml | | | | | |
| ssh | -p auto-993481 sudo containerd | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
| | config dump | | | | | |
| ssh | -p auto-993481 sudo systemctl | auto-993481 | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | |
| | status crio --all --full | | | | | |
| | --no-pager | | | | | |
|---------|------------------------------------------------------|-------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/02/23 05:05:45
Running on machine: ubuntu-20-agent-3
Binary: Built with gc go1.20.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0223 05:05:45.987795 34516 out.go:296] Setting OutFile to fd 1 ...
I0223 05:05:45.987929 34516 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0223 05:05:45.987934 34516 out.go:309] Setting ErrFile to fd 2...
I0223 05:05:45.987940 34516 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0223 05:05:45.988095 34516 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-3952/.minikube/bin
I0223 05:05:45.988970 34516 out.go:303] Setting JSON to false
I0223 05:05:45.990459 34516 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2899,"bootTime":1677125847,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0223 05:05:45.990561 34516 start.go:135] virtualization: kvm guest
I0223 05:05:45.993030 34516 out.go:177] * [NoKubernetes-302103] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
I0223 05:05:45.994890 34516 out.go:177] - MINIKUBE_LOCATION=15909
I0223 05:05:45.994838 34516 notify.go:220] Checking for updates...
I0223 05:05:45.998354 34516 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0223 05:05:46.000795 34516 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15909-3952/kubeconfig
I0223 05:05:46.003159 34516 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3952/.minikube
I0223 05:05:46.004596 34516 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0223 05:05:46.005923 34516 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0223 05:05:46.007516 34516 config.go:182] Loaded profile config "NoKubernetes-302103": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
I0223 05:05:46.007962 34516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0223 05:05:46.008044 34516 main.go:141] libmachine: Launching plugin server for driver kvm2
I0223 05:05:46.024974 34516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43255
I0223 05:05:46.025461 34516 main.go:141] libmachine: () Calling .GetVersion
I0223 05:05:46.026264 34516 main.go:141] libmachine: Using API Version 1
I0223 05:05:46.026283 34516 main.go:141] libmachine: () Calling .SetConfigRaw
I0223 05:05:46.026705 34516 main.go:141] libmachine: () Calling .GetMachineName
I0223 05:05:46.026912 34516 main.go:141] libmachine: (NoKubernetes-302103) Calling .DriverName
I0223 05:05:46.027074 34516 start.go:1652] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
I0223 05:05:46.027100 34516 driver.go:365] Setting default libvirt URI to qemu:///system
I0223 05:05:46.027479 34516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0223 05:05:46.027514 34516 main.go:141] libmachine: Launching plugin server for driver kvm2
I0223 05:05:46.046672 34516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36759
I0223 05:05:46.047307 34516 main.go:141] libmachine: () Calling .GetVersion
I0223 05:05:46.047934 34516 main.go:141] libmachine: Using API Version 1
I0223 05:05:46.047944 34516 main.go:141] libmachine: () Calling .SetConfigRaw
I0223 05:05:46.048281 34516 main.go:141] libmachine: () Calling .GetMachineName
I0223 05:05:46.048497 34516 main.go:141] libmachine: (NoKubernetes-302103) Calling .DriverName
I0223 05:05:46.090677 34516 out.go:177] * Using the kvm2 driver based on existing profile
I0223 05:05:46.092357 34516 start.go:296] selected driver: kvm2
I0223 05:05:46.092366 34516 start.go:857] validating driver "kvm2" against &{Name:NoKubernetes-302103 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-302103 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.236 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0223 05:05:46.092476 34516 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0223 05:05:46.092836 34516 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0223 05:05:46.092917 34516 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15909-3952/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0223 05:05:46.109770 34516 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.29.0
I0223 05:05:46.110403 34516 cni.go:84] Creating CNI manager for ""
I0223 05:05:46.110416 34516 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0223 05:05:46.110421 34516 start_flags.go:319] config:
{Name:NoKubernetes-302103 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-302103 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.236 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0223 05:05:46.110527 34516 iso.go:125] acquiring lock: {Name:mkaa0353ce7f481d2e27b6d0b7fef8218290f843 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0223 05:05:46.112455 34516 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-302103
I0223 05:05:45.611581 33397 pod_ready.go:102] pod "kube-controller-manager-pause-927729" in "kube-system" namespace has status "Ready":"False"
I0223 05:05:47.595550 33397 pod_ready.go:92] pod "kube-controller-manager-pause-927729" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:47.595579 33397 pod_ready.go:81] duration metric: took 4.031042069s waiting for pod "kube-controller-manager-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:47.595591 33397 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bxfpq" in "kube-system" namespace to be "Ready" ...
I0223 05:05:47.602134 33397 pod_ready.go:92] pod "kube-proxy-bxfpq" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:47.602149 33397 pod_ready.go:81] duration metric: took 6.551675ms waiting for pod "kube-proxy-bxfpq" in "kube-system" namespace to be "Ready" ...
I0223 05:05:47.602156 33397 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:49.616137 33397 pod_ready.go:102] pod "kube-scheduler-pause-927729" in "kube-system" namespace has status "Ready":"False"
I0223 05:05:46.953923 34009 main.go:141] libmachine: (kindnet-993481) DBG | domain kindnet-993481 has defined MAC address 52:54:00:58:13:f8 in network mk-kindnet-993481
I0223 05:05:46.954405 34009 main.go:141] libmachine: (kindnet-993481) DBG | unable to find current IP address of domain kindnet-993481 in network mk-kindnet-993481
I0223 05:05:46.954445 34009 main.go:141] libmachine: (kindnet-993481) DBG | I0223 05:05:46.954368 34211 retry.go:31] will retry after 3.496329206s: waiting for machine to come up
I0223 05:05:50.452749 34009 main.go:141] libmachine: (kindnet-993481) DBG | domain kindnet-993481 has defined MAC address 52:54:00:58:13:f8 in network mk-kindnet-993481
I0223 05:05:50.453158 34009 main.go:141] libmachine: (kindnet-993481) DBG | unable to find current IP address of domain kindnet-993481 in network mk-kindnet-993481
I0223 05:05:50.453182 34009 main.go:141] libmachine: (kindnet-993481) DBG | I0223 05:05:50.453133 34211 retry.go:31] will retry after 2.74437136s: waiting for machine to come up
I0223 05:05:46.113983 34516 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime docker
W0223 05:05:46.144319 34516 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-docker-overlay2-amd64.tar.lz4 status code: 404
I0223 05:05:46.144478 34516 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3952/.minikube/profiles/NoKubernetes-302103/config.json ...
I0223 05:05:46.144694 34516 cache.go:193] Successfully downloaded all kic artifacts
I0223 05:05:46.144714 34516 start.go:364] acquiring machines lock for NoKubernetes-302103: {Name:mk80232e5ac6be7873ac7f01ae80ef9193e4980e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0223 05:05:52.115925 33397 pod_ready.go:102] pod "kube-scheduler-pause-927729" in "kube-system" namespace has status "Ready":"False"
I0223 05:05:53.115015 33397 pod_ready.go:92] pod "kube-scheduler-pause-927729" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:53.115044 33397 pod_ready.go:81] duration metric: took 5.512881096s waiting for pod "kube-scheduler-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:53.115053 33397 pod_ready.go:38] duration metric: took 9.593244507s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0223 05:05:53.115070 33397 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0223 05:05:53.127617 33397 ops.go:34] apiserver oom_adj: -16
I0223 05:05:53.127637 33397 kubeadm.go:637] restartCluster took 55.594270394s
I0223 05:05:53.127648 33397 kubeadm.go:403] StartCluster complete in 55.675868722s
I0223 05:05:53.127666 33397 settings.go:142] acquiring lock: {Name:mkdbfbf025d851ad41e5906da8e3f60b2fca69fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 05:05:53.127748 33397 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/15909-3952/kubeconfig
I0223 05:05:53.128595 33397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3952/kubeconfig: {Name:mk020a20943d07a23d370631a6a005cb93b2bfc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 05:05:53.128853 33397 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0223 05:05:53.129026 33397 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
I0223 05:05:53.129139 33397 config.go:182] Loaded profile config "pause-927729": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0223 05:05:53.130886 33397 out.go:177] * Enabled addons:
I0223 05:05:53.129204 33397 cache.go:107] acquiring lock: {Name:mk54ad0d75bf3dcb90076f913664fe0061ef6c1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0223 05:05:53.129659 33397 kapi.go:59] client config for pause-927729: &rest.Config{Host:"https://192.168.50.54:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3952/.minikube/profiles/pause-927729/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3952/.minikube/profiles/pause-927729/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3952/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0223 05:05:53.132283 33397 addons.go:492] enable addons completed in 3.264211ms: enabled=[]
I0223 05:05:53.132375 33397 cache.go:115] /home/jenkins/minikube-integration/15909-3952/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
I0223 05:05:53.132394 33397 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/15909-3952/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 3.191533ms
I0223 05:05:53.132403 33397 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/15909-3952/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
I0223 05:05:53.132412 33397 cache.go:87] Successfully saved all images to host disk.
I0223 05:05:53.132550 33397 config.go:182] Loaded profile config "pause-927729": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0223 05:05:53.132917 33397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0223 05:05:53.132947 33397 main.go:141] libmachine: Launching plugin server for driver kvm2
I0223 05:05:53.135383 33397 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-927729" context rescaled to 1 replicas
I0223 05:05:53.135409 33397 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.54 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0223 05:05:53.137318 33397 out.go:177] * Verifying Kubernetes components...
I0223 05:05:53.138747 33397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0223 05:05:53.149478 33397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43491
I0223 05:05:53.149842 33397 main.go:141] libmachine: () Calling .GetVersion
I0223 05:05:53.150347 33397 main.go:141] libmachine: Using API Version 1
I0223 05:05:53.150369 33397 main.go:141] libmachine: () Calling .SetConfigRaw
I0223 05:05:53.150686 33397 main.go:141] libmachine: () Calling .GetMachineName
I0223 05:05:53.150871 33397 main.go:141] libmachine: (pause-927729) Calling .GetState
I0223 05:05:53.152453 33397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0223 05:05:53.152478 33397 main.go:141] libmachine: Launching plugin server for driver kvm2
I0223 05:05:53.166115 33397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41183
I0223 05:05:53.166452 33397 main.go:141] libmachine: () Calling .GetVersion
I0223 05:05:53.166892 33397 main.go:141] libmachine: Using API Version 1
I0223 05:05:53.166912 33397 main.go:141] libmachine: () Calling .SetConfigRaw
I0223 05:05:53.167196 33397 main.go:141] libmachine: () Calling .GetMachineName
I0223 05:05:53.167355 33397 main.go:141] libmachine: (pause-927729) Calling .DriverName
I0223 05:05:53.167521 33397 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0223 05:05:53.167540 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHHostname
I0223 05:05:53.170343 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:05:53.170748 33397 main.go:141] libmachine: (pause-927729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:c7:cd", ip: ""} in network mk-pause-927729: {Iface:virbr2 ExpiryTime:2023-02-23 06:03:34 +0000 UTC Type:0 Mac:52:54:00:33:c7:cd Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-927729 Clientid:01:52:54:00:33:c7:cd}
I0223 05:05:53.170773 33397 main.go:141] libmachine: (pause-927729) DBG | domain pause-927729 has defined IP address 192.168.50.54 and MAC address 52:54:00:33:c7:cd in network mk-pause-927729
I0223 05:05:53.170914 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHPort
I0223 05:05:53.171059 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHKeyPath
I0223 05:05:53.171196 33397 main.go:141] libmachine: (pause-927729) Calling .GetSSHUsername
I0223 05:05:53.171298 33397 sshutil.go:53] new ssh client: &{IP:192.168.50.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-3952/.minikube/machines/pause-927729/id_rsa Username:docker}
I0223 05:05:53.254613 33397 node_ready.go:35] waiting up to 6m0s for node "pause-927729" to be "Ready" ...
I0223 05:05:53.255025 33397 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0223 05:05:53.257660 33397 node_ready.go:49] node "pause-927729" has status "Ready":"True"
I0223 05:05:53.257679 33397 node_ready.go:38] duration metric: took 3.027096ms waiting for node "pause-927729" to be "Ready" ...
I0223 05:05:53.257689 33397 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0223 05:05:53.262482 33397 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-cglqd" in "kube-system" namespace to be "Ready" ...
I0223 05:05:53.267534 33397 pod_ready.go:92] pod "coredns-787d4945fb-cglqd" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:53.267550 33397 pod_ready.go:81] duration metric: took 5.047301ms waiting for pod "coredns-787d4945fb-cglqd" in "kube-system" namespace to be "Ready" ...
I0223 05:05:53.267561 33397 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:53.307130 33397 docker.go:630] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0223 05:05:53.307151 33397 cache_images.go:84] Images are preloaded, skipping loading
I0223 05:05:53.307160 33397 cache_images.go:262] succeeded pushing to: pause-927729
I0223 05:05:53.307164 33397 cache_images.go:263] failed pushing to:
I0223 05:05:53.307190 33397 main.go:141] libmachine: Making call to close driver server
I0223 05:05:53.307210 33397 main.go:141] libmachine: (pause-927729) Calling .Close
I0223 05:05:53.307480 33397 main.go:141] libmachine: Successfully made call to close driver server
I0223 05:05:53.307503 33397 main.go:141] libmachine: Making call to close connection to plugin binary
I0223 05:05:53.307509 33397 main.go:141] libmachine: (pause-927729) DBG | Closing plugin on server side
I0223 05:05:53.307517 33397 main.go:141] libmachine: Making call to close driver server
I0223 05:05:53.307528 33397 main.go:141] libmachine: (pause-927729) Calling .Close
I0223 05:05:53.307799 33397 main.go:141] libmachine: (pause-927729) DBG | Closing plugin on server side
I0223 05:05:53.307840 33397 main.go:141] libmachine: Successfully made call to close driver server
I0223 05:05:53.307850 33397 main.go:141] libmachine: Making call to close connection to plugin binary
I0223 05:05:55.278402 33397 pod_ready.go:102] pod "etcd-pause-927729" in "kube-system" namespace has status "Ready":"False"
I0223 05:05:53.199515 34009 main.go:141] libmachine: (kindnet-993481) DBG | domain kindnet-993481 has defined MAC address 52:54:00:58:13:f8 in network mk-kindnet-993481
I0223 05:05:53.199962 34009 main.go:141] libmachine: (kindnet-993481) DBG | unable to find current IP address of domain kindnet-993481 in network mk-kindnet-993481
I0223 05:05:53.199989 34009 main.go:141] libmachine: (kindnet-993481) DBG | I0223 05:05:53.199906 34211 retry.go:31] will retry after 4.617549218s: waiting for machine to come up
I0223 05:05:55.778729 33397 pod_ready.go:92] pod "etcd-pause-927729" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:55.778753 33397 pod_ready.go:81] duration metric: took 2.511184901s waiting for pod "etcd-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:55.778764 33397 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:55.783678 33397 pod_ready.go:92] pod "kube-apiserver-pause-927729" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:55.783698 33397 pod_ready.go:81] duration metric: took 4.924356ms waiting for pod "kube-apiserver-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:55.783705 33397 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:55.920175 33397 pod_ready.go:92] pod "kube-controller-manager-pause-927729" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:55.920200 33397 pod_ready.go:81] duration metric: took 136.488237ms waiting for pod "kube-controller-manager-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:55.920212 33397 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bxfpq" in "kube-system" namespace to be "Ready" ...
I0223 05:05:56.319947 33397 pod_ready.go:92] pod "kube-proxy-bxfpq" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:56.319965 33397 pod_ready.go:81] duration metric: took 399.745807ms waiting for pod "kube-proxy-bxfpq" in "kube-system" namespace to be "Ready" ...
I0223 05:05:56.319974 33397 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:56.720085 33397 pod_ready.go:92] pod "kube-scheduler-pause-927729" in "kube-system" namespace has status "Ready":"True"
I0223 05:05:56.720110 33397 pod_ready.go:81] duration metric: took 400.129893ms waiting for pod "kube-scheduler-pause-927729" in "kube-system" namespace to be "Ready" ...
I0223 05:05:56.720120 33397 pod_ready.go:38] duration metric: took 3.462416553s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0223 05:05:56.720142 33397 api_server.go:51] waiting for apiserver process to appear ...
I0223 05:05:56.720187 33397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0223 05:05:56.737268 33397 api_server.go:71] duration metric: took 3.601831215s to wait for apiserver process to appear ...
I0223 05:05:56.737294 33397 api_server.go:87] waiting for apiserver healthz status ...
I0223 05:05:56.737305 33397 api_server.go:252] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
I0223 05:05:56.744668 33397 api_server.go:278] https://192.168.50.54:8443/healthz returned 200:
ok
I0223 05:05:56.746405 33397 api_server.go:140] control plane version: v1.26.1
I0223 05:05:56.746425 33397 api_server.go:130] duration metric: took 9.125813ms to wait for apiserver health ...
I0223 05:05:56.746435 33397 system_pods.go:43] waiting for kube-system pods to appear ...
I0223 05:05:56.923345 33397 system_pods.go:59] 6 kube-system pods found
I0223 05:05:56.923371 33397 system_pods.go:61] "coredns-787d4945fb-cglqd" [1fa88fe5-60f1-431a-86be-e51eef3d0ad2] Running
I0223 05:05:56.923377 33397 system_pods.go:61] "etcd-pause-927729" [8eeb6d86-cfd5-4044-bbea-7cda3b2805c7] Running
I0223 05:05:56.923381 33397 system_pods.go:61] "kube-apiserver-pause-927729" [0659a506-2258-4f0c-a614-1ad5e31b6dd0] Running
I0223 05:05:56.923385 33397 system_pods.go:61] "kube-controller-manager-pause-927729" [8666d42f-2670-4612-b31e-1a090f2b49f3] Running
I0223 05:05:56.923389 33397 system_pods.go:61] "kube-proxy-bxfpq" [e88a33f8-6ea2-4841-8c3f-da34239da2ff] Running
I0223 05:05:56.923393 33397 system_pods.go:61] "kube-scheduler-pause-927729" [72fb1f70-51ea-4744-bae5-73655dd83967] Running
I0223 05:05:56.923398 33397 system_pods.go:74] duration metric: took 176.958568ms to wait for pod list to return data ...
I0223 05:05:56.923405 33397 default_sa.go:34] waiting for default service account to be created ...
I0223 05:05:57.119680 33397 default_sa.go:45] found service account: "default"
I0223 05:05:57.119703 33397 default_sa.go:55] duration metric: took 196.290064ms for default service account to be created ...
I0223 05:05:57.119712 33397 system_pods.go:116] waiting for k8s-apps to be running ...
I0223 05:05:57.322536 33397 system_pods.go:86] 6 kube-system pods found
I0223 05:05:57.322560 33397 system_pods.go:89] "coredns-787d4945fb-cglqd" [1fa88fe5-60f1-431a-86be-e51eef3d0ad2] Running
I0223 05:05:57.322565 33397 system_pods.go:89] "etcd-pause-927729" [8eeb6d86-cfd5-4044-bbea-7cda3b2805c7] Running
I0223 05:05:57.322569 33397 system_pods.go:89] "kube-apiserver-pause-927729" [0659a506-2258-4f0c-a614-1ad5e31b6dd0] Running
I0223 05:05:57.322573 33397 system_pods.go:89] "kube-controller-manager-pause-927729" [8666d42f-2670-4612-b31e-1a090f2b49f3] Running
I0223 05:05:57.322578 33397 system_pods.go:89] "kube-proxy-bxfpq" [e88a33f8-6ea2-4841-8c3f-da34239da2ff] Running
I0223 05:05:57.322582 33397 system_pods.go:89] "kube-scheduler-pause-927729" [72fb1f70-51ea-4744-bae5-73655dd83967] Running
I0223 05:05:57.322588 33397 system_pods.go:126] duration metric: took 202.871272ms to wait for k8s-apps to be running ...
I0223 05:05:57.322594 33397 system_svc.go:44] waiting for kubelet service to be running ....
I0223 05:05:57.322631 33397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0223 05:05:57.339233 33397 system_svc.go:56] duration metric: took 16.630061ms WaitForService to wait for kubelet.
I0223 05:05:57.339259 33397 kubeadm.go:578] duration metric: took 4.203826911s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0223 05:05:57.339282 33397 node_conditions.go:102] verifying NodePressure condition ...
I0223 05:05:57.520481 33397 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0223 05:05:57.520502 33397 node_conditions.go:123] node cpu capacity is 2
I0223 05:05:57.520512 33397 node_conditions.go:105] duration metric: took 181.223805ms to run NodePressure ...
I0223 05:05:57.520521 33397 start.go:228] waiting for startup goroutines ...
I0223 05:05:57.520527 33397 start.go:233] waiting for cluster config update ...
I0223 05:05:57.520534 33397 start.go:242] writing updated cluster config ...
I0223 05:05:57.520820 33397 ssh_runner.go:195] Run: rm -f paused
I0223 05:05:57.572198 33397 start.go:555] kubectl: 1.26.1, cluster: 1.26.1 (minor skew: 0)
I0223 05:05:57.574315 33397 out.go:177] * Done! kubectl is now configured to use "pause-927729" cluster and "default" namespace by default
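The run above ends with kubectl pointed at the "pause-927729" cluster. As a quick manual cross-check (a sketch, assuming the kubeconfig written by this run is still in place and the profile name doubles as the context name, as minikube normally does):

  kubectl config current-context                      # should print pause-927729
  kubectl --context pause-927729 get nodes -o wide    # the node should report Ready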
*
* ==> Docker <==
* -- Journal begins at Thu 2023-02-23 05:03:31 UTC, ends at Thu 2023-02-23 05:06:00 UTC. --
Feb 23 05:05:34 pause-927729 dockerd[4634]: time="2023-02-23T05:05:34.954771696Z" level=info msg="ignoring event" container=33018ba60b473e190348ad0328a8273c998744966000205fadc023b43956693d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 23 05:05:34 pause-927729 dockerd[4640]: time="2023-02-23T05:05:34.957602475Z" level=info msg="shim disconnected" id=33018ba60b473e190348ad0328a8273c998744966000205fadc023b43956693d
Feb 23 05:05:34 pause-927729 dockerd[4640]: time="2023-02-23T05:05:34.957887139Z" level=warning msg="cleaning up after shim disconnected" id=33018ba60b473e190348ad0328a8273c998744966000205fadc023b43956693d namespace=moby
Feb 23 05:05:34 pause-927729 dockerd[4640]: time="2023-02-23T05:05:34.958028131Z" level=info msg="cleaning up dead shim"
Feb 23 05:05:34 pause-927729 dockerd[4640]: time="2023-02-23T05:05:34.973630400Z" level=warning msg="cleanup warnings time=\"2023-02-23T05:05:34Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6997 runtime=io.containerd.runc.v2\n"
Feb 23 05:05:37 pause-927729 dockerd[4640]: time="2023-02-23T05:05:37.505265034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 23 05:05:37 pause-927729 dockerd[4640]: time="2023-02-23T05:05:37.505672887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 23 05:05:37 pause-927729 dockerd[4640]: time="2023-02-23T05:05:37.505688154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 23 05:05:37 pause-927729 dockerd[4640]: time="2023-02-23T05:05:37.506711952Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/67052abc75baa2b6d06b8426f25ed9c733953b314fa336073605b2fca79c327a pid=7248 runtime=io.containerd.runc.v2
Feb 23 05:05:37 pause-927729 dockerd[4640]: time="2023-02-23T05:05:37.531424124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 23 05:05:37 pause-927729 dockerd[4640]: time="2023-02-23T05:05:37.531547968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 23 05:05:37 pause-927729 dockerd[4640]: time="2023-02-23T05:05:37.531564985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 23 05:05:37 pause-927729 dockerd[4640]: time="2023-02-23T05:05:37.531962274Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f1219eafb40d327312248676dd4191409b0daa7c50d561878b2ce7fa5a9df8c8 pid=7270 runtime=io.containerd.runc.v2
Feb 23 05:05:43 pause-927729 dockerd[4640]: time="2023-02-23T05:05:43.108977194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 23 05:05:43 pause-927729 dockerd[4640]: time="2023-02-23T05:05:43.109506575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 23 05:05:43 pause-927729 dockerd[4640]: time="2023-02-23T05:05:43.110034523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 23 05:05:43 pause-927729 dockerd[4640]: time="2023-02-23T05:05:43.119197199Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/be8ca09cb627a998bb3e931cf43c35f3a5ec94ad415e951851fcb8a1cf248aa7 pid=7433 runtime=io.containerd.runc.v2
Feb 23 05:05:43 pause-927729 dockerd[4640]: time="2023-02-23T05:05:43.481795486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 23 05:05:43 pause-927729 dockerd[4640]: time="2023-02-23T05:05:43.486274467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 23 05:05:43 pause-927729 dockerd[4640]: time="2023-02-23T05:05:43.486297328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 23 05:05:43 pause-927729 dockerd[4640]: time="2023-02-23T05:05:43.486601077Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/0665b7e62cd32d10fe5978db9ba2820375362e4d2588dbe4429175d881f65e65 pid=7494 runtime=io.containerd.runc.v2
Feb 23 05:05:44 pause-927729 dockerd[4640]: time="2023-02-23T05:05:44.210534837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 23 05:05:44 pause-927729 dockerd[4640]: time="2023-02-23T05:05:44.211065295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 23 05:05:44 pause-927729 dockerd[4640]: time="2023-02-23T05:05:44.211321375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 23 05:05:44 pause-927729 dockerd[4640]: time="2023-02-23T05:05:44.211862036Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/cea3c27a3a1fb90077cc1d176f586da18fe5d8552149e3d92bee8ffe8a99009f pid=7646 runtime=io.containerd.runc.v2
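The Docker journal excerpt above covers the containerd shim churn around the restart; the full unit log can be re-pulled from the VM directly (a sketch, assuming the pause-927729 VM is still up):

  minikube -p pause-927729 ssh -- "sudo journalctl -u docker --no-pager | tail -n 50"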
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
cea3c27a3a1fb 5185b96f0becf 16 seconds ago Running coredns 2 be8ca09cb627a
0665b7e62cd32 46a6bb3c77ce0 17 seconds ago Running kube-proxy 3 e241360610a43
f1219eafb40d3 655493523f607 23 seconds ago Running kube-scheduler 3 13a7cf699ccb2
67052abc75baa fce326961ae2d 23 seconds ago Running etcd 3 63fd8f1d363bb
32c0fdace4cd4 e9c08e11b07f6 28 seconds ago Running kube-controller-manager 2 a34774c9bd4fd
4ad65d61e105d deb04688c4a35 29 seconds ago Running kube-apiserver 2 f3859b77311b0
fe41c99a30b34 fce326961ae2d 45 seconds ago Exited etcd 2 148e1d336144f
12af3126a2cbb 46a6bb3c77ce0 48 seconds ago Exited kube-proxy 2 f863d7090eaa5
7e5efcad3eed3 655493523f607 57 seconds ago Exited kube-scheduler 2 bb6c25f55c667
33018ba60b473 5185b96f0becf About a minute ago Exited coredns 1 522be95290b4a
c8fa27ad88bdb deb04688c4a35 About a minute ago Exited kube-apiserver 1 bc49644ebb60f
6817b46b014b0 e9c08e11b07f6 About a minute ago Exited kube-controller-manager 1 4991904c98c51
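A similar container inventory, including the exited attempts listed above, can be produced inside the VM (a sketch, assuming the VM is still running):

  minikube -p pause-927729 ssh -- "sudo docker ps -a --format 'table {{.ID}}\t{{.Image}}\t{{.Status}}\t{{.Names}}'"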
*
* ==> coredns [33018ba60b47] <==
* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] 127.0.0.1:51103 - 40969 "HINFO IN 1009213339599387188.5718121639688052542. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030904561s
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:52204->10.96.0.1:443: read: connection reset by peer
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
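The exited coredns container (33018ba60b473 in the status table) logged API connection failures before it was terminated. Its output can be retrieved again either straight from Docker or, once the apiserver answers, via the pod's previous-container log (a sketch; --previous shows whichever container last exited for that pod):

  minikube -p pause-927729 ssh -- "sudo docker logs 33018ba60b473 2>&1 | tail -n 20"
  kubectl --context pause-927729 -n kube-system logs coredns-787d4945fb-cglqd --previous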
*
* ==> coredns [cea3c27a3a1f] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] 127.0.0.1:50895 - 335 "HINFO IN 2992301098082177045.4030291826637785790. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.066325133s
*
* ==> describe nodes <==
* Name: pause-927729
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-927729
kubernetes.io/os=linux
minikube.k8s.io/commit=66d56dc3ac28a702789778ac47e90f12526a0321
minikube.k8s.io/name=pause-927729
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_02_23T05_04_12_0700
minikube.k8s.io/version=v1.29.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 23 Feb 2023 05:04:08 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-927729
AcquireTime: <unset>
RenewTime: Thu, 23 Feb 2023 05:05:51 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 23 Feb 2023 05:05:41 +0000 Thu, 23 Feb 2023 05:04:06 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 23 Feb 2023 05:05:41 +0000 Thu, 23 Feb 2023 05:04:06 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 23 Feb 2023 05:05:41 +0000 Thu, 23 Feb 2023 05:04:06 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 23 Feb 2023 05:05:41 +0000 Thu, 23 Feb 2023 05:04:15 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.50.54
Hostname: pause-927729
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017420Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017420Ki
pods: 110
System Info:
Machine ID: 0c8ab0825265425d8f78ffb269f0d4f2
System UUID: 0c8ab082-5265-425d-8f78-ffb269f0d4f2
Boot ID: b462c4ff-59d8-4639-a38e-5243334f9339
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.23
Kubelet Version: v1.26.1
Kube-Proxy Version: v1.26.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-787d4945fb-cglqd 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 96s
kube-system etcd-pause-927729 100m (5%) 0 (0%) 100Mi (5%) 0 (0%) 107s
kube-system kube-apiserver-pause-927729 250m (12%) 0 (0%) 0 (0%) 0 (0%) 107s
kube-system kube-controller-manager-pause-927729 200m (10%) 0 (0%) 0 (0%) 0 (0%) 107s
kube-system kube-proxy-bxfpq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 96s
kube-system kube-scheduler-pause-927729 100m (5%) 0 (0%) 0 (0%) 0 (0%) 107s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 170Mi (8%) 170Mi (8%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 93s kube-proxy
Normal Starting 16s kube-proxy
Normal Starting 2m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 2m (x4 over 2m) kubelet Node pause-927729 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m (x4 over 2m) kubelet Node pause-927729 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m (x3 over 2m) kubelet Node pause-927729 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m kubelet Updated Node Allocatable limit across pods
Normal Starting 108s kubelet Starting kubelet.
Normal NodeHasSufficientPID 107s kubelet Node pause-927729 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 107s kubelet Node pause-927729 status is now: NodeHasNoDiskPressure
Normal NodeAllocatableEnforced 107s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 107s kubelet Node pause-927729 status is now: NodeHasSufficientMemory
Normal NodeReady 105s kubelet Node pause-927729 status is now: NodeReady
Normal RegisteredNode 97s node-controller Node pause-927729 event: Registered Node pause-927729 in Controller
Normal Starting 24s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 24s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 23s (x8 over 24s) kubelet Node pause-927729 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 23s (x8 over 24s) kubelet Node pause-927729 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 23s (x7 over 24s) kubelet Node pause-927729 status is now: NodeHasSufficientPID
Normal RegisteredNode 6s node-controller Node pause-927729 event: Registered Node pause-927729 in Controller
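The node description above is standard kubectl describe output and can be regenerated against the live cluster, for example:

  kubectl --context pause-927729 describe node pause-927729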
*
* ==> dmesg <==
* [ +1.968178] kauditd_printk_skb: 14 callbacks suppressed
[ +0.350592] systemd-fstab-generator[896]: Ignoring "noauto" for root device
[ +0.691114] systemd-fstab-generator[933]: Ignoring "noauto" for root device
[ +0.096727] systemd-fstab-generator[944]: Ignoring "noauto" for root device
[ +0.115312] systemd-fstab-generator[957]: Ignoring "noauto" for root device
[ +1.495616] systemd-fstab-generator[1105]: Ignoring "noauto" for root device
[ +0.106416] systemd-fstab-generator[1116]: Ignoring "noauto" for root device
[ +0.123044] systemd-fstab-generator[1127]: Ignoring "noauto" for root device
[ +0.107960] systemd-fstab-generator[1138]: Ignoring "noauto" for root device
[ +4.338395] systemd-fstab-generator[1390]: Ignoring "noauto" for root device
[Feb23 05:04] kauditd_printk_skb: 68 callbacks suppressed
[ +11.840835] systemd-fstab-generator[2307]: Ignoring "noauto" for root device
[ +14.285510] kauditd_printk_skb: 8 callbacks suppressed
[ +11.120445] kauditd_printk_skb: 26 callbacks suppressed
[ +7.294560] systemd-fstab-generator[3852]: Ignoring "noauto" for root device
[ +0.378753] systemd-fstab-generator[3883]: Ignoring "noauto" for root device
[ +0.181704] systemd-fstab-generator[3894]: Ignoring "noauto" for root device
[ +0.225128] systemd-fstab-generator[3907]: Ignoring "noauto" for root device
[ +8.362713] systemd-fstab-generator[5007]: Ignoring "noauto" for root device
[ +0.271036] systemd-fstab-generator[5025]: Ignoring "noauto" for root device
[ +0.248756] systemd-fstab-generator[5076]: Ignoring "noauto" for root device
[ +0.251288] systemd-fstab-generator[5123]: Ignoring "noauto" for root device
[ +2.178342] kauditd_printk_skb: 34 callbacks suppressed
[Feb23 05:05] kauditd_printk_skb: 3 callbacks suppressed
[ +23.233390] systemd-fstab-generator[7087]: Ignoring "noauto" for root device
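The dmesg excerpt above is mostly systemd-fstab-generator and audit chatter from the service restarts; a fuller view can be pulled from the VM if needed (a sketch):

  minikube -p pause-927729 ssh -- "sudo dmesg | tail -n 40"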
*
* ==> etcd [67052abc75ba] <==
* {"level":"info","ts":"2023-02-23T05:05:38.102Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b7dc4198fc8444d0","local-member-id":"b0a6bbe4c9ddfbc1","cluster-version":"3.5"}
{"level":"info","ts":"2023-02-23T05:05:38.102Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-02-23T05:05:38.104Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.50.54:2380"}
{"level":"info","ts":"2023-02-23T05:05:38.104Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.50.54:2380"}
{"level":"info","ts":"2023-02-23T05:05:39.675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 is starting a new election at term 4"}
{"level":"info","ts":"2023-02-23T05:05:39.675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became pre-candidate at term 4"}
{"level":"info","ts":"2023-02-23T05:05:39.675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 received MsgPreVoteResp from b0a6bbe4c9ddfbc1 at term 4"}
{"level":"info","ts":"2023-02-23T05:05:39.675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became candidate at term 5"}
{"level":"info","ts":"2023-02-23T05:05:39.675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 received MsgVoteResp from b0a6bbe4c9ddfbc1 at term 5"}
{"level":"info","ts":"2023-02-23T05:05:39.675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became leader at term 5"}
{"level":"info","ts":"2023-02-23T05:05:39.675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b0a6bbe4c9ddfbc1 elected leader b0a6bbe4c9ddfbc1 at term 5"}
{"level":"info","ts":"2023-02-23T05:05:39.678Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-02-23T05:05:39.678Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"b0a6bbe4c9ddfbc1","local-member-attributes":"{Name:pause-927729 ClientURLs:[https://192.168.50.54:2379]}","request-path":"/0/members/b0a6bbe4c9ddfbc1/attributes","cluster-id":"b7dc4198fc8444d0","publish-timeout":"7s"}
{"level":"info","ts":"2023-02-23T05:05:39.679Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-02-23T05:05:39.680Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.50.54:2379"}
{"level":"info","ts":"2023-02-23T05:05:39.681Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-02-23T05:05:39.682Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-02-23T05:05:39.682Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-02-23T05:05:45.596Z","caller":"traceutil/trace.go:171","msg":"trace[2124695069] linearizableReadLoop","detail":"{readStateIndex:475; appliedIndex:474; }","duration":"231.544809ms","start":"2023-02-23T05:05:45.364Z","end":"2023-02-23T05:05:45.596Z","steps":["trace[2124695069] 'read index received' (duration: 230.678871ms)","trace[2124695069] 'applied index is now lower than readState.Index' (duration: 864.72µs)"],"step_count":2}
{"level":"info","ts":"2023-02-23T05:05:45.596Z","caller":"traceutil/trace.go:171","msg":"trace[1413819236] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"334.377116ms","start":"2023-02-23T05:05:45.262Z","end":"2023-02-23T05:05:45.596Z","steps":["trace[1413819236] 'process raft request' (duration: 333.088717ms)"],"step_count":1}
{"level":"warn","ts":"2023-02-23T05:05:45.597Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-23T05:05:45.262Z","time spent":"335.149192ms","remote":"127.0.0.1:39256","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":676,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-scheduler-pause-927729.17465bb370527039\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-scheduler-pause-927729.17465bb370527039\" value_size:585 lease:8917556607150529160 >> failure:<>"}
{"level":"warn","ts":"2023-02-23T05:05:45.596Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"232.268878ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-bxfpq\" ","response":"range_response_count:1 size:4712"}
{"level":"info","ts":"2023-02-23T05:05:45.598Z","caller":"traceutil/trace.go:171","msg":"trace[335847640] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-bxfpq; range_end:; response_count:1; response_revision:437; }","duration":"233.839425ms","start":"2023-02-23T05:05:45.364Z","end":"2023-02-23T05:05:45.598Z","steps":["trace[335847640] 'agreement among raft nodes before linearized reading' (duration: 232.142656ms)"],"step_count":1}
{"level":"info","ts":"2023-02-23T05:05:45.894Z","caller":"traceutil/trace.go:171","msg":"trace[454170152] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"285.179489ms","start":"2023-02-23T05:05:45.609Z","end":"2023-02-23T05:05:45.894Z","steps":["trace[454170152] 'process raft request' (duration: 284.94493ms)"],"step_count":1}
{"level":"info","ts":"2023-02-23T05:05:45.894Z","caller":"traceutil/trace.go:171","msg":"trace[1506822538] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"287.32678ms","start":"2023-02-23T05:05:45.607Z","end":"2023-02-23T05:05:45.894Z","steps":["trace[1506822538] 'process raft request' (duration: 198.332979ms)","trace[1506822538] 'compare' (duration: 87.7327ms)"],"step_count":2}
*
* ==> etcd [fe41c99a30b3] <==
* {"level":"info","ts":"2023-02-23T05:05:15.663Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-02-23T05:05:15.663Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b0a6bbe4c9ddfbc1","initial-advertise-peer-urls":["https://192.168.50.54:2380"],"listen-peer-urls":["https://192.168.50.54:2380"],"advertise-client-urls":["https://192.168.50.54:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.54:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-02-23T05:05:15.663Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-02-23T05:05:15.664Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.50.54:2380"}
{"level":"info","ts":"2023-02-23T05:05:15.664Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.50.54:2380"}
{"level":"info","ts":"2023-02-23T05:05:17.045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 is starting a new election at term 3"}
{"level":"info","ts":"2023-02-23T05:05:17.046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became pre-candidate at term 3"}
{"level":"info","ts":"2023-02-23T05:05:17.046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 received MsgPreVoteResp from b0a6bbe4c9ddfbc1 at term 3"}
{"level":"info","ts":"2023-02-23T05:05:17.046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became candidate at term 4"}
{"level":"info","ts":"2023-02-23T05:05:17.046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 received MsgVoteResp from b0a6bbe4c9ddfbc1 at term 4"}
{"level":"info","ts":"2023-02-23T05:05:17.046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became leader at term 4"}
{"level":"info","ts":"2023-02-23T05:05:17.046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b0a6bbe4c9ddfbc1 elected leader b0a6bbe4c9ddfbc1 at term 4"}
{"level":"info","ts":"2023-02-23T05:05:17.052Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"b0a6bbe4c9ddfbc1","local-member-attributes":"{Name:pause-927729 ClientURLs:[https://192.168.50.54:2379]}","request-path":"/0/members/b0a6bbe4c9ddfbc1/attributes","cluster-id":"b7dc4198fc8444d0","publish-timeout":"7s"}
{"level":"info","ts":"2023-02-23T05:05:17.052Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-02-23T05:05:17.052Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-02-23T05:05:17.054Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-02-23T05:05:17.054Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-02-23T05:05:17.054Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-02-23T05:05:17.054Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.50.54:2379"}
{"level":"info","ts":"2023-02-23T05:05:29.909Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2023-02-23T05:05:29.909Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-927729","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.54:2380"],"advertise-client-urls":["https://192.168.50.54:2379"]}
{"level":"info","ts":"2023-02-23T05:05:29.912Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b0a6bbe4c9ddfbc1","current-leader-member-id":"b0a6bbe4c9ddfbc1"}
{"level":"info","ts":"2023-02-23T05:05:29.916Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.50.54:2380"}
{"level":"info","ts":"2023-02-23T05:05:29.917Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.50.54:2380"}
{"level":"info","ts":"2023-02-23T05:05:29.917Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-927729","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.54:2380"],"advertise-client-urls":["https://192.168.50.54:2379"]}
*
* ==> kernel <==
* 05:06:00 up 2 min, 0 users, load average: 3.01, 1.36, 0.52
Linux pause-927729 5.10.57 #1 SMP Thu Feb 16 22:09:52 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [4ad65d61e105] <==
* I0223 05:05:41.551937 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0223 05:05:41.551992 1 crd_finalizer.go:266] Starting CRDFinalizer
I0223 05:05:41.452237 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0223 05:05:41.552256 1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
I0223 05:05:41.553259 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0223 05:05:41.553665 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0223 05:05:41.640727 1 shared_informer.go:280] Caches are synced for node_authorizer
I0223 05:05:41.646379 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0223 05:05:41.647196 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0223 05:05:41.652306 1 shared_informer.go:280] Caches are synced for crd-autoregister
I0223 05:05:41.652651 1 cache.go:39] Caches are synced for autoregister controller
I0223 05:05:41.657243 1 apf_controller.go:366] Running API Priority and Fairness config worker
I0223 05:05:41.657376 1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
I0223 05:05:41.657479 1 shared_informer.go:280] Caches are synced for configmaps
I0223 05:05:41.665620 1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
I0223 05:05:41.673057 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0223 05:05:42.179182 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0223 05:05:42.463276 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0223 05:05:43.343033 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0223 05:05:43.374901 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0223 05:05:43.442041 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0223 05:05:43.491772 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0223 05:05:43.502480 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0223 05:05:54.645861 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0223 05:05:54.840623 1 controller.go:615] quota admission added evaluator for: endpoints
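The replacement apiserver is serving on https://192.168.50.54:8443 (the address probed earlier in the run). Its health endpoints can be checked directly, assuming anonymous access to /healthz and /readyz is still enabled (the kubeadm default):

  curl -k https://192.168.50.54:8443/healthz ; echo            # expect: ok
  curl -k "https://192.168.50.54:8443/readyz?verbose" | tail -n 5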
*
* ==> kube-apiserver [c8fa27ad88bd] <==
* W0223 05:05:09.074259 1 logging.go:59] [core] [Channel #4 SubChannel #5] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0223 05:05:09.320226 1 logging.go:59] [core] [Channel #3 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0223 05:05:13.976866 1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
E0223 05:05:18.851261 1 run.go:74] "command failed" err="context deadline exceeded"
*
* ==> kube-controller-manager [32c0fdace4cd] <==
* I0223 05:05:54.639310 1 shared_informer.go:280] Caches are synced for deployment
I0223 05:05:54.640606 1 shared_informer.go:280] Caches are synced for HPA
I0223 05:05:54.641297 1 shared_informer.go:280] Caches are synced for persistent volume
I0223 05:05:54.641408 1 shared_informer.go:280] Caches are synced for certificate-csrsigning-kubelet-serving
I0223 05:05:54.641476 1 shared_informer.go:280] Caches are synced for certificate-csrsigning-kubelet-client
I0223 05:05:54.641491 1 shared_informer.go:280] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0223 05:05:54.641517 1 shared_informer.go:280] Caches are synced for certificate-csrsigning-legacy-unknown
I0223 05:05:54.642618 1 shared_informer.go:280] Caches are synced for ClusterRoleAggregator
I0223 05:05:54.646064 1 shared_informer.go:280] Caches are synced for endpoint
I0223 05:05:54.676555 1 shared_informer.go:280] Caches are synced for ReplicaSet
I0223 05:05:54.730461 1 shared_informer.go:280] Caches are synced for taint
I0223 05:05:54.730870 1 node_lifecycle_controller.go:1438] Initializing eviction metric for zone:
W0223 05:05:54.731197 1 node_lifecycle_controller.go:1053] Missing timestamp for Node pause-927729. Assuming now as a timestamp.
I0223 05:05:54.731350 1 event.go:294] "Event occurred" object="pause-927729" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-927729 event: Registered Node pause-927729 in Controller"
I0223 05:05:54.730935 1 taint_manager.go:206] "Starting NoExecuteTaintManager"
I0223 05:05:54.731414 1 taint_manager.go:211] "Sending events to api server"
I0223 05:05:54.731501 1 node_lifecycle_controller.go:1254] Controller detected that zone is now in state Normal.
I0223 05:05:54.741021 1 shared_informer.go:280] Caches are synced for daemon sets
I0223 05:05:54.745822 1 shared_informer.go:280] Caches are synced for resource quota
I0223 05:05:54.768159 1 shared_informer.go:280] Caches are synced for service account
I0223 05:05:54.772630 1 shared_informer.go:280] Caches are synced for resource quota
I0223 05:05:54.817938 1 shared_informer.go:280] Caches are synced for namespace
I0223 05:05:55.187820 1 shared_informer.go:280] Caches are synced for garbage collector
I0223 05:05:55.189587 1 shared_informer.go:280] Caches are synced for garbage collector
I0223 05:05:55.189607 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-controller-manager [6817b46b014b] <==
* I0223 05:04:58.701655 1 serving.go:348] Generated self-signed cert in-memory
I0223 05:04:59.247798 1 controllermanager.go:182] Version: v1.26.1
I0223 05:04:59.247932 1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0223 05:04:59.249410 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0223 05:04:59.249715 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0223 05:04:59.250362 1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I0223 05:04:59.250537 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
F0223 05:05:19.859873 1 controllermanager.go:228] error building controller context: failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get "https://192.168.50.54:8443/healthz": dial tcp 192.168.50.54:8443: connect: connection refused
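This is the controller-manager instance from before the restart; it exited because the apiserver on 192.168.50.54:8443 was unreachable at that moment. Whether that is still the case can be probed from inside the VM (a sketch; the controller-manager's own health endpoint on 127.0.0.1:10257 is normally reachable without credentials):

  minikube -p pause-927729 ssh -- "curl -sk https://192.168.50.54:8443/healthz; echo"
  minikube -p pause-927729 ssh -- "curl -sk https://localhost:10257/healthz; echo"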
*
* ==> kube-proxy [0665b7e62cd3] <==
* I0223 05:05:43.774665 1 node.go:163] Successfully retrieved node IP: 192.168.50.54
I0223 05:05:43.774787 1 server_others.go:109] "Detected node IP" address="192.168.50.54"
I0223 05:05:43.774843 1 server_others.go:535] "Using iptables proxy"
I0223 05:05:43.834360 1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0223 05:05:43.834658 1 server_others.go:176] "Using iptables Proxier"
I0223 05:05:43.834993 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0223 05:05:43.837666 1 server.go:655] "Version info" version="v1.26.1"
I0223 05:05:43.837931 1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0223 05:05:43.839389 1 config.go:317] "Starting service config controller"
I0223 05:05:43.842151 1 shared_informer.go:273] Waiting for caches to sync for service config
I0223 05:05:43.842587 1 config.go:444] "Starting node config controller"
I0223 05:05:43.842802 1 shared_informer.go:273] Waiting for caches to sync for node config
I0223 05:05:43.842334 1 config.go:226] "Starting endpoint slice config controller"
I0223 05:05:43.843389 1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
I0223 05:05:43.943020 1 shared_informer.go:280] Caches are synced for service config
I0223 05:05:43.943162 1 shared_informer.go:280] Caches are synced for node config
I0223 05:05:43.944065 1 shared_informer.go:280] Caches are synced for endpoint slice config
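The new kube-proxy is running in iptables mode ("Using iptables Proxier" above), so the service rules it programs can be inspected on the node; a sketch, assuming the KUBE-SERVICES chain name used by the iptables proxier:

  minikube -p pause-927729 ssh -- "sudo iptables -t nat -L KUBE-SERVICES -n | head -n 15"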
*
* ==> kube-proxy [12af3126a2cb] <==
* E0223 05:05:19.864338 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-927729": dial tcp 192.168.50.54:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.50.54:60060->192.168.50.54:8443: read: connection reset by peer
E0223 05:05:21.037339 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-927729": dial tcp 192.168.50.54:8443: connect: connection refused
E0223 05:05:23.276901 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-927729": dial tcp 192.168.50.54:8443: connect: connection refused
E0223 05:05:27.908067 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-927729": dial tcp 192.168.50.54:8443: connect: connection refused
*
* ==> kube-scheduler [7e5efcad3eed] <==
* W0223 05:05:28.006725 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.50.54:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
E0223 05:05:28.006908 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.50.54:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
W0223 05:05:28.051685 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.50.54:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
E0223 05:05:28.051727 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.50.54:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
W0223 05:05:28.162815 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.50.54:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
E0223 05:05:28.162922 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.50.54:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
W0223 05:05:28.184282 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.50.54:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
E0223 05:05:28.184492 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.50.54:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
W0223 05:05:28.190305 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.50.54:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
E0223 05:05:28.190362 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.50.54:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
W0223 05:05:28.479442 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.50.54:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
E0223 05:05:28.479679 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.50.54:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
W0223 05:05:28.675030 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.50.54:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
E0223 05:05:28.675357 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.50.54:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
W0223 05:05:29.113302 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.50.54:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
E0223 05:05:29.113374 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.50.54:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
W0223 05:05:29.377356 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.50.54:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
E0223 05:05:29.377460 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.50.54:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
W0223 05:05:29.614263 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.50.54:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
E0223 05:05:29.614361 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.50.54:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
E0223 05:05:29.874813 1 shared_informer.go:276] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0223 05:05:29.875020 1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0223 05:05:29.875041 1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
I0223 05:05:29.875178 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
E0223 05:05:29.875619 1 run.go:74] "command failed" err="finished without leader elect"
*
* ==> kube-scheduler [f1219eafb40d] <==
* I0223 05:05:38.432427 1 serving.go:348] Generated self-signed cert in-memory
W0223 05:05:41.536038 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0223 05:05:41.536208 1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0223 05:05:41.536305 1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
W0223 05:05:41.536318 1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0223 05:05:41.588649 1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.1"
I0223 05:05:41.588923 1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0223 05:05:41.590425 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0223 05:05:41.590635 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0223 05:05:41.593177 1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0223 05:05:41.590721 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0223 05:05:41.694296 1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Journal begins at Thu 2023-02-23 05:03:31 UTC, ends at Thu 2023-02-23 05:06:00 UTC. --
Feb 23 05:05:37 pause-927729 kubelet[7093]: I0223 05:05:37.112265 7093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b4537c914b45096bfec5d8188475986-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-927729\" (UID: \"1b4537c914b45096bfec5d8188475986\") " pod="kube-system/kube-controller-manager-pause-927729"
Feb 23 05:05:37 pause-927729 kubelet[7093]: I0223 05:05:37.112284 7093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/be6c04c3fb0b04f0c852689a8916af55-kubeconfig\") pod \"kube-scheduler-pause-927729\" (UID: \"be6c04c3fb0b04f0c852689a8916af55\") " pod="kube-system/kube-scheduler-pause-927729"
Feb 23 05:05:37 pause-927729 kubelet[7093]: I0223 05:05:37.358376 7093 scope.go:115] "RemoveContainer" containerID="fe41c99a30b348948eb347779ee709645cd627e4c497e0c26c603db96d371aed"
Feb 23 05:05:37 pause-927729 kubelet[7093]: I0223 05:05:37.405330 7093 scope.go:115] "RemoveContainer" containerID="7e5efcad3eed3918a5a13f1d264b9c19e4319eebbb86dae541c9c04851d859a3"
Feb 23 05:05:38 pause-927729 kubelet[7093]: I0223 05:05:38.181784 7093 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="522be95290b4a6985b0a8bbfea0310765aeff79bd30689bcf0ebb3155611fa79"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.660214 7093 kubelet_node_status.go:108] "Node was previously registered" node="pause-927729"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.660542 7093 kubelet_node_status.go:73] "Successfully registered node" node="pause-927729"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.663883 7093 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.665506 7093 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.671847 7093 apiserver.go:52] "Watching apiserver"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.765771 7093 topology_manager.go:210] "Topology Admit Handler"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.766070 7093 topology_manager.go:210] "Topology Admit Handler"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.807387 7093 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.844206 7093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrsjr\" (UniqueName: \"kubernetes.io/projected/1fa88fe5-60f1-431a-86be-e51eef3d0ad2-kube-api-access-jrsjr\") pod \"coredns-787d4945fb-cglqd\" (UID: \"1fa88fe5-60f1-431a-86be-e51eef3d0ad2\") " pod="kube-system/coredns-787d4945fb-cglqd"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.844386 7093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e88a33f8-6ea2-4841-8c3f-da34239da2ff-kube-proxy\") pod \"kube-proxy-bxfpq\" (UID: \"e88a33f8-6ea2-4841-8c3f-da34239da2ff\") " pod="kube-system/kube-proxy-bxfpq"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.844544 7093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e88a33f8-6ea2-4841-8c3f-da34239da2ff-lib-modules\") pod \"kube-proxy-bxfpq\" (UID: \"e88a33f8-6ea2-4841-8c3f-da34239da2ff\") " pod="kube-system/kube-proxy-bxfpq"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.844764 7093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7c8f\" (UniqueName: \"kubernetes.io/projected/e88a33f8-6ea2-4841-8c3f-da34239da2ff-kube-api-access-m7c8f\") pod \"kube-proxy-bxfpq\" (UID: \"e88a33f8-6ea2-4841-8c3f-da34239da2ff\") " pod="kube-system/kube-proxy-bxfpq"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.844833 7093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1fa88fe5-60f1-431a-86be-e51eef3d0ad2-config-volume\") pod \"coredns-787d4945fb-cglqd\" (UID: \"1fa88fe5-60f1-431a-86be-e51eef3d0ad2\") " pod="kube-system/coredns-787d4945fb-cglqd"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.844982 7093 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e88a33f8-6ea2-4841-8c3f-da34239da2ff-xtables-lock\") pod \"kube-proxy-bxfpq\" (UID: \"e88a33f8-6ea2-4841-8c3f-da34239da2ff\") " pod="kube-system/kube-proxy-bxfpq"
Feb 23 05:05:41 pause-927729 kubelet[7093]: I0223 05:05:41.845066 7093 reconciler.go:41] "Reconciler: start to sync state"
Feb 23 05:05:42 pause-927729 kubelet[7093]: I0223 05:05:42.962817 7093 request.go:690] Waited for 1.014336598s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token
Feb 23 05:05:43 pause-927729 kubelet[7093]: I0223 05:05:43.267542 7093 scope.go:115] "RemoveContainer" containerID="12af3126a2cbbe02774da3479e9bc53e57505e80ee6c6605aa4f4e5ee3b48527"
Feb 23 05:05:44 pause-927729 kubelet[7093]: I0223 05:05:44.057639 7093 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be8ca09cb627a998bb3e931cf43c35f3a5ec94ad415e951851fcb8a1cf248aa7"
Feb 23 05:05:46 pause-927729 kubelet[7093]: I0223 05:05:46.125291 7093 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Feb 23 05:05:49 pause-927729 kubelet[7093]: I0223 05:05:49.400349 7093 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-927729 -n pause-927729
helpers_test.go:261: (dbg) Run: kubectl --context pause-927729 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (81.53s)
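
The post-mortem helpers above shell out to "out/minikube-linux-amd64 status" and to "kubectl get po ... --field-selector=status.phase!=Running" to capture cluster state after the failure. The following is a minimal Go sketch, not taken from the minikube test helpers themselves, of how that second check could be invoked and its output captured; the function name nonRunningPods and the hard-coded context name are illustrative assumptions.

package main

import (
	"fmt"
	"os/exec"
)

// nonRunningPods runs the same kind of query as the post-mortem helper:
//   kubectl --context <ctx> get po -A \
//     -o=jsonpath={.items[*].metadata.name} \
//     --field-selector=status.phase!=Running
// and returns the combined stdout/stderr of the command.
func nonRunningPods(kubeContext string) (string, error) {
	cmd := exec.Command("kubectl",
		"--context", kubeContext,
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running")
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	// Context name is an example; in the run above it was the profile name.
	out, err := nonRunningPods("pause-927729")
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Println("non-Running pods:", out)
}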