=== RUN TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run: out/minikube-linux-amd64 start -p pause-939189 --alsologtostderr -v=1 --driver=kvm2
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-939189 --alsologtostderr -v=1 --driver=kvm2 : (1m31.447901144s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got:
-- stdout --
* [pause-939189] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=16144
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/16144-3494/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/16144-3494/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on existing profile
* Starting control plane node pause-939189 in cluster pause-939189
* Updating the running kvm2 "pause-939189" VM ...
* Preparing Kubernetes v1.26.3 on Docker 20.10.23 ...
* Configuring bridge CNI (Container Networking Interface) ...
* Enabled addons:
* Verifying Kubernetes components...
* Done! kubectl is now configured to use "pause-939189" cluster and "default" namespace by default
-- /stdout --
** stderr **
I0331 18:03:25.711833 32536 out.go:296] Setting OutFile to fd 1 ...
I0331 18:03:25.712008 32536 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0331 18:03:25.712026 32536 out.go:309] Setting ErrFile to fd 2...
I0331 18:03:25.712033 32536 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0331 18:03:25.712166 32536 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16144-3494/.minikube/bin
I0331 18:03:25.712806 32536 out.go:303] Setting JSON to false
I0331 18:03:25.713974 32536 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2757,"bootTime":1680283049,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1031-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0331 18:03:25.714062 32536 start.go:135] virtualization: kvm guest
I0331 18:03:25.717124 32536 out.go:177] * [pause-939189] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
I0331 18:03:25.718745 32536 notify.go:220] Checking for updates...
I0331 18:03:25.718754 32536 out.go:177] - MINIKUBE_LOCATION=16144
I0331 18:03:25.720301 32536 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0331 18:03:25.721911 32536 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/16144-3494/kubeconfig
I0331 18:03:25.723493 32536 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/16144-3494/.minikube
I0331 18:03:25.725094 32536 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0331 18:03:25.726699 32536 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0331 18:03:25.728858 32536 config.go:182] Loaded profile config "pause-939189": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0331 18:03:25.729256 32536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0331 18:03:25.729306 32536 main.go:141] libmachine: Launching plugin server for driver kvm2
I0331 18:03:25.747285 32536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45791
I0331 18:03:25.747792 32536 main.go:141] libmachine: () Calling .GetVersion
I0331 18:03:25.748496 32536 main.go:141] libmachine: Using API Version 1
I0331 18:03:25.748525 32536 main.go:141] libmachine: () Calling .SetConfigRaw
I0331 18:03:25.749043 32536 main.go:141] libmachine: () Calling .GetMachineName
I0331 18:03:25.749253 32536 main.go:141] libmachine: (pause-939189) Calling .DriverName
I0331 18:03:25.749440 32536 driver.go:365] Setting default libvirt URI to qemu:///system
I0331 18:03:25.749869 32536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0331 18:03:25.749913 32536 main.go:141] libmachine: Launching plugin server for driver kvm2
I0331 18:03:25.769314 32536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35735
I0331 18:03:25.769804 32536 main.go:141] libmachine: () Calling .GetVersion
I0331 18:03:25.770315 32536 main.go:141] libmachine: Using API Version 1
I0331 18:03:25.770364 32536 main.go:141] libmachine: () Calling .SetConfigRaw
I0331 18:03:25.770719 32536 main.go:141] libmachine: () Calling .GetMachineName
I0331 18:03:25.770905 32536 main.go:141] libmachine: (pause-939189) Calling .DriverName
I0331 18:03:25.813738 32536 out.go:177] * Using the kvm2 driver based on existing profile
I0331 18:03:25.815408 32536 start.go:295] selected driver: kvm2
I0331 18:03:25.815426 32536 start.go:859] validating driver "kvm2" against &{Name:pause-939189 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16191/minikube-v1.29.0-1680115329-16191-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:pause-939189 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0331 18:03:25.815625 32536 start.go:870] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0331 18:03:25.816023 32536 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0331 18:03:25.816128 32536 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16144-3494/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0331 18:03:25.833164 32536 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.29.0
I0331 18:03:25.833976 32536 cni.go:84] Creating CNI manager for ""
I0331 18:03:25.834011 32536 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0331 18:03:25.834024 32536 start_flags.go:319] config:
{Name:pause-939189 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16191/minikube-v1.29.0-1680115329-16191-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:pause-939189 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0331 18:03:25.834220 32536 iso.go:125] acquiring lock: {Name:mk48583bcdf05c8e72651ed56790356a32c028b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0331 18:03:25.836510 32536 out.go:177] * Starting control plane node pause-939189 in cluster pause-939189
I0331 18:03:25.837952 32536 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
I0331 18:03:25.838005 32536 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16144-3494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4
I0331 18:03:25.838024 32536 cache.go:57] Caching tarball of preloaded images
I0331 18:03:25.838124 32536 preload.go:174] Found /home/jenkins/minikube-integration/16144-3494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0331 18:03:25.838137 32536 cache.go:60] Finished verifying existence of preloaded tar for v1.26.3 on docker
I0331 18:03:25.838332 32536 profile.go:148] Saving config to /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/config.json ...
I0331 18:03:25.838550 32536 cache.go:193] Successfully downloaded all kic artifacts
I0331 18:03:25.838577 32536 start.go:364] acquiring machines lock for pause-939189: {Name:mkfdc5208de17d93700ea90324b4f36214eab469 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0331 18:03:40.264580 32536 start.go:368] acquired machines lock for "pause-939189" in 14.425951672s
I0331 18:03:40.264632 32536 start.go:96] Skipping create...Using existing machine configuration
I0331 18:03:40.264640 32536 fix.go:55] fixHost starting:
I0331 18:03:40.265105 32536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0331 18:03:40.265146 32536 main.go:141] libmachine: Launching plugin server for driver kvm2
I0331 18:03:40.284631 32536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41471
I0331 18:03:40.285088 32536 main.go:141] libmachine: () Calling .GetVersion
I0331 18:03:40.285618 32536 main.go:141] libmachine: Using API Version 1
I0331 18:03:40.285642 32536 main.go:141] libmachine: () Calling .SetConfigRaw
I0331 18:03:40.285948 32536 main.go:141] libmachine: () Calling .GetMachineName
I0331 18:03:40.286159 32536 main.go:141] libmachine: (pause-939189) Calling .DriverName
I0331 18:03:40.286413 32536 main.go:141] libmachine: (pause-939189) Calling .GetState
I0331 18:03:40.288318 32536 fix.go:103] recreateIfNeeded on pause-939189: state=Running err=<nil>
W0331 18:03:40.288341 32536 fix.go:129] unexpected machine state, will restart: <nil>
I0331 18:03:40.292995 32536 out.go:177] * Updating the running kvm2 "pause-939189" VM ...
I0331 18:03:40.294650 32536 machine.go:88] provisioning docker machine ...
I0331 18:03:40.294679 32536 main.go:141] libmachine: (pause-939189) Calling .DriverName
I0331 18:03:40.294921 32536 main.go:141] libmachine: (pause-939189) Calling .GetMachineName
I0331 18:03:40.295097 32536 buildroot.go:166] provisioning hostname "pause-939189"
I0331 18:03:40.295117 32536 main.go:141] libmachine: (pause-939189) Calling .GetMachineName
I0331 18:03:40.295290 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHHostname
I0331 18:03:40.298785 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:03:40.299195 32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
I0331 18:03:40.299228 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:03:40.299474 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHPort
I0331 18:03:40.299722 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
I0331 18:03:40.299872 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
I0331 18:03:40.300020 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHUsername
I0331 18:03:40.300164 32536 main.go:141] libmachine: Using SSH client type: native
I0331 18:03:40.300581 32536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil> [] 0s} 192.168.39.142 22 <nil> <nil>}
I0331 18:03:40.300595 32536 main.go:141] libmachine: About to run SSH command:
sudo hostname pause-939189 && echo "pause-939189" | sudo tee /etc/hostname
I0331 18:03:40.446699 32536 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-939189
I0331 18:03:40.446732 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHHostname
I0331 18:03:40.450226 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:03:40.450649 32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
I0331 18:03:40.450686 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:03:40.450929 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHPort
I0331 18:03:40.451154 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
I0331 18:03:40.451364 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
I0331 18:03:40.451533 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHUsername
I0331 18:03:40.451710 32536 main.go:141] libmachine: Using SSH client type: native
I0331 18:03:40.452300 32536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil> [] 0s} 192.168.39.142 22 <nil> <nil>}
I0331 18:03:40.452330 32536 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\spause-939189' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-939189/g' /etc/hosts;
else
echo '127.0.1.1 pause-939189' | sudo tee -a /etc/hosts;
fi
fi
I0331 18:03:40.582080 32536 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0331 18:03:40.582121 32536 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16144-3494/.minikube CaCertPath:/home/jenkins/minikube-integration/16144-3494/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16144-3494/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16144-3494/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16144-3494/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16144-3494/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16144-3494/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16144-3494/.minikube}
I0331 18:03:40.582242 32536 buildroot.go:174] setting up certificates
I0331 18:03:40.582281 32536 provision.go:83] configureAuth start
I0331 18:03:40.582305 32536 main.go:141] libmachine: (pause-939189) Calling .GetMachineName
I0331 18:03:40.582633 32536 main.go:141] libmachine: (pause-939189) Calling .GetIP
I0331 18:03:40.587018 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:03:40.587650 32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
I0331 18:03:40.587681 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:03:40.588077 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHHostname
I0331 18:03:40.598852 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:03:40.599660 32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
I0331 18:03:40.599795 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:03:40.600229 32536 provision.go:138] copyHostCerts
I0331 18:03:40.600299 32536 exec_runner.go:144] found /home/jenkins/minikube-integration/16144-3494/.minikube/cert.pem, removing ...
I0331 18:03:40.600312 32536 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16144-3494/.minikube/cert.pem
I0331 18:03:40.600381 32536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16144-3494/.minikube/cert.pem (1123 bytes)
I0331 18:03:40.600543 32536 exec_runner.go:144] found /home/jenkins/minikube-integration/16144-3494/.minikube/key.pem, removing ...
I0331 18:03:40.600551 32536 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16144-3494/.minikube/key.pem
I0331 18:03:40.600587 32536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16144-3494/.minikube/key.pem (1679 bytes)
I0331 18:03:40.600675 32536 exec_runner.go:144] found /home/jenkins/minikube-integration/16144-3494/.minikube/ca.pem, removing ...
I0331 18:03:40.600681 32536 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16144-3494/.minikube/ca.pem
I0331 18:03:40.600708 32536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16144-3494/.minikube/ca.pem (1078 bytes)
I0331 18:03:40.600770 32536 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16144-3494/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16144-3494/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16144-3494/.minikube/certs/ca-key.pem org=jenkins.pause-939189 san=[192.168.39.142 192.168.39.142 localhost 127.0.0.1 minikube pause-939189]
I0331 18:03:40.860159 32536 provision.go:172] copyRemoteCerts
I0331 18:03:40.860253 32536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0331 18:03:40.860291 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHHostname
I0331 18:03:40.864535 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:03:40.865012 32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
I0331 18:03:40.865057 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:03:40.865401 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHPort
I0331 18:03:40.865633 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
I0331 18:03:40.865835 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHUsername
I0331 18:03:40.866014 32536 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16144-3494/.minikube/machines/pause-939189/id_rsa Username:docker}
I0331 18:03:40.969464 32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0331 18:03:41.034639 32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
I0331 18:03:41.070577 32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0331 18:03:41.113651 32536 provision.go:86] duration metric: configureAuth took 531.350646ms
I0331 18:03:41.113705 32536 buildroot.go:189] setting minikube options for container-runtime
I0331 18:03:41.113981 32536 config.go:182] Loaded profile config "pause-939189": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0331 18:03:41.114013 32536 main.go:141] libmachine: (pause-939189) Calling .DriverName
I0331 18:03:41.115580 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHHostname
I0331 18:03:41.119107 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:03:41.119579 32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
I0331 18:03:41.119615 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:03:41.120112 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHPort
I0331 18:03:41.120296 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
I0331 18:03:41.120454 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
I0331 18:03:41.120602 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHUsername
I0331 18:03:41.120761 32536 main.go:141] libmachine: Using SSH client type: native
I0331 18:03:41.121332 32536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil> [] 0s} 192.168.39.142 22 <nil> <nil>}
I0331 18:03:41.121346 32536 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0331 18:03:41.283583 32536 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0331 18:03:41.283617 32536 buildroot.go:70] root file system type: tmpfs
I0331 18:03:41.283796 32536 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0331 18:03:41.283838 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHHostname
I0331 18:03:41.287411 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:03:41.287886 32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
I0331 18:03:41.287925 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:03:41.288483 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHPort
I0331 18:03:41.288709 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
I0331 18:03:41.288961 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
I0331 18:03:41.289148 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHUsername
I0331 18:03:41.289395 32536 main.go:141] libmachine: Using SSH client type: native
I0331 18:03:41.289940 32536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil> [] 0s} 192.168.39.142 22 <nil> <nil>}
I0331 18:03:41.290035 32536 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0331 18:03:41.461458 32536 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0331 18:03:41.461497 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHHostname
I0331 18:03:41.464975 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:03:41.465415 32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
I0331 18:03:41.465442 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:03:41.465895 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHPort
I0331 18:03:41.466145 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
I0331 18:03:41.466339 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
I0331 18:03:41.466475 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHUsername
I0331 18:03:41.466670 32536 main.go:141] libmachine: Using SSH client type: native
I0331 18:03:41.467276 32536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil> [] 0s} 192.168.39.142 22 <nil> <nil>}
I0331 18:03:41.467308 32536 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0331 18:03:41.624909 32536 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0331 18:03:41.624937 32536 machine.go:91] provisioned docker machine in 1.330271176s
I0331 18:03:41.624961 32536 start.go:300] post-start starting for "pause-939189" (driver="kvm2")
I0331 18:03:41.624970 32536 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0331 18:03:41.624996 32536 main.go:141] libmachine: (pause-939189) Calling .DriverName
I0331 18:03:41.625358 32536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0331 18:03:41.625392 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHHostname
I0331 18:03:41.629902 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:03:41.630339 32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
I0331 18:03:41.630372 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:03:41.630727 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHPort
I0331 18:03:41.630956 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
I0331 18:03:41.631134 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHUsername
I0331 18:03:41.631289 32536 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16144-3494/.minikube/machines/pause-939189/id_rsa Username:docker}
I0331 18:03:41.759900 32536 ssh_runner.go:195] Run: cat /etc/os-release
I0331 18:03:41.776514 32536 info.go:137] Remote host: Buildroot 2021.02.12
I0331 18:03:41.776548 32536 filesync.go:126] Scanning /home/jenkins/minikube-integration/16144-3494/.minikube/addons for local assets ...
I0331 18:03:41.776627 32536 filesync.go:126] Scanning /home/jenkins/minikube-integration/16144-3494/.minikube/files for local assets ...
I0331 18:03:41.776731 32536 filesync.go:149] local asset: /home/jenkins/minikube-integration/16144-3494/.minikube/files/etc/ssl/certs/105402.pem -> 105402.pem in /etc/ssl/certs
I0331 18:03:41.776862 32536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0331 18:03:41.790408 32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/files/etc/ssl/certs/105402.pem --> /etc/ssl/certs/105402.pem (1708 bytes)
I0331 18:03:41.835349 32536 start.go:303] post-start completed in 210.36981ms
I0331 18:03:41.835375 32536 fix.go:57] fixHost completed within 1.570735042s
I0331 18:03:41.835400 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHHostname
I0331 18:03:41.838925 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:03:41.839492 32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
I0331 18:03:41.839523 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:03:41.839837 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHPort
I0331 18:03:41.840052 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
I0331 18:03:41.840238 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
I0331 18:03:41.840382 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHUsername
I0331 18:03:41.840575 32536 main.go:141] libmachine: Using SSH client type: native
I0331 18:03:41.841179 32536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil> [] 0s} 192.168.39.142 22 <nil> <nil>}
I0331 18:03:41.841201 32536 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0331 18:03:41.993866 32536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1680285821.989568532
I0331 18:03:41.993892 32536 fix.go:207] guest clock: 1680285821.989568532
I0331 18:03:41.993903 32536 fix.go:220] Guest: 2023-03-31 18:03:41.989568532 +0000 UTC Remote: 2023-03-31 18:03:41.835379949 +0000 UTC m=+16.167388203 (delta=154.188583ms)
I0331 18:03:41.993945 32536 fix.go:191] guest clock delta is within tolerance: 154.188583ms
I0331 18:03:41.993956 32536 start.go:83] releasing machines lock for "pause-939189", held for 1.729345955s
I0331 18:03:41.993982 32536 main.go:141] libmachine: (pause-939189) Calling .DriverName
I0331 18:03:41.994291 32536 main.go:141] libmachine: (pause-939189) Calling .GetIP
I0331 18:03:41.997554 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:03:41.998095 32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
I0331 18:03:41.998131 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:03:41.998487 32536 main.go:141] libmachine: (pause-939189) Calling .DriverName
I0331 18:03:41.999887 32536 main.go:141] libmachine: (pause-939189) Calling .DriverName
I0331 18:03:42.000164 32536 main.go:141] libmachine: (pause-939189) Calling .DriverName
I0331 18:03:42.000253 32536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0331 18:03:42.000300 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHHostname
I0331 18:03:42.000726 32536 ssh_runner.go:195] Run: cat /version.json
I0331 18:03:42.000773 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHHostname
I0331 18:03:42.004537 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:03:42.005631 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:03:42.006143 32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
I0331 18:03:42.006178 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:03:42.006623 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHPort
I0331 18:03:42.006868 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
I0331 18:03:42.007059 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHUsername
I0331 18:03:42.007123 32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
I0331 18:03:42.007140 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:03:42.007276 32536 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16144-3494/.minikube/machines/pause-939189/id_rsa Username:docker}
I0331 18:03:42.008030 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHPort
I0331 18:03:42.008201 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
I0331 18:03:42.008351 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHUsername
I0331 18:03:42.008558 32536 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16144-3494/.minikube/machines/pause-939189/id_rsa Username:docker}
I0331 18:03:42.135551 32536 ssh_runner.go:195] Run: systemctl --version
I0331 18:03:42.144080 32536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0331 18:03:42.152644 32536 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0331 18:03:42.152727 32536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0331 18:03:42.167739 32536 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0331 18:03:42.167766 32536 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
I0331 18:03:42.167860 32536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0331 18:03:42.215918 32536 docker.go:639] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.26.3
registry.k8s.io/kube-controller-manager:v1.26.3
registry.k8s.io/kube-scheduler:v1.26.3
registry.k8s.io/kube-proxy:v1.26.3
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0331 18:03:42.215946 32536 docker.go:569] Images already preloaded, skipping extraction
I0331 18:03:42.215958 32536 start.go:481] detecting cgroup driver to use...
I0331 18:03:42.216072 32536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0331 18:03:42.242406 32536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0331 18:03:42.257253 32536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0331 18:03:42.277189 32536 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0331 18:03:42.277247 32536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0331 18:03:42.292009 32536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0331 18:03:42.305328 32536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0331 18:03:42.319260 32536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0331 18:03:42.332370 32536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0331 18:03:42.344253 32536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0331 18:03:42.355124 32536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0331 18:03:42.367913 32536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0331 18:03:42.378810 32536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0331 18:03:42.566800 32536 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0331 18:03:42.599727 32536 start.go:481] detecting cgroup driver to use...
I0331 18:03:42.599836 32536 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0331 18:03:42.622523 32536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0331 18:03:42.644613 32536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0331 18:03:42.673317 32536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0331 18:03:42.692314 32536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0331 18:03:42.714726 32536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0331 18:03:42.742474 32536 ssh_runner.go:195] Run: which cri-dockerd
I0331 18:03:42.748476 32536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0331 18:03:42.761011 32536 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0331 18:03:42.787174 32536 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0331 18:03:43.008370 32536 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0331 18:03:43.207569 32536 docker.go:538] configuring docker to use "cgroupfs" as cgroup driver...
I0331 18:03:43.207603 32536 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0331 18:03:43.230378 32536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0331 18:03:43.434598 32536 ssh_runner.go:195] Run: sudo systemctl restart docker
I0331 18:03:54.846657 32536 ssh_runner.go:235] Completed: sudo systemctl restart docker: (11.412025511s)
I0331 18:03:54.847101 32536 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0331 18:03:54.987722 32536 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0331 18:03:55.158813 32536 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0331 18:03:55.315724 32536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0331 18:03:55.501954 32536 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0331 18:03:55.539554 32536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0331 18:03:55.719919 32536 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I0331 18:03:56.206087 32536 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0331 18:03:56.206166 32536 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0331 18:03:56.220258 32536 start.go:549] Will wait 60s for crictl version
I0331 18:03:56.220332 32536 ssh_runner.go:195] Run: which crictl
I0331 18:03:56.228546 32536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0331 18:03:56.421849 32536 start.go:565] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.23
RuntimeApiVersion: v1alpha2
I0331 18:03:56.421930 32536 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0331 18:03:56.482352 32536 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0331 18:03:56.553221 32536 out.go:204] * Preparing Kubernetes v1.26.3 on Docker 20.10.23 ...
I0331 18:03:56.553294 32536 main.go:141] libmachine: (pause-939189) Calling .GetIP
I0331 18:03:56.556558 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:03:56.556972 32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
I0331 18:03:56.557002 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:03:56.557359 32536 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0331 18:03:56.561797 32536 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
I0331 18:03:56.561869 32536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0331 18:03:56.615248 32536 docker.go:639] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.26.3
registry.k8s.io/kube-controller-manager:v1.26.3
registry.k8s.io/kube-scheduler:v1.26.3
registry.k8s.io/kube-proxy:v1.26.3
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0331 18:03:56.615280 32536 docker.go:569] Images already preloaded, skipping extraction
I0331 18:03:56.615355 32536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0331 18:03:56.658918 32536 docker.go:639] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.26.3
registry.k8s.io/kube-controller-manager:v1.26.3
registry.k8s.io/kube-scheduler:v1.26.3
registry.k8s.io/kube-proxy:v1.26.3
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0331 18:03:56.658945 32536 cache_images.go:84] Images are preloaded, skipping loading
I0331 18:03:56.659011 32536 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0331 18:03:56.745659 32536 cni.go:84] Creating CNI manager for ""
I0331 18:03:56.745691 32536 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0331 18:03:56.745704 32536 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0331 18:03:56.745724 32536 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.142 APIServerPort:8443 KubernetesVersion:v1.26.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-939189 NodeName:pause-939189 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.142"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.142 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0331 18:03:56.745910 32536 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.142
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "pause-939189"
kubeletExtraArgs:
node-ip: 192.168.39.142
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.142"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0331 18:03:56.745991 32536 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=pause-939189 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.142
[Install]
config:
{KubernetesVersion:v1.26.3 ClusterName:pause-939189 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0331 18:03:56.746062 32536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.3
I0331 18:03:56.761126 32536 binaries.go:44] Found k8s binaries, skipping transfer
I0331 18:03:56.761203 32536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0331 18:03:56.780757 32536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
I0331 18:03:56.818141 32536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0331 18:03:56.854842 32536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
I0331 18:03:56.929502 32536 ssh_runner.go:195] Run: grep 192.168.39.142 control-plane.minikube.internal$ /etc/hosts
I0331 18:03:56.936882 32536 certs.go:56] Setting up /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189 for IP: 192.168.39.142
I0331 18:03:56.936926 32536 certs.go:186] acquiring lock for shared ca certs: {Name:mk5b2b979756b4a682c5be81dc53f006bb9a7a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0331 18:03:56.937093 32536 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16144-3494/.minikube/ca.key
I0331 18:03:56.937164 32536 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16144-3494/.minikube/proxy-client-ca.key
I0331 18:03:56.937292 32536 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/client.key
I0331 18:03:56.937377 32536 certs.go:311] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/apiserver.key.4bb0a69b
I0331 18:03:56.937427 32536 certs.go:311] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/proxy-client.key
I0331 18:03:56.937560 32536 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/10540.pem (1338 bytes)
W0331 18:03:56.937597 32536 certs.go:397] ignoring /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/10540_empty.pem, impossibly tiny 0 bytes
I0331 18:03:56.937611 32536 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/ca-key.pem (1675 bytes)
I0331 18:03:56.937646 32536 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/ca.pem (1078 bytes)
I0331 18:03:56.937677 32536 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/cert.pem (1123 bytes)
I0331 18:03:56.937706 32536 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/key.pem (1679 bytes)
I0331 18:03:56.937759 32536 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16144-3494/.minikube/files/etc/ssl/certs/105402.pem (1708 bytes)
I0331 18:03:56.938525 32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0331 18:03:56.979110 32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0331 18:03:57.063983 32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0331 18:03:57.122723 32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0331 18:03:57.163289 32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0331 18:03:57.222469 32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0331 18:03:57.290463 32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0331 18:03:57.348893 32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0331 18:03:57.411932 32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/files/etc/ssl/certs/105402.pem --> /usr/share/ca-certificates/105402.pem (1708 bytes)
I0331 18:03:57.482311 32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0331 18:03:57.559793 32536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/certs/10540.pem --> /usr/share/ca-certificates/10540.pem (1338 bytes)
I0331 18:03:57.611524 32536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0331 18:03:57.645170 32536 ssh_runner.go:195] Run: openssl version
I0331 18:03:57.654996 32536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0331 18:03:57.667216 32536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0331 18:03:57.672747 32536 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 31 17:21 /usr/share/ca-certificates/minikubeCA.pem
I0331 18:03:57.672820 32536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0331 18:03:57.680852 32536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0331 18:03:57.710339 32536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10540.pem && ln -fs /usr/share/ca-certificates/10540.pem /etc/ssl/certs/10540.pem"
I0331 18:03:57.723580 32536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10540.pem
I0331 18:03:57.734606 32536 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 31 17:26 /usr/share/ca-certificates/10540.pem
I0331 18:03:57.734679 32536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10540.pem
I0331 18:03:57.753538 32536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10540.pem /etc/ssl/certs/51391683.0"
I0331 18:03:57.770397 32536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/105402.pem && ln -fs /usr/share/ca-certificates/105402.pem /etc/ssl/certs/105402.pem"
I0331 18:03:57.807350 32536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/105402.pem
I0331 18:03:57.833625 32536 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 31 17:26 /usr/share/ca-certificates/105402.pem
I0331 18:03:57.833709 32536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/105402.pem
I0331 18:03:57.848933 32536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/105402.pem /etc/ssl/certs/3ec20f2e.0"
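The openssl/ln sequence above installs each CA under its OpenSSL subject hash in /etc/ssl/certs (e.g. b5213941.0 for minikubeCA.pem). A sketch that reproduces the link name by shelling out to the same openssl invocation; openssl is assumed to be installed locally:

```go
// Sketch: computing the OpenSSL subject-hash link name for a CA bundle entry,
// mirroring the `openssl x509 -hash -noout -in <pem>` calls in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func subjectHashLink(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	// e.g. "b5213941" -> "/etc/ssl/certs/b5213941.0"
	return "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
	link, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		panic(err)
	}
	fmt.Println("would symlink to:", link)
}
```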
I0331 18:03:57.918200 32536 kubeadm.go:401] StartCluster: {Name:pause-939189 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16191/minikube-v1.29.0-1680115329-16191-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:pause-939189 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0331 18:03:57.918410 32536 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0331 18:03:58.044485 32536 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0331 18:03:58.075416 32536 kubeadm.go:416] found existing configuration files, will attempt cluster restart
I0331 18:03:58.075438 32536 kubeadm.go:633] restartCluster start
I0331 18:03:58.075500 32536 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0331 18:03:58.094934 32536 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0331 18:03:58.095861 32536 kubeconfig.go:92] found "pause-939189" server: "https://192.168.39.142:8443"
I0331 18:03:58.097153 32536 kapi.go:59] client config for pause-939189: &rest.Config{Host:"https://192.168.39.142:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/client.crt", KeyFile:"/home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/client.key", CAFile:"/home/jenkins/minikube-integration/16144-3494/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x192bee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0331 18:03:58.098282 32536 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
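The diff of kubeadm.yaml against kubeadm.yaml.new feeds the "needs reconfigure" decision this test asserts on. A minimal local sketch of that comparison, assuming both files are readable on the machine running the code (minikube itself runs the `sudo diff -u ...` over SSH, as shown above):

```go
// Minimal sketch of a "does the kubeadm config need to change?" check.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func needsReconfigure(current, staged string) (bool, error) {
	a, err := os.ReadFile(current)
	if err != nil {
		// A missing current config always means we must (re)configure.
		return true, nil
	}
	b, err := os.ReadFile(staged)
	if err != nil {
		return false, err
	}
	return !bytes.Equal(a, b), nil
}

func main() {
	changed, err := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	fmt.Println("reconfigure needed:", changed)
}
```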
I0331 18:03:58.138626 32536 api_server.go:165] Checking apiserver status ...
I0331 18:03:58.138701 32536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0331 18:03:58.165367 32536 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0331 18:03:58.665630 32536 api_server.go:165] Checking apiserver status ...
I0331 18:03:58.665725 32536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0331 18:03:58.688400 32536 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5686/cgroup
I0331 18:03:58.711356 32536 api_server.go:181] apiserver freezer: "11:freezer:/kubepods/burstable/podc554e721c1674f8d3807d01647788069/874fcc56f9f627f0ba77d60510e34b52a8bc53cc9dfd44bf1837048106c10090"
I0331 18:03:58.711430 32536 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podc554e721c1674f8d3807d01647788069/874fcc56f9f627f0ba77d60510e34b52a8bc53cc9dfd44bf1837048106c10090/freezer.state
I0331 18:03:58.729765 32536 api_server.go:203] freezer state: "THAWED"
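To tell a paused apiserver container from a stopped one, the freezer cgroup state of the process is read, which is where the `freezer state: "THAWED"` line comes from. A sketch of that check, assuming cgroup v1 with the freezer controller mounted at /sys/fs/cgroup/freezer; the cgroup path is copied from the log:

```go
// Sketch: reading a cgroup v1 freezer state for a container.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func freezerState(cgroupPath string) (string, error) {
	p := filepath.Join("/sys/fs/cgroup/freezer", cgroupPath, "freezer.state")
	b, err := os.ReadFile(p)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil // "THAWED" or "FROZEN"
}

func main() {
	state, err := freezerState("/kubepods/burstable/podc554e721c1674f8d3807d01647788069/874fcc56f9f627f0ba77d60510e34b52a8bc53cc9dfd44bf1837048106c10090")
	if err != nil {
		fmt.Println("could not read freezer state:", err)
		return
	}
	fmt.Println("apiserver freezer state:", state)
}
```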
I0331 18:03:58.729842 32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
I0331 18:04:03.730635 32536 api_server.go:268] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0331 18:04:03.730708 32536 retry.go:31] will retry after 264.872025ms: state is "Stopped"
I0331 18:04:03.996172 32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
I0331 18:04:08.997074 32536 api_server.go:268] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0331 18:04:08.997124 32536 retry.go:31] will retry after 349.636902ms: state is "Stopped"
I0331 18:04:09.347652 32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
I0331 18:04:14.348544 32536 api_server.go:268] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0331 18:04:14.348606 32536 api_server.go:165] Checking apiserver status ...
I0331 18:04:14.348662 32536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0331 18:04:14.373754 32536 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5686/cgroup
I0331 18:04:14.394637 32536 api_server.go:181] apiserver freezer: "11:freezer:/kubepods/burstable/podc554e721c1674f8d3807d01647788069/874fcc56f9f627f0ba77d60510e34b52a8bc53cc9dfd44bf1837048106c10090"
I0331 18:04:14.394719 32536 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podc554e721c1674f8d3807d01647788069/874fcc56f9f627f0ba77d60510e34b52a8bc53cc9dfd44bf1837048106c10090/freezer.state
I0331 18:04:14.407709 32536 api_server.go:203] freezer state: "THAWED"
I0331 18:04:14.407739 32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
I0331 18:04:19.408261 32536 api_server.go:268] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0331 18:04:19.408305 32536 retry.go:31] will retry after 215.218871ms: state is "Stopped"
I0331 18:04:19.624488 32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
I0331 18:04:19.625014 32536 api_server.go:268] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": dial tcp 192.168.39.142:8443: connect: connection refused
I0331 18:04:19.625050 32536 retry.go:31] will retry after 293.483793ms: state is "Stopped"
I0331 18:04:19.919513 32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
I0331 18:04:19.920238 32536 api_server.go:268] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": dial tcp 192.168.39.142:8443: connect: connection refused
I0331 18:04:19.920283 32536 retry.go:31] will retry after 486.512463ms: state is "Stopped"
I0331 18:04:20.407398 32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
I0331 18:04:20.408179 32536 api_server.go:268] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": dial tcp 192.168.39.142:8443: connect: connection refused
I0331 18:04:20.408229 32536 retry.go:31] will retry after 404.6604ms: state is "Stopped"
I0331 18:04:20.813782 32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
I0331 18:04:20.814451 32536 api_server.go:268] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": dial tcp 192.168.39.142:8443: connect: connection refused
I0331 18:04:20.814507 32536 retry.go:31] will retry after 641.020358ms: state is "Stopped"
I0331 18:04:21.456361 32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
I0331 18:04:21.457048 32536 api_server.go:268] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": dial tcp 192.168.39.142:8443: connect: connection refused
I0331 18:04:21.457086 32536 retry.go:31] will retry after 754.462657ms: state is "Stopped"
I0331 18:04:22.211721 32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
I0331 18:04:22.212356 32536 api_server.go:268] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": dial tcp 192.168.39.142:8443: connect: connection refused
I0331 18:04:22.212406 32536 retry.go:31] will retry after 1.115104449s: state is "Stopped"
I0331 18:04:23.328674 32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
I0331 18:04:23.329495 32536 api_server.go:268] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": dial tcp 192.168.39.142:8443: connect: connection refused
I0331 18:04:23.329540 32536 retry.go:31] will retry after 1.24240954s: state is "Stopped"
I0331 18:04:24.572925 32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
I0331 18:04:24.573488 32536 api_server.go:268] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": dial tcp 192.168.39.142:8443: connect: connection refused
I0331 18:04:24.573527 32536 retry.go:31] will retry after 1.380365448s: state is "Stopped"
I0331 18:04:25.954682 32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
I0331 18:04:25.955486 32536 api_server.go:268] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": dial tcp 192.168.39.142:8443: connect: connection refused
I0331 18:04:25.955533 32536 retry.go:31] will retry after 1.543167733s: state is "Stopped"
I0331 18:04:27.499418 32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
I0331 18:04:27.500028 32536 api_server.go:268] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": dial tcp 192.168.39.142:8443: connect: connection refused
I0331 18:04:27.500074 32536 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
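The block above is a poll-with-backoff: each /healthz GET has a short client timeout, failures are retried with growing delays, and persistent failure is what finally triggers the "needs reconfigure" path. A self-contained Go sketch of such a loop; the timings are illustrative and TLS verification is skipped only to keep the example short (minikube authenticates with the cluster's client certificates instead):

```go
// Sketch of the healthz polling loop visible above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	stop := time.Now().Add(deadline)
	backoff := 250 * time.Millisecond
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		} else {
			fmt.Println("healthz not reachable:", err)
		}
		time.Sleep(backoff)
		backoff *= 2
	}
	return fmt.Errorf("apiserver did not become healthy within %s", deadline)
}

func main() {
	if err := waitForHealthz("https://192.168.39.142:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```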
I0331 18:04:27.500080 32536 kubeadm.go:1120] stopping kube-system containers ...
I0331 18:04:27.500125 32536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0331 18:04:27.535578 32536 docker.go:465] Stopping containers: [a0ad0a35a3e0 b4599f5bff86 9999f58d2765 b400c024f135 5e8b08d2a8f2 874fcc56f9f6 8ace7d6c4bee b034146fe7e8 6981b4d73a6c f5b35d44675c c447bce0c8ae 4045aa0f265a 69e745cdf53a 591b321a8a1e d2157bcefdc1 811553fdd488 34d917f20f26 5d91e467d3df d0750d4bcfa2 e65d5faade51 6bf6a130793f b9cb25554741]
I0331 18:04:27.535661 32536 ssh_runner.go:195] Run: docker stop a0ad0a35a3e0 b4599f5bff86 9999f58d2765 b400c024f135 5e8b08d2a8f2 874fcc56f9f6 8ace7d6c4bee b034146fe7e8 6981b4d73a6c f5b35d44675c c447bce0c8ae 4045aa0f265a 69e745cdf53a 591b321a8a1e d2157bcefdc1 811553fdd488 34d917f20f26 5d91e467d3df d0750d4bcfa2 e65d5faade51 6bf6a130793f b9cb25554741
I0331 18:04:32.786301 32536 ssh_runner.go:235] Completed: docker stop a0ad0a35a3e0 b4599f5bff86 9999f58d2765 b400c024f135 5e8b08d2a8f2 874fcc56f9f6 8ace7d6c4bee b034146fe7e8 6981b4d73a6c f5b35d44675c c447bce0c8ae 4045aa0f265a 69e745cdf53a 591b321a8a1e d2157bcefdc1 811553fdd488 34d917f20f26 5d91e467d3df d0750d4bcfa2 e65d5faade51 6bf6a130793f b9cb25554741: (5.25059671s)
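Stopping the kube-system containers is a two-step flow: list IDs with a name filter, then pass them all to a single docker stop. A sketch of the same flow via the docker CLI; the filter expression is copied verbatim from the log, and a local docker CLI is assumed:

```go
// Sketch: listing and stopping kube-system pod containers as the
// `docker ps` / `docker stop` pair above does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func stopKubeSystemContainers() error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return nil // nothing to stop
	}
	fmt.Println("stopping containers:", ids)
	return exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
}

func main() {
	if err := stopKubeSystemContainers(); err != nil {
		fmt.Println("stop failed:", err)
	}
}
```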
I0331 18:04:32.786382 32536 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0331 18:04:32.827244 32536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0331 18:04:32.840888 32536 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5639 Mar 31 18:02 /etc/kubernetes/admin.conf
-rw------- 1 root root 5654 Mar 31 18:02 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 1987 Mar 31 18:02 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5606 Mar 31 18:02 /etc/kubernetes/scheduler.conf
I0331 18:04:32.840952 32536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0331 18:04:32.851947 32536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0331 18:04:32.861389 32536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0331 18:04:32.870904 32536 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0331 18:04:32.870958 32536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0331 18:04:32.879952 32536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0331 18:04:32.890337 32536 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0331 18:04:32.890401 32536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0331 18:04:32.902509 32536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0331 18:04:32.912358 32536 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0331 18:04:32.912383 32536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0331 18:04:33.049768 32536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0331 18:04:34.137543 32536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.087742882s)
I0331 18:04:34.137570 32536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0331 18:04:34.364187 32536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0331 18:04:34.455055 32536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
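The restart path re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged config rather than a full `kubeadm init`. A sketch that chains the same phases with os/exec; the sudo and PATH plumbing from the log is omitted and kubeadm is assumed to be on PATH:

```go
// Sketch of the phased `kubeadm init` sequence run above during restart.
package main

import (
	"fmt"
	"os/exec"
)

func runInitPhases(configPath string) error {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", configPath)
		fmt.Println("kubeadm", args)
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %v failed: %v\n%s", p, err, out)
		}
	}
	return nil
}

func main() {
	if err := runInitPhases("/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Println(err)
	}
}
```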
I0331 18:04:34.561978 32536 api_server.go:51] waiting for apiserver process to appear ...
I0331 18:04:34.562036 32536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0331 18:04:34.580421 32536 api_server.go:71] duration metric: took 18.440625ms to wait for apiserver process to appear ...
I0331 18:04:34.580451 32536 api_server.go:87] waiting for apiserver healthz status ...
I0331 18:04:34.580460 32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
I0331 18:04:39.272172 32536 api_server.go:278] https://192.168.39.142:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0331 18:04:39.272207 32536 api_server.go:102] status: https://192.168.39.142:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0331 18:04:39.772935 32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
I0331 18:04:39.777965 32536 api_server.go:278] https://192.168.39.142:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0331 18:04:39.777990 32536 api_server.go:102] status: https://192.168.39.142:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0331 18:04:40.272400 32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
I0331 18:04:40.278072 32536 api_server.go:278] https://192.168.39.142:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0331 18:04:40.278095 32536 api_server.go:102] status: https://192.168.39.142:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0331 18:04:40.772913 32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
I0331 18:04:40.779114 32536 api_server.go:278] https://192.168.39.142:8443/healthz returned 200:
ok
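The verbose bodies above are easy to misread: the 403 only means the probe was unauthenticated, while the 500s list each startup check and mark failures with "[-]" (here rbac/bootstrap-roles and the priority-class bootstrap) until the post-start hooks finish and /healthz returns 200. A small sketch that extracts the failing check names from such a body; the helper name is ours:

```go
// Sketch: pulling failed check names (lines starting with "[-]") out of a
// verbose /healthz body like the 500 responses above.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func failedChecks(body string) []string {
	var failed []string
	sc := bufio.NewScanner(strings.NewReader(body))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "[-]") {
			// e.g. "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld"
			failed = append(failed, strings.Fields(strings.TrimPrefix(line, "[-]"))[0])
		}
	}
	return failed
}

func main() {
	body := "[+]ping ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\nhealthz check failed\n"
	fmt.Println(failedChecks(body)) // [poststarthook/rbac/bootstrap-roles]
}
```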
I0331 18:04:40.795833 32536 api_server.go:140] control plane version: v1.26.3
I0331 18:04:40.795865 32536 api_server.go:130] duration metric: took 6.215408419s to wait for apiserver health ...
I0331 18:04:40.795876 32536 cni.go:84] Creating CNI manager for ""
I0331 18:04:40.795891 32536 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0331 18:04:40.797284 32536 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0331 18:04:40.798815 32536 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0331 18:04:40.826544 32536 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
I0331 18:04:40.864890 32536 system_pods.go:43] waiting for kube-system pods to appear ...
I0331 18:04:40.873818 32536 system_pods.go:59] 6 kube-system pods found
I0331 18:04:40.873850 32536 system_pods.go:61] "coredns-787d4945fb-hcrtc" [1e78e1f9-1a39-4c02-a4e9-51e5b268d077] Running
I0331 18:04:40.873858 32536 system_pods.go:61] "etcd-pause-939189" [cdc68c44-f3a4-4655-9818-48f074e8e376] Running
I0331 18:04:40.873864 32536 system_pods.go:61] "kube-apiserver-pause-939189" [c40b018d-97b2-4cdf-9edc-e1473d304c55] Running
I0331 18:04:40.873869 32536 system_pods.go:61] "kube-controller-manager-pause-939189" [69a62fcf-5db8-4354-aa08-ee5d2209a0ed] Running
I0331 18:04:40.873875 32536 system_pods.go:61] "kube-proxy-jg8p6" [dd3378f4-948b-4bec-abd3-ea9dc35d3259] Running
I0331 18:04:40.873881 32536 system_pods.go:61] "kube-scheduler-pause-939189" [b51eb2f5-8508-46f2-8c02-652ad1a69a1e] Running
I0331 18:04:40.873889 32536 system_pods.go:74] duration metric: took 8.977073ms to wait for pod list to return data ...
I0331 18:04:40.873899 32536 node_conditions.go:102] verifying NodePressure condition ...
I0331 18:04:40.878737 32536 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0331 18:04:40.878762 32536 node_conditions.go:123] node cpu capacity is 2
I0331 18:04:40.878773 32536 node_conditions.go:105] duration metric: took 4.86834ms to run NodePressure ...
I0331 18:04:40.878791 32536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0331 18:04:41.336529 32536 kubeadm.go:769] waiting for restarted kubelet to initialise ...
I0331 18:04:41.345471 32536 kubeadm.go:784] kubelet initialised
I0331 18:04:41.345500 32536 kubeadm.go:785] duration metric: took 8.940253ms waiting for restarted kubelet to initialise ...
I0331 18:04:41.345509 32536 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0331 18:04:41.351874 32536 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-hcrtc" in "kube-system" namespace to be "Ready" ...
I0331 18:04:43.384606 32536 pod_ready.go:102] pod "coredns-787d4945fb-hcrtc" in "kube-system" namespace has status "Ready":"False"
I0331 18:04:45.387401 32536 pod_ready.go:102] pod "coredns-787d4945fb-hcrtc" in "kube-system" namespace has status "Ready":"False"
I0331 18:04:46.401715 32536 pod_ready.go:92] pod "coredns-787d4945fb-hcrtc" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:46.401752 32536 pod_ready.go:81] duration metric: took 5.049857013s waiting for pod "coredns-787d4945fb-hcrtc" in "kube-system" namespace to be "Ready" ...
I0331 18:04:46.401766 32536 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:48.421495 32536 pod_ready.go:102] pod "etcd-pause-939189" in "kube-system" namespace has status "Ready":"False"
I0331 18:04:50.765070 32536 pod_ready.go:102] pod "etcd-pause-939189" in "kube-system" namespace has status "Ready":"False"
I0331 18:04:52.921656 32536 pod_ready.go:102] pod "etcd-pause-939189" in "kube-system" namespace has status "Ready":"False"
I0331 18:04:53.421399 32536 pod_ready.go:92] pod "etcd-pause-939189" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:53.421429 32536 pod_ready.go:81] duration metric: took 7.01965493s waiting for pod "etcd-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:53.421441 32536 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:53.429675 32536 pod_ready.go:92] pod "kube-apiserver-pause-939189" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:53.429697 32536 pod_ready.go:81] duration metric: took 8.249323ms waiting for pod "kube-apiserver-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:53.429708 32536 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:53.438704 32536 pod_ready.go:92] pod "kube-controller-manager-pause-939189" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:53.438720 32536 pod_ready.go:81] duration metric: took 9.003572ms waiting for pod "kube-controller-manager-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:53.438731 32536 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jg8p6" in "kube-system" namespace to be "Ready" ...
I0331 18:04:53.446519 32536 pod_ready.go:92] pod "kube-proxy-jg8p6" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:53.446534 32536 pod_ready.go:81] duration metric: took 7.795873ms waiting for pod "kube-proxy-jg8p6" in "kube-system" namespace to be "Ready" ...
I0331 18:04:53.446545 32536 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:53.451227 32536 pod_ready.go:92] pod "kube-scheduler-pause-939189" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:53.451242 32536 pod_ready.go:81] duration metric: took 4.691126ms waiting for pod "kube-scheduler-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:53.451250 32536 pod_ready.go:38] duration metric: took 12.105730649s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
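The pod_ready waits above walk the system-critical pods one by one and poll each pod's Ready condition. An equivalent, hypothetical check using `kubectl wait` with the same label selectors; kubectl is assumed to be installed and pointed at this cluster:

```go
// Sketch: waiting for the same kube-system components to report Ready.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	selectors := []string{
		"k8s-app=kube-dns",
		"component=etcd",
		"component=kube-apiserver",
		"component=kube-controller-manager",
		"k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	for _, sel := range selectors {
		cmd := exec.Command("kubectl", "-n", "kube-system", "wait",
			"--for=condition=Ready", "pod", "-l", sel, "--timeout=4m")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("wait for %s failed: %v\n%s\n", sel, err, out)
			return
		}
		fmt.Println("ready:", sel)
	}
}
```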
I0331 18:04:53.451272 32536 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0331 18:04:53.463906 32536 ops.go:34] apiserver oom_adj: -16
I0331 18:04:53.463925 32536 kubeadm.go:637] restartCluster took 55.388480099s
I0331 18:04:53.463933 32536 kubeadm.go:403] StartCluster complete in 55.545742823s
I0331 18:04:53.463952 32536 settings.go:142] acquiring lock: {Name:mk54cf97b6d1b5b12dec7aad9dd26d754e62bcd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0331 18:04:53.464032 32536 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/16144-3494/kubeconfig
I0331 18:04:53.464825 32536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16144-3494/kubeconfig: {Name:mk0e63c10dbce63578041d9db05c951415a42011 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0331 18:04:53.465096 32536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0331 18:04:53.465243 32536 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
I0331 18:04:53.465315 32536 config.go:182] Loaded profile config "pause-939189": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0331 18:04:53.465367 32536 cache.go:107] acquiring lock: {Name:mka2cf660dd4d542e74644eb9f55d9546287db85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0331 18:04:53.465432 32536 cache.go:115] /home/jenkins/minikube-integration/16144-3494/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
I0331 18:04:53.468377 32536 out.go:177] * Enabled addons:
I0331 18:04:53.465440 32536 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/16144-3494/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 77.875µs
I0331 18:04:53.465689 32536 kapi.go:59] client config for pause-939189: &rest.Config{Host:"https://192.168.39.142:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/client.crt", KeyFile:"/home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/client.key", CAFile:"/home/jenkins/minikube-integration/16144-3494/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x192bee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0331 18:04:53.469869 32536 addons.go:499] enable addons completed in 4.62348ms: enabled=[]
I0331 18:04:53.469887 32536 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/16144-3494/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
I0331 18:04:53.469904 32536 cache.go:87] Successfully saved all images to host disk.
I0331 18:04:53.470079 32536 config.go:182] Loaded profile config "pause-939189": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0331 18:04:53.470390 32536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0331 18:04:53.470414 32536 main.go:141] libmachine: Launching plugin server for driver kvm2
I0331 18:04:53.472779 32536 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-939189" context rescaled to 1 replicas
I0331 18:04:53.472816 32536 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0331 18:04:53.474464 32536 out.go:177] * Verifying Kubernetes components...
I0331 18:04:53.475854 32536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0331 18:04:53.487310 32536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41011
I0331 18:04:53.487911 32536 main.go:141] libmachine: () Calling .GetVersion
I0331 18:04:53.488552 32536 main.go:141] libmachine: Using API Version 1
I0331 18:04:53.488581 32536 main.go:141] libmachine: () Calling .SetConfigRaw
I0331 18:04:53.488899 32536 main.go:141] libmachine: () Calling .GetMachineName
I0331 18:04:53.489075 32536 main.go:141] libmachine: (pause-939189) Calling .GetState
I0331 18:04:53.491520 32536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0331 18:04:53.491556 32536 main.go:141] libmachine: Launching plugin server for driver kvm2
I0331 18:04:53.508789 32536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
I0331 18:04:53.509289 32536 main.go:141] libmachine: () Calling .GetVersion
I0331 18:04:53.509835 32536 main.go:141] libmachine: Using API Version 1
I0331 18:04:53.509862 32536 main.go:141] libmachine: () Calling .SetConfigRaw
I0331 18:04:53.510320 32536 main.go:141] libmachine: () Calling .GetMachineName
I0331 18:04:53.510605 32536 main.go:141] libmachine: (pause-939189) Calling .DriverName
I0331 18:04:53.510836 32536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0331 18:04:53.510866 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHHostname
I0331 18:04:53.514674 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:04:53.515275 32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
I0331 18:04:53.515296 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:04:53.515586 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHPort
I0331 18:04:53.515793 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
I0331 18:04:53.515965 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHUsername
I0331 18:04:53.516121 32536 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16144-3494/.minikube/machines/pause-939189/id_rsa Username:docker}
I0331 18:04:53.632891 32536 node_ready.go:35] waiting up to 6m0s for node "pause-939189" to be "Ready" ...
I0331 18:04:53.633113 32536 start.go:889] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0331 18:04:53.637258 32536 node_ready.go:49] node "pause-939189" has status "Ready":"True"
I0331 18:04:53.637275 32536 node_ready.go:38] duration metric: took 4.35255ms waiting for node "pause-939189" to be "Ready" ...
I0331 18:04:53.637285 32536 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0331 18:04:53.668203 32536 docker.go:639] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.26.3
registry.k8s.io/kube-scheduler:v1.26.3
registry.k8s.io/kube-controller-manager:v1.26.3
registry.k8s.io/kube-proxy:v1.26.3
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0331 18:04:53.668226 32536 cache_images.go:84] Images are preloaded, skipping loading
I0331 18:04:53.668235 32536 cache_images.go:262] succeeded pushing to: pause-939189
I0331 18:04:53.668239 32536 cache_images.go:263] failed pushing to:
I0331 18:04:53.668267 32536 main.go:141] libmachine: Making call to close driver server
I0331 18:04:53.668284 32536 main.go:141] libmachine: (pause-939189) Calling .Close
I0331 18:04:53.668596 32536 main.go:141] libmachine: Successfully made call to close driver server
I0331 18:04:53.668613 32536 main.go:141] libmachine: Making call to close connection to plugin binary
I0331 18:04:53.668625 32536 main.go:141] libmachine: (pause-939189) DBG | Closing plugin on server side
I0331 18:04:53.668625 32536 main.go:141] libmachine: Making call to close driver server
I0331 18:04:53.668641 32536 main.go:141] libmachine: (pause-939189) Calling .Close
I0331 18:04:53.668916 32536 main.go:141] libmachine: (pause-939189) DBG | Closing plugin on server side
I0331 18:04:53.668922 32536 main.go:141] libmachine: Successfully made call to close driver server
I0331 18:04:53.668942 32536 main.go:141] libmachine: Making call to close connection to plugin binary
I0331 18:04:53.821124 32536 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-hcrtc" in "kube-system" namespace to be "Ready" ...
I0331 18:04:54.218332 32536 pod_ready.go:92] pod "coredns-787d4945fb-hcrtc" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:54.218358 32536 pod_ready.go:81] duration metric: took 397.210316ms waiting for pod "coredns-787d4945fb-hcrtc" in "kube-system" namespace to be "Ready" ...
I0331 18:04:54.218367 32536 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:54.618607 32536 pod_ready.go:92] pod "etcd-pause-939189" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:54.618631 32536 pod_ready.go:81] duration metric: took 400.255347ms waiting for pod "etcd-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:54.618640 32536 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:55.019356 32536 pod_ready.go:92] pod "kube-apiserver-pause-939189" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:55.019378 32536 pod_ready.go:81] duration metric: took 400.731414ms waiting for pod "kube-apiserver-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:55.019393 32536 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:55.420085 32536 pod_ready.go:92] pod "kube-controller-manager-pause-939189" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:55.420114 32536 pod_ready.go:81] duration metric: took 400.711919ms waiting for pod "kube-controller-manager-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:55.420130 32536 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jg8p6" in "kube-system" namespace to be "Ready" ...
I0331 18:04:55.819685 32536 pod_ready.go:92] pod "kube-proxy-jg8p6" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:55.819705 32536 pod_ready.go:81] duration metric: took 399.567435ms waiting for pod "kube-proxy-jg8p6" in "kube-system" namespace to be "Ready" ...
I0331 18:04:55.819719 32536 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:56.219488 32536 pod_ready.go:92] pod "kube-scheduler-pause-939189" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:56.219513 32536 pod_ready.go:81] duration metric: took 399.783789ms waiting for pod "kube-scheduler-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:56.219524 32536 pod_ready.go:38] duration metric: took 2.582225755s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0331 18:04:56.219550 32536 api_server.go:51] waiting for apiserver process to appear ...
I0331 18:04:56.219595 32536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0331 18:04:56.240919 32536 api_server.go:71] duration metric: took 2.768070005s to wait for apiserver process to appear ...
I0331 18:04:56.240947 32536 api_server.go:87] waiting for apiserver healthz status ...
I0331 18:04:56.240961 32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
I0331 18:04:56.247401 32536 api_server.go:278] https://192.168.39.142:8443/healthz returned 200:
ok
I0331 18:04:56.248689 32536 api_server.go:140] control plane version: v1.26.3
I0331 18:04:56.248709 32536 api_server.go:130] duration metric: took 7.754551ms to wait for apiserver health ...
I0331 18:04:56.248718 32536 system_pods.go:43] waiting for kube-system pods to appear ...
I0331 18:04:56.422125 32536 system_pods.go:59] 6 kube-system pods found
I0331 18:04:56.422151 32536 system_pods.go:61] "coredns-787d4945fb-hcrtc" [1e78e1f9-1a39-4c02-a4e9-51e5b268d077] Running
I0331 18:04:56.422159 32536 system_pods.go:61] "etcd-pause-939189" [cdc68c44-f3a4-4655-9818-48f074e8e376] Running
I0331 18:04:56.422166 32536 system_pods.go:61] "kube-apiserver-pause-939189" [c40b018d-97b2-4cdf-9edc-e1473d304c55] Running
I0331 18:04:56.422174 32536 system_pods.go:61] "kube-controller-manager-pause-939189" [69a62fcf-5db8-4354-aa08-ee5d2209a0ed] Running
I0331 18:04:56.422181 32536 system_pods.go:61] "kube-proxy-jg8p6" [dd3378f4-948b-4bec-abd3-ea9dc35d3259] Running
I0331 18:04:56.422187 32536 system_pods.go:61] "kube-scheduler-pause-939189" [b51eb2f5-8508-46f2-8c02-652ad1a69a1e] Running
I0331 18:04:56.422193 32536 system_pods.go:74] duration metric: took 173.469145ms to wait for pod list to return data ...
I0331 18:04:56.422202 32536 default_sa.go:34] waiting for default service account to be created ...
I0331 18:04:56.618165 32536 default_sa.go:45] found service account: "default"
I0331 18:04:56.618190 32536 default_sa.go:55] duration metric: took 195.978567ms for default service account to be created ...
I0331 18:04:56.618200 32536 system_pods.go:116] waiting for k8s-apps to be running ...
I0331 18:04:56.823045 32536 system_pods.go:86] 6 kube-system pods found
I0331 18:04:56.823082 32536 system_pods.go:89] "coredns-787d4945fb-hcrtc" [1e78e1f9-1a39-4c02-a4e9-51e5b268d077] Running
I0331 18:04:56.823092 32536 system_pods.go:89] "etcd-pause-939189" [cdc68c44-f3a4-4655-9818-48f074e8e376] Running
I0331 18:04:56.823099 32536 system_pods.go:89] "kube-apiserver-pause-939189" [c40b018d-97b2-4cdf-9edc-e1473d304c55] Running
I0331 18:04:56.823107 32536 system_pods.go:89] "kube-controller-manager-pause-939189" [69a62fcf-5db8-4354-aa08-ee5d2209a0ed] Running
I0331 18:04:56.823113 32536 system_pods.go:89] "kube-proxy-jg8p6" [dd3378f4-948b-4bec-abd3-ea9dc35d3259] Running
I0331 18:04:56.823120 32536 system_pods.go:89] "kube-scheduler-pause-939189" [b51eb2f5-8508-46f2-8c02-652ad1a69a1e] Running
I0331 18:04:56.823129 32536 system_pods.go:126] duration metric: took 204.923041ms to wait for k8s-apps to be running ...
I0331 18:04:56.823144 32536 system_svc.go:44] waiting for kubelet service to be running ....
I0331 18:04:56.823194 32536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0331 18:04:56.843108 32536 system_svc.go:56] duration metric: took 19.952106ms WaitForService to wait for kubelet.
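The kubelet service check above is just systemctl's exit status. A sketch of that probe, simplified to a plain is-active call rather than the exact command string in the log:

```go
// Sketch: checking that the kubelet systemd unit is active; exit status 0
// from `systemctl is-active --quiet kubelet` means it is running.
package main

import (
	"fmt"
	"os/exec"
)

func kubeletRunning() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletRunning())
}
```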
I0331 18:04:56.843157 32536 kubeadm.go:578] duration metric: took 3.370313636s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0331 18:04:56.843181 32536 node_conditions.go:102] verifying NodePressure condition ...
I0331 18:04:57.019150 32536 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0331 18:04:57.019178 32536 node_conditions.go:123] node cpu capacity is 2
I0331 18:04:57.019188 32536 node_conditions.go:105] duration metric: took 176.00176ms to run NodePressure ...
I0331 18:04:57.019201 32536 start.go:228] waiting for startup goroutines ...
I0331 18:04:57.019209 32536 start.go:233] waiting for cluster config update ...
I0331 18:04:57.019219 32536 start.go:242] writing updated cluster config ...
I0331 18:04:57.019587 32536 ssh_runner.go:195] Run: rm -f paused
I0331 18:04:57.094738 32536 start.go:557] kubectl: 1.26.3, cluster: 1.26.3 (minor skew: 0)
I0331 18:04:57.097707 32536 out.go:177] * Done! kubectl is now configured to use "pause-939189" cluster and "default" namespace by default
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p pause-939189 -n pause-939189
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p pause-939189 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-939189 logs -n 25: (1.243080205s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs:
-- stdout --
*
* ==> Audit <==
* |---------|------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| start | -p stopped-upgrade-202435 | stopped-upgrade-202435 | jenkins | v1.29.0 | 31 Mar 23 18:00 UTC | 31 Mar 23 18:02 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=kvm2 | | | | | |
| start | -p kubernetes-upgrade-075589 | kubernetes-upgrade-075589 | jenkins | v1.29.0 | 31 Mar 23 18:01 UTC | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| | --driver=kvm2 | | | | | |
| start | -p kubernetes-upgrade-075589 | kubernetes-upgrade-075589 | jenkins | v1.29.0 | 31 Mar 23 18:01 UTC | 31 Mar 23 18:02 UTC |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.27.0-rc.0 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=kvm2 | | | | | |
| delete | -p cert-expiration-549601 | cert-expiration-549601 | jenkins | v1.29.0 | 31 Mar 23 18:01 UTC | 31 Mar 23 18:01 UTC |
| start | -p pause-939189 --memory=2048 | pause-939189 | jenkins | v1.29.0 | 31 Mar 23 18:01 UTC | 31 Mar 23 18:03 UTC |
| | --install-addons=false | | | | | |
| | --wait=all --driver=kvm2 | | | | | |
| cache | gvisor-836132 cache add | gvisor-836132 | jenkins | v1.29.0 | 31 Mar 23 18:01 UTC | 31 Mar 23 18:02 UTC |
| | gcr.io/k8s-minikube/gvisor-addon:2 | | | | | |
| addons | gvisor-836132 addons enable | gvisor-836132 | jenkins | v1.29.0 | 31 Mar 23 18:02 UTC | 31 Mar 23 18:02 UTC |
| | gvisor | | | | | |
| delete | -p stopped-upgrade-202435 | stopped-upgrade-202435 | jenkins | v1.29.0 | 31 Mar 23 18:02 UTC | 31 Mar 23 18:02 UTC |
| start | -p force-systemd-env-066234 | force-systemd-env-066234 | jenkins | v1.29.0 | 31 Mar 23 18:02 UTC | 31 Mar 23 18:03 UTC |
| | --memory=2048 | | | | | |
| | --alsologtostderr -v=5 | | | | | |
| | --driver=kvm2 | | | | | |
| delete | -p kubernetes-upgrade-075589 | kubernetes-upgrade-075589 | jenkins | v1.29.0 | 31 Mar 23 18:02 UTC | 31 Mar 23 18:02 UTC |
| start | -p cert-options-885841 | cert-options-885841 | jenkins | v1.29.0 | 31 Mar 23 18:02 UTC | 31 Mar 23 18:04 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=kvm2 | | | | | |
| stop | -p gvisor-836132 | gvisor-836132 | jenkins | v1.29.0 | 31 Mar 23 18:02 UTC | 31 Mar 23 18:04 UTC |
| start | -p pause-939189 | pause-939189 | jenkins | v1.29.0 | 31 Mar 23 18:03 UTC | 31 Mar 23 18:04 UTC |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=kvm2 | | | | | |
| ssh | force-systemd-env-066234 | force-systemd-env-066234 | jenkins | v1.29.0 | 31 Mar 23 18:03 UTC | 31 Mar 23 18:03 UTC |
| | ssh docker info --format | | | | | |
| | {{.CgroupDriver}} | | | | | |
| delete | -p force-systemd-env-066234 | force-systemd-env-066234 | jenkins | v1.29.0 | 31 Mar 23 18:03 UTC | 31 Mar 23 18:03 UTC |
| start | -p NoKubernetes-746317 | NoKubernetes-746317 | jenkins | v1.29.0 | 31 Mar 23 18:03 UTC | |
| | --no-kubernetes | | | | | |
| | --kubernetes-version=1.20 | | | | | |
| | --driver=kvm2 | | | | | |
| start | -p NoKubernetes-746317 | NoKubernetes-746317 | jenkins | v1.29.0 | 31 Mar 23 18:03 UTC | |
| | --driver=kvm2 | | | | | |
| ssh | cert-options-885841 ssh | cert-options-885841 | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC | 31 Mar 23 18:04 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-885841 -- sudo | cert-options-885841 | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC | 31 Mar 23 18:04 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-885841 | cert-options-885841 | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC | 31 Mar 23 18:04 UTC |
| start | -p auto-347180 --memory=3072 | auto-347180 | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC | |
| | --alsologtostderr --wait=true | | | | | |
| | --wait-timeout=15m | | | | | |
| | --driver=kvm2 | | | | | |
| start | -p NoKubernetes-746317 | NoKubernetes-746317 | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC | 31 Mar 23 18:04 UTC |
| | --no-kubernetes --driver=kvm2 | | | | | |
| start | -p gvisor-836132 --memory=2200 | gvisor-836132 | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC | |
| | --container-runtime=containerd --docker-opt | | | | | |
| | containerd=/var/run/containerd/containerd.sock | | | | | |
| | --driver=kvm2 | | | | | |
| delete | -p NoKubernetes-746317 | NoKubernetes-746317 | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC | 31 Mar 23 18:04 UTC |
| start | -p NoKubernetes-746317 | NoKubernetes-746317 | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC | |
| | --no-kubernetes --driver=kvm2 | | | | | |
|---------|------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/03/31 18:04:52
Running on machine: ubuntu-20-agent-3
Binary: Built with gc go1.20.2 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0331 18:04:52.112989 33820 out.go:296] Setting OutFile to fd 1 ...
I0331 18:04:52.113170 33820 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0331 18:04:52.113174 33820 out.go:309] Setting ErrFile to fd 2...
I0331 18:04:52.113180 33820 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0331 18:04:52.113343 33820 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16144-3494/.minikube/bin
I0331 18:04:52.114025 33820 out.go:303] Setting JSON to false
I0331 18:04:52.115095 33820 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2843,"bootTime":1680283049,"procs":255,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1031-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0331 18:04:52.115161 33820 start.go:135] virtualization: kvm guest
I0331 18:04:52.202763 33820 out.go:177] * [NoKubernetes-746317] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
I0331 18:04:52.295981 33820 out.go:177] - MINIKUBE_LOCATION=16144
I0331 18:04:52.295891 33820 notify.go:220] Checking for updates...
I0331 18:04:52.419505 33820 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0331 18:04:52.544450 33820 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/16144-3494/kubeconfig
I0331 18:04:52.604388 33820 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/16144-3494/.minikube
I0331 18:04:52.606360 33820 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0331 18:04:52.608233 33820 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0331 18:04:52.610384 33820 config.go:182] Loaded profile config "auto-347180": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0331 18:04:52.610538 33820 config.go:182] Loaded profile config "gvisor-836132": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.26.3
I0331 18:04:52.610724 33820 config.go:182] Loaded profile config "pause-939189": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0331 18:04:52.610745 33820 start.go:1732] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
I0331 18:04:52.610778 33820 driver.go:365] Setting default libvirt URI to qemu:///system
I0331 18:04:52.649175 33820 out.go:177] * Using the kvm2 driver based on user configuration
I0331 18:04:52.650741 33820 start.go:295] selected driver: kvm2
I0331 18:04:52.650750 33820 start.go:859] validating driver "kvm2" against <nil>
I0331 18:04:52.650762 33820 start.go:870] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0331 18:04:52.651120 33820 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0331 18:04:52.651207 33820 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16144-3494/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0331 18:04:52.665942 33820 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.29.0
I0331 18:04:52.665977 33820 start.go:1732] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
I0331 18:04:52.665987 33820 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0331 18:04:52.666616 33820 start_flags.go:386] Using suggested 6000MB memory alloc based on sys=32101MB, container=0MB
I0331 18:04:52.666788 33820 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
I0331 18:04:52.666808 33820 cni.go:84] Creating CNI manager for ""
I0331 18:04:52.666818 33820 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0331 18:04:52.666825 33820 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0331 18:04:52.666832 33820 start_flags.go:319] config:
{Name:NoKubernetes-746317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:NoKubernetes-746317 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0331 18:04:52.666906 33820 start.go:1732] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
I0331 18:04:52.666977 33820 iso.go:125] acquiring lock: {Name:mk48583bcdf05c8e72651ed56790356a32c028b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0331 18:04:52.669123 33820 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-746317
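The cni.go lines above ("kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge) record minikube picking a CNI when none was requested. A minimal Go sketch that encodes just that stated rule; the function name and the list of VM drivers are illustrative assumptions, not minikube's actual implementation:

package main

import "fmt"

// chooseCNI is a hypothetical sketch: when the user did not request a CNI,
// a VM driver with the docker runtime on Kubernetes v1.24+ gets "bridge",
// matching the "recommending bridge" log line above.
func chooseCNI(driver, runtime string, k8sMinor int) string {
	if runtime != "docker" {
		return "" // other runtimes are out of scope for this sketch
	}
	if k8sMinor >= 24 && (driver == "kvm2" || driver == "hyperkit" || driver == "virtualbox") {
		return "bridge"
	}
	return ""
}

func main() {
	fmt.Println(chooseCNI("kvm2", "docker", 26)) // prints "bridge"
}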
I0331 18:04:48.155281 33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
I0331 18:04:48.155871 33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
I0331 18:04:48.155896 33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:48.155813 33603 retry.go:31] will retry after 283.128145ms: waiting for machine to come up
I0331 18:04:48.440401 33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
I0331 18:04:48.440902 33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
I0331 18:04:48.440924 33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:48.440860 33603 retry.go:31] will retry after 410.682274ms: waiting for machine to come up
I0331 18:04:48.853565 33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
I0331 18:04:48.854037 33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
I0331 18:04:48.854052 33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:48.854000 33603 retry.go:31] will retry after 497.486632ms: waiting for machine to come up
I0331 18:04:49.353711 33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
I0331 18:04:49.354221 33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
I0331 18:04:49.354243 33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:49.354178 33603 retry.go:31] will retry after 611.052328ms: waiting for machine to come up
I0331 18:04:49.967240 33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
I0331 18:04:50.040539 33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
I0331 18:04:50.040577 33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:50.040409 33603 retry.go:31] will retry after 763.986572ms: waiting for machine to come up
I0331 18:04:50.876927 33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
I0331 18:04:50.877366 33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
I0331 18:04:50.877457 33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:50.877308 33603 retry.go:31] will retry after 955.134484ms: waiting for machine to come up
I0331 18:04:51.834716 33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
I0331 18:04:51.835256 33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
I0331 18:04:51.835316 33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:51.835243 33603 retry.go:31] will retry after 1.216587491s: waiting for machine to come up
I0331 18:04:53.053498 33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
I0331 18:04:53.054031 33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
I0331 18:04:53.054059 33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:53.053989 33603 retry.go:31] will retry after 1.334972483s: waiting for machine to come up
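The retry.go lines above show libmachine polling for the gvisor-836132 VM's DHCP lease, sleeping a little longer between attempts each time. A rough, self-contained sketch of that wait-with-growing-delay pattern; the growth factor and helper name are assumptions for illustration, not minikube's exact policy:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP keeps calling lookup until it returns an address, increasing the
// delay between attempts, like the "will retry after ..." loop above.
func waitForIP(lookup func() (string, error), attempts int, base time.Duration) (string, error) {
	delay := base
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // assumed growth factor
	}
	return "", errors.New("machine did not report an IP address in time")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.50.10", nil
	}, 10, 300*time.Millisecond)
	fmt.Println(ip, err)
}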
I0331 18:04:50.765070 32536 pod_ready.go:102] pod "etcd-pause-939189" in "kube-system" namespace has status "Ready":"False"
I0331 18:04:52.921656 32536 pod_ready.go:102] pod "etcd-pause-939189" in "kube-system" namespace has status "Ready":"False"
I0331 18:04:53.421399 32536 pod_ready.go:92] pod "etcd-pause-939189" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:53.421429 32536 pod_ready.go:81] duration metric: took 7.01965493s waiting for pod "etcd-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:53.421441 32536 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:53.429675 32536 pod_ready.go:92] pod "kube-apiserver-pause-939189" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:53.429697 32536 pod_ready.go:81] duration metric: took 8.249323ms waiting for pod "kube-apiserver-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:53.429708 32536 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:53.438704 32536 pod_ready.go:92] pod "kube-controller-manager-pause-939189" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:53.438720 32536 pod_ready.go:81] duration metric: took 9.003572ms waiting for pod "kube-controller-manager-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:53.438731 32536 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jg8p6" in "kube-system" namespace to be "Ready" ...
I0331 18:04:53.446519 32536 pod_ready.go:92] pod "kube-proxy-jg8p6" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:53.446534 32536 pod_ready.go:81] duration metric: took 7.795873ms waiting for pod "kube-proxy-jg8p6" in "kube-system" namespace to be "Ready" ...
I0331 18:04:53.446545 32536 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:53.451227 32536 pod_ready.go:92] pod "kube-scheduler-pause-939189" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:53.451242 32536 pod_ready.go:81] duration metric: took 4.691126ms waiting for pod "kube-scheduler-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:53.451250 32536 pod_ready.go:38] duration metric: took 12.105730649s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
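The pod_ready.go waits above poll each system-critical pod until its Ready condition reports True. A hedged client-go sketch of that kind of check; the function name and polling interval are illustrative, not minikube's code:

package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod in the given namespace until its PodReady
// condition is True, or the timeout expires.
func waitPodReady(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient errors as "not ready yet"
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}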
I0331 18:04:53.451272 32536 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0331 18:04:53.463906 32536 ops.go:34] apiserver oom_adj: -16
I0331 18:04:53.463925 32536 kubeadm.go:637] restartCluster took 55.388480099s
I0331 18:04:53.463933 32536 kubeadm.go:403] StartCluster complete in 55.545742823s
I0331 18:04:53.463952 32536 settings.go:142] acquiring lock: {Name:mk54cf97b6d1b5b12dec7aad9dd26d754e62bcd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0331 18:04:53.464032 32536 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/16144-3494/kubeconfig
I0331 18:04:53.464825 32536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16144-3494/kubeconfig: {Name:mk0e63c10dbce63578041d9db05c951415a42011 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0331 18:04:53.465096 32536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0331 18:04:53.465243 32536 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
I0331 18:04:53.465315 32536 config.go:182] Loaded profile config "pause-939189": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0331 18:04:53.465367 32536 cache.go:107] acquiring lock: {Name:mka2cf660dd4d542e74644eb9f55d9546287db85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0331 18:04:53.465432 32536 cache.go:115] /home/jenkins/minikube-integration/16144-3494/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
I0331 18:04:53.468377 32536 out.go:177] * Enabled addons:
I0331 18:04:53.465440 32536 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/16144-3494/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 77.875µs
I0331 18:04:53.465689 32536 kapi.go:59] client config for pause-939189: &rest.Config{Host:"https://192.168.39.142:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/client.crt", KeyFile:"/home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/client.key", CAFile:"/home/jenkins/minikube-integration/16144-3494/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x192bee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0331 18:04:53.469869 32536 addons.go:499] enable addons completed in 4.62348ms: enabled=[]
I0331 18:04:53.469887 32536 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/16144-3494/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
I0331 18:04:53.469904 32536 cache.go:87] Successfully saved all images to host disk.
I0331 18:04:53.470079 32536 config.go:182] Loaded profile config "pause-939189": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0331 18:04:53.470390 32536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0331 18:04:53.470414 32536 main.go:141] libmachine: Launching plugin server for driver kvm2
I0331 18:04:53.472779 32536 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-939189" context rescaled to 1 replicas
I0331 18:04:53.472816 32536 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0331 18:04:53.474464 32536 out.go:177] * Verifying Kubernetes components...
I0331 18:04:49.689822 33276 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.064390662s)
I0331 18:04:49.689845 33276 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0331 18:04:49.730226 33276 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0331 18:04:49.740534 33276 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2628 bytes)
I0331 18:04:49.759896 33276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0331 18:04:49.892044 33276 ssh_runner.go:195] Run: sudo systemctl restart docker
I0331 18:04:52.833806 33276 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.941720773s)
I0331 18:04:52.833863 33276 start.go:481] detecting cgroup driver to use...
I0331 18:04:52.833984 33276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0331 18:04:52.856132 33276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0331 18:04:52.867005 33276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0331 18:04:52.875838 33276 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0331 18:04:52.875899 33276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0331 18:04:52.885209 33276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0331 18:04:52.895294 33276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0331 18:04:52.906080 33276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0331 18:04:52.916021 33276 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0331 18:04:52.927401 33276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0331 18:04:52.936940 33276 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0331 18:04:52.945127 33276 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0331 18:04:52.953052 33276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0331 18:04:53.053440 33276 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0331 18:04:53.071425 33276 start.go:481] detecting cgroup driver to use...
I0331 18:04:53.071501 33276 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0331 18:04:53.090019 33276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0331 18:04:53.104446 33276 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0331 18:04:53.123957 33276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0331 18:04:53.139648 33276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0331 18:04:53.155612 33276 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0331 18:04:53.186101 33276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0331 18:04:53.202708 33276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0331 18:04:53.222722 33276 ssh_runner.go:195] Run: which cri-dockerd
I0331 18:04:53.227094 33276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0331 18:04:53.236406 33276 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0331 18:04:53.252225 33276 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0331 18:04:53.363704 33276 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0331 18:04:53.479794 33276 docker.go:538] configuring docker to use "cgroupfs" as cgroup driver...
I0331 18:04:53.479826 33276 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0331 18:04:53.502900 33276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0331 18:04:53.633618 33276 ssh_runner.go:195] Run: sudo systemctl restart docker
I0331 18:04:53.475854 32536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0331 18:04:53.487310 32536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41011
I0331 18:04:53.487911 32536 main.go:141] libmachine: () Calling .GetVersion
I0331 18:04:53.488552 32536 main.go:141] libmachine: Using API Version 1
I0331 18:04:53.488581 32536 main.go:141] libmachine: () Calling .SetConfigRaw
I0331 18:04:53.488899 32536 main.go:141] libmachine: () Calling .GetMachineName
I0331 18:04:53.489075 32536 main.go:141] libmachine: (pause-939189) Calling .GetState
I0331 18:04:53.491520 32536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0331 18:04:53.491556 32536 main.go:141] libmachine: Launching plugin server for driver kvm2
I0331 18:04:53.508789 32536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
I0331 18:04:53.509289 32536 main.go:141] libmachine: () Calling .GetVersion
I0331 18:04:53.509835 32536 main.go:141] libmachine: Using API Version 1
I0331 18:04:53.509862 32536 main.go:141] libmachine: () Calling .SetConfigRaw
I0331 18:04:53.510320 32536 main.go:141] libmachine: () Calling .GetMachineName
I0331 18:04:53.510605 32536 main.go:141] libmachine: (pause-939189) Calling .DriverName
I0331 18:04:53.510836 32536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0331 18:04:53.510866 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHHostname
I0331 18:04:53.514674 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:04:53.515275 32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
I0331 18:04:53.515296 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:04:53.515586 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHPort
I0331 18:04:53.515793 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
I0331 18:04:53.515965 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHUsername
I0331 18:04:53.516121 32536 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16144-3494/.minikube/machines/pause-939189/id_rsa Username:docker}
I0331 18:04:53.632891 32536 node_ready.go:35] waiting up to 6m0s for node "pause-939189" to be "Ready" ...
I0331 18:04:53.633113 32536 start.go:889] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0331 18:04:53.637258 32536 node_ready.go:49] node "pause-939189" has status "Ready":"True"
I0331 18:04:53.637275 32536 node_ready.go:38] duration metric: took 4.35255ms waiting for node "pause-939189" to be "Ready" ...
I0331 18:04:53.637285 32536 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0331 18:04:53.668203 32536 docker.go:639] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.26.3
registry.k8s.io/kube-scheduler:v1.26.3
registry.k8s.io/kube-controller-manager:v1.26.3
registry.k8s.io/kube-proxy:v1.26.3
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0331 18:04:53.668226 32536 cache_images.go:84] Images are preloaded, skipping loading
I0331 18:04:53.668235 32536 cache_images.go:262] succeeded pushing to: pause-939189
I0331 18:04:53.668239 32536 cache_images.go:263] failed pushing to:
I0331 18:04:53.668267 32536 main.go:141] libmachine: Making call to close driver server
I0331 18:04:53.668284 32536 main.go:141] libmachine: (pause-939189) Calling .Close
I0331 18:04:53.668596 32536 main.go:141] libmachine: Successfully made call to close driver server
I0331 18:04:53.668613 32536 main.go:141] libmachine: Making call to close connection to plugin binary
I0331 18:04:53.668625 32536 main.go:141] libmachine: (pause-939189) DBG | Closing plugin on server side
I0331 18:04:53.668625 32536 main.go:141] libmachine: Making call to close driver server
I0331 18:04:53.668641 32536 main.go:141] libmachine: (pause-939189) Calling .Close
I0331 18:04:53.668916 32536 main.go:141] libmachine: (pause-939189) DBG | Closing plugin on server side
I0331 18:04:53.668922 32536 main.go:141] libmachine: Successfully made call to close driver server
I0331 18:04:53.668942 32536 main.go:141] libmachine: Making call to close connection to plugin binary
I0331 18:04:53.821124 32536 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-hcrtc" in "kube-system" namespace to be "Ready" ...
I0331 18:04:54.218332 32536 pod_ready.go:92] pod "coredns-787d4945fb-hcrtc" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:54.218358 32536 pod_ready.go:81] duration metric: took 397.210316ms waiting for pod "coredns-787d4945fb-hcrtc" in "kube-system" namespace to be "Ready" ...
I0331 18:04:54.218367 32536 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:54.618607 32536 pod_ready.go:92] pod "etcd-pause-939189" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:54.618631 32536 pod_ready.go:81] duration metric: took 400.255347ms waiting for pod "etcd-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:54.618640 32536 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:55.019356 32536 pod_ready.go:92] pod "kube-apiserver-pause-939189" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:55.019378 32536 pod_ready.go:81] duration metric: took 400.731414ms waiting for pod "kube-apiserver-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:55.019393 32536 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:55.420085 32536 pod_ready.go:92] pod "kube-controller-manager-pause-939189" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:55.420114 32536 pod_ready.go:81] duration metric: took 400.711919ms waiting for pod "kube-controller-manager-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:55.420130 32536 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jg8p6" in "kube-system" namespace to be "Ready" ...
I0331 18:04:55.015443 33276 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.381792307s)
I0331 18:04:55.015525 33276 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0331 18:04:55.133415 33276 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0331 18:04:55.243506 33276 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0331 18:04:55.356452 33276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0331 18:04:55.477055 33276 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0331 18:04:55.493533 33276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0331 18:04:55.611643 33276 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I0331 18:04:55.707141 33276 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0331 18:04:55.707200 33276 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0331 18:04:55.713403 33276 start.go:549] Will wait 60s for crictl version
I0331 18:04:55.713474 33276 ssh_runner.go:195] Run: which crictl
I0331 18:04:55.718338 33276 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0331 18:04:55.774128 33276 start.go:565] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.23
RuntimeApiVersion: v1alpha2
I0331 18:04:55.774203 33276 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0331 18:04:55.810277 33276 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
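The start.go lines above wait up to 60s for /var/run/cri-dockerd.sock to appear before asking crictl and docker for their versions. A minimal sketch of such a socket wait, under the assumption that repeatedly stat-ing the path is sufficient (the log only shows a single stat run, so the loop shape is an assumption):

package criwait

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a unix socket, mirroring the
// "Will wait 60s for socket path /var/run/cri-dockerd.sock" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}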
I0331 18:04:55.819685 32536 pod_ready.go:92] pod "kube-proxy-jg8p6" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:55.819705 32536 pod_ready.go:81] duration metric: took 399.567435ms waiting for pod "kube-proxy-jg8p6" in "kube-system" namespace to be "Ready" ...
I0331 18:04:55.819719 32536 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:56.219488 32536 pod_ready.go:92] pod "kube-scheduler-pause-939189" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:56.219513 32536 pod_ready.go:81] duration metric: took 399.783789ms waiting for pod "kube-scheduler-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:56.219524 32536 pod_ready.go:38] duration metric: took 2.582225755s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0331 18:04:56.219550 32536 api_server.go:51] waiting for apiserver process to appear ...
I0331 18:04:56.219595 32536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0331 18:04:56.240919 32536 api_server.go:71] duration metric: took 2.768070005s to wait for apiserver process to appear ...
I0331 18:04:56.240947 32536 api_server.go:87] waiting for apiserver healthz status ...
I0331 18:04:56.240961 32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
I0331 18:04:56.247401 32536 api_server.go:278] https://192.168.39.142:8443/healthz returned 200:
ok
I0331 18:04:56.248689 32536 api_server.go:140] control plane version: v1.26.3
I0331 18:04:56.248709 32536 api_server.go:130] duration metric: took 7.754551ms to wait for apiserver health ...
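The api_server.go lines above probe https://192.168.39.142:8443/healthz and proceed only after a 200 "ok" response. A rough Go sketch of such a probe, trusting a CA bundle supplied by the caller; the function name, timeout, and parameters are illustrative assumptions:

package healthz

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz GETs https://<hostport>/healthz with the given CA bundle and
// succeeds only on a 200 response, like the apiserver health wait above.
func checkHealthz(hostport string, caPEM []byte) error {
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return fmt.Errorf("could not parse CA bundle")
	}
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}
	resp, err := client.Get("https://" + hostport + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}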
I0331 18:04:56.248718 32536 system_pods.go:43] waiting for kube-system pods to appear ...
I0331 18:04:56.422125 32536 system_pods.go:59] 6 kube-system pods found
I0331 18:04:56.422151 32536 system_pods.go:61] "coredns-787d4945fb-hcrtc" [1e78e1f9-1a39-4c02-a4e9-51e5b268d077] Running
I0331 18:04:56.422159 32536 system_pods.go:61] "etcd-pause-939189" [cdc68c44-f3a4-4655-9818-48f074e8e376] Running
I0331 18:04:56.422166 32536 system_pods.go:61] "kube-apiserver-pause-939189" [c40b018d-97b2-4cdf-9edc-e1473d304c55] Running
I0331 18:04:56.422174 32536 system_pods.go:61] "kube-controller-manager-pause-939189" [69a62fcf-5db8-4354-aa08-ee5d2209a0ed] Running
I0331 18:04:56.422181 32536 system_pods.go:61] "kube-proxy-jg8p6" [dd3378f4-948b-4bec-abd3-ea9dc35d3259] Running
I0331 18:04:56.422187 32536 system_pods.go:61] "kube-scheduler-pause-939189" [b51eb2f5-8508-46f2-8c02-652ad1a69a1e] Running
I0331 18:04:56.422193 32536 system_pods.go:74] duration metric: took 173.469145ms to wait for pod list to return data ...
I0331 18:04:56.422202 32536 default_sa.go:34] waiting for default service account to be created ...
I0331 18:04:56.618165 32536 default_sa.go:45] found service account: "default"
I0331 18:04:56.618190 32536 default_sa.go:55] duration metric: took 195.978567ms for default service account to be created ...
I0331 18:04:56.618200 32536 system_pods.go:116] waiting for k8s-apps to be running ...
I0331 18:04:56.823045 32536 system_pods.go:86] 6 kube-system pods found
I0331 18:04:56.823082 32536 system_pods.go:89] "coredns-787d4945fb-hcrtc" [1e78e1f9-1a39-4c02-a4e9-51e5b268d077] Running
I0331 18:04:56.823092 32536 system_pods.go:89] "etcd-pause-939189" [cdc68c44-f3a4-4655-9818-48f074e8e376] Running
I0331 18:04:56.823099 32536 system_pods.go:89] "kube-apiserver-pause-939189" [c40b018d-97b2-4cdf-9edc-e1473d304c55] Running
I0331 18:04:56.823107 32536 system_pods.go:89] "kube-controller-manager-pause-939189" [69a62fcf-5db8-4354-aa08-ee5d2209a0ed] Running
I0331 18:04:56.823113 32536 system_pods.go:89] "kube-proxy-jg8p6" [dd3378f4-948b-4bec-abd3-ea9dc35d3259] Running
I0331 18:04:56.823120 32536 system_pods.go:89] "kube-scheduler-pause-939189" [b51eb2f5-8508-46f2-8c02-652ad1a69a1e] Running
I0331 18:04:56.823129 32536 system_pods.go:126] duration metric: took 204.923041ms to wait for k8s-apps to be running ...
I0331 18:04:56.823144 32536 system_svc.go:44] waiting for kubelet service to be running ....
I0331 18:04:56.823194 32536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0331 18:04:56.843108 32536 system_svc.go:56] duration metric: took 19.952106ms WaitForService to wait for kubelet.
I0331 18:04:56.843157 32536 kubeadm.go:578] duration metric: took 3.370313636s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0331 18:04:56.843181 32536 node_conditions.go:102] verifying NodePressure condition ...
I0331 18:04:57.019150 32536 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0331 18:04:57.019178 32536 node_conditions.go:123] node cpu capacity is 2
I0331 18:04:57.019188 32536 node_conditions.go:105] duration metric: took 176.00176ms to run NodePressure ...
I0331 18:04:57.019201 32536 start.go:228] waiting for startup goroutines ...
I0331 18:04:57.019209 32536 start.go:233] waiting for cluster config update ...
I0331 18:04:57.019219 32536 start.go:242] writing updated cluster config ...
I0331 18:04:57.019587 32536 ssh_runner.go:195] Run: rm -f paused
I0331 18:04:57.094738 32536 start.go:557] kubectl: 1.26.3, cluster: 1.26.3 (minor skew: 0)
I0331 18:04:57.097707 32536 out.go:177] * Done! kubectl is now configured to use "pause-939189" cluster and "default" namespace by default
I0331 18:04:52.670594 33820 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime docker
W0331 18:04:52.706864 33820 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-docker-overlay2-amd64.tar.lz4 status code: 404
I0331 18:04:52.707029 33820 profile.go:148] Saving config to /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/NoKubernetes-746317/config.json ...
I0331 18:04:52.707063 33820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/NoKubernetes-746317/config.json: {Name:mkc819cfb6c45ebbebd0d82f4a0be54fd6cd98e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0331 18:04:52.707228 33820 cache.go:193] Successfully downloaded all kic artifacts
I0331 18:04:52.707251 33820 start.go:364] acquiring machines lock for NoKubernetes-746317: {Name:mkfdc5208de17d93700ea90324b4f36214eab469 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0331 18:04:55.847800 33276 out.go:204] * Preparing Kubernetes v1.26.3 on Docker 20.10.23 ...
I0331 18:04:55.847864 33276 main.go:141] libmachine: (auto-347180) Calling .GetIP
I0331 18:04:55.850787 33276 main.go:141] libmachine: (auto-347180) DBG | domain auto-347180 has defined MAC address 52:54:00:61:01:e7 in network mk-auto-347180
I0331 18:04:55.851207 33276 main.go:141] libmachine: (auto-347180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:01:e7", ip: ""} in network mk-auto-347180: {Iface:virbr3 ExpiryTime:2023-03-31 19:04:35 +0000 UTC Type:0 Mac:52:54:00:61:01:e7 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:auto-347180 Clientid:01:52:54:00:61:01:e7}
I0331 18:04:55.851239 33276 main.go:141] libmachine: (auto-347180) DBG | domain auto-347180 has defined IP address 192.168.72.199 and MAC address 52:54:00:61:01:e7 in network mk-auto-347180
I0331 18:04:55.851415 33276 ssh_runner.go:195] Run: grep 192.168.72.1 host.minikube.internal$ /etc/hosts
I0331 18:04:55.855857 33276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0331 18:04:55.868328 33276 localpath.go:92] copying /home/jenkins/minikube-integration/16144-3494/.minikube/client.crt -> /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.crt
I0331 18:04:55.868487 33276 localpath.go:117] copying /home/jenkins/minikube-integration/16144-3494/.minikube/client.key -> /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.key
I0331 18:04:55.868617 33276 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
I0331 18:04:55.868673 33276 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0331 18:04:55.896702 33276 docker.go:639] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.3
registry.k8s.io/kube-scheduler:v1.26.3
registry.k8s.io/kube-controller-manager:v1.26.3
registry.k8s.io/kube-proxy:v1.26.3
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0331 18:04:55.896733 33276 docker.go:569] Images already preloaded, skipping extraction
I0331 18:04:55.896797 33276 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0331 18:04:55.924955 33276 docker.go:639] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.3
registry.k8s.io/kube-controller-manager:v1.26.3
registry.k8s.io/kube-scheduler:v1.26.3
registry.k8s.io/kube-proxy:v1.26.3
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0331 18:04:55.924992 33276 cache_images.go:84] Images are preloaded, skipping loading
I0331 18:04:55.925053 33276 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0331 18:04:55.965144 33276 cni.go:84] Creating CNI manager for ""
I0331 18:04:55.965172 33276 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0331 18:04:55.965185 33276 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0331 18:04:55.965205 33276 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.199 APIServerPort:8443 KubernetesVersion:v1.26.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-347180 NodeName:auto-347180 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.199"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.199 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0331 18:04:55.965393 33276 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.72.199
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "auto-347180"
kubeletExtraArgs:
node-ip: 192.168.72.199
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.72.199"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0331 18:04:55.965514 33276 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=auto-347180 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.199
[Install]
config:
{KubernetesVersion:v1.26.3 ClusterName:auto-347180 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0331 18:04:55.965613 33276 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.3
I0331 18:04:55.975410 33276 binaries.go:44] Found k8s binaries, skipping transfer
I0331 18:04:55.975480 33276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0331 18:04:55.984755 33276 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
I0331 18:04:56.009787 33276 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0331 18:04:56.031312 33276 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
I0331 18:04:56.049714 33276 ssh_runner.go:195] Run: grep 192.168.72.199 control-plane.minikube.internal$ /etc/hosts
I0331 18:04:56.054641 33276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.199 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0331 18:04:56.067876 33276 certs.go:56] Setting up /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180 for IP: 192.168.72.199
I0331 18:04:56.067912 33276 certs.go:186] acquiring lock for shared ca certs: {Name:mk5b2b979756b4a682c5be81dc53f006bb9a7a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0331 18:04:56.068110 33276 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16144-3494/.minikube/ca.key
I0331 18:04:56.068167 33276 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16144-3494/.minikube/proxy-client-ca.key
I0331 18:04:56.068278 33276 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.key
I0331 18:04:56.068308 33276 certs.go:315] generating minikube signed cert: /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.key.217b3e23
I0331 18:04:56.068325 33276 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.crt.217b3e23 with IP's: [192.168.72.199 10.96.0.1 127.0.0.1 10.0.0.1]
I0331 18:04:56.209196 33276 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.crt.217b3e23 ...
I0331 18:04:56.209224 33276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.crt.217b3e23: {Name:mk3e4cd47c6706ab2f578dfdd08d80ebdd3c15fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0331 18:04:56.209429 33276 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.key.217b3e23 ...
I0331 18:04:56.209445 33276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.key.217b3e23: {Name:mk009817638857b2bbdb66530e778b671a0003f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0331 18:04:56.209547 33276 certs.go:333] copying /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.crt.217b3e23 -> /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.crt
I0331 18:04:56.209609 33276 certs.go:337] copying /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.key.217b3e23 -> /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.key
I0331 18:04:56.209656 33276 certs.go:315] generating aggregator signed cert: /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.key
I0331 18:04:56.209668 33276 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.crt with IP's: []
I0331 18:04:56.257382 33276 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.crt ...
I0331 18:04:56.257405 33276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.crt: {Name:mk082703dadea0ea3251f4202bbf72399caa3a43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0331 18:04:56.257583 33276 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.key ...
I0331 18:04:56.257595 33276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.key: {Name:mk4b72bffb94c8b27e86fc5f7b2d38af391fe2ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
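The crypto.go lines above generate an apiserver serving certificate signed by the profile's CA, with IP SANs for the node address, the in-cluster service address 10.96.0.1, and loopback addresses. A compact crypto/x509 sketch of signing a certificate with IP SANs; the key size, validity, common name, and helper name are assumptions for illustration, not minikube's actual code:

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// signServingCert issues a certificate for the given IP SANs, signed by the
// provided CA, roughly like the apiserver.crt generation logged above.
func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) (certPEM, keyPEM []byte, err error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips, // e.g. 192.168.72.199, 10.96.0.1, 127.0.0.1, 10.0.0.1
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}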
I0331 18:04:56.257819 33276 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/10540.pem (1338 bytes)
W0331 18:04:56.257876 33276 certs.go:397] ignoring /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/10540_empty.pem, impossibly tiny 0 bytes
I0331 18:04:56.257892 33276 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/ca-key.pem (1675 bytes)
I0331 18:04:56.257924 33276 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/ca.pem (1078 bytes)
I0331 18:04:56.257959 33276 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/cert.pem (1123 bytes)
I0331 18:04:56.257987 33276 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/key.pem (1679 bytes)
I0331 18:04:56.258026 33276 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16144-3494/.minikube/files/etc/ssl/certs/105402.pem (1708 bytes)
I0331 18:04:56.258526 33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0331 18:04:56.287806 33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0331 18:04:56.314968 33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0331 18:04:56.338082 33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0331 18:04:56.360708 33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0331 18:04:56.390138 33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0331 18:04:56.419129 33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0331 18:04:56.447101 33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0331 18:04:56.472169 33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/files/etc/ssl/certs/105402.pem --> /usr/share/ca-certificates/105402.pem (1708 bytes)
I0331 18:04:56.498664 33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0331 18:04:56.525516 33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/certs/10540.pem --> /usr/share/ca-certificates/10540.pem (1338 bytes)
I0331 18:04:56.548806 33276 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0331 18:04:56.565642 33276 ssh_runner.go:195] Run: openssl version
I0331 18:04:56.571067 33276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/105402.pem && ln -fs /usr/share/ca-certificates/105402.pem /etc/ssl/certs/105402.pem"
I0331 18:04:56.580624 33276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/105402.pem
I0331 18:04:56.585385 33276 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 31 17:26 /usr/share/ca-certificates/105402.pem
I0331 18:04:56.585449 33276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/105402.pem
I0331 18:04:56.591662 33276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/105402.pem /etc/ssl/certs/3ec20f2e.0"
I0331 18:04:56.602558 33276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0331 18:04:56.612933 33276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0331 18:04:56.619029 33276 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 31 17:21 /usr/share/ca-certificates/minikubeCA.pem
I0331 18:04:56.619087 33276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0331 18:04:56.626198 33276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0331 18:04:56.639266 33276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10540.pem && ln -fs /usr/share/ca-certificates/10540.pem /etc/ssl/certs/10540.pem"
I0331 18:04:56.649914 33276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10540.pem
I0331 18:04:56.654454 33276 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 31 17:26 /usr/share/ca-certificates/10540.pem
I0331 18:04:56.654515 33276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10540.pem
I0331 18:04:56.661570 33276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10540.pem /etc/ssl/certs/51391683.0"
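Note: the ssh_runner sequence above is the CA-trust step of the start: each .pem is copied under /usr/share/ca-certificates, hashed with 'openssl x509 -hash -noout', and symlinked into /etc/ssl/certs as <hash>.0 so OpenSSL-based clients on the node can find it. A minimal Go sketch of that hash-and-symlink step (illustrative only, not minikube's actual helper; the cert path is taken from the log):

// installCACert mirrors the hash-and-symlink commands seen above: it computes
// the OpenSSL subject hash of a CA certificate and links it into /etc/ssl/certs.
// Hypothetical sketch for illustration; requires the openssl CLI on PATH.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(certPath string) error {
	// Equivalent of: openssl x509 -hash -noout -in <cert>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))

	// Equivalent of: ln -fs <cert> /etc/ssl/certs/<hash>.0
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // -f: replace an existing link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}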
I0331 18:04:56.671169 33276 kubeadm.go:401] StartCluster: {Name:auto-347180 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16191/minikube-v1.29.0-1680115329-16191-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:auto-347180 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0331 18:04:56.671303 33276 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0331 18:04:56.695923 33276 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0331 18:04:56.705641 33276 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0331 18:04:56.715247 33276 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0331 18:04:56.724602 33276 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0331 18:04:56.724655 33276 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0331 18:04:56.783971 33276 kubeadm.go:322] [init] Using Kubernetes version: v1.26.3
I0331 18:04:56.784098 33276 kubeadm.go:322] [preflight] Running pre-flight checks
I0331 18:04:56.929895 33276 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0331 18:04:56.930047 33276 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0331 18:04:56.930171 33276 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0331 18:04:57.156879 33276 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
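Note: the kubeadm init run above pulls the control-plane images during preflight; the message at 18:04:56.930 points at 'kubeadm config images pull' as the way to do that ahead of time. A tiny wrapper for that pre-pull (illustrative; the version string is taken from the log and the kubeadm binary is assumed to be on PATH):

// Pre-pull the control-plane images before running kubeadm init.
// Hypothetical sketch; this is not part of minikube itself.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubeadm", "config", "images", "pull",
		"--kubernetes-version", "v1.26.3")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}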
*
* ==> Docker <==
* -- Journal begins at Fri 2023-03-31 18:02:19 UTC, ends at Fri 2023-03-31 18:04:57 UTC. --
Mar 31 18:04:32 pause-939189 dockerd[4567]: time="2023-03-31T18:04:32.708286788Z" level=warning msg="cleaning up after shim disconnected" id=b400c024f135f7c82274f810b9ce06d15d41eb95e87b7caae02c5db9542e56db namespace=moby
Mar 31 18:04:32 pause-939189 dockerd[4567]: time="2023-03-31T18:04:32.708340669Z" level=info msg="cleaning up dead shim" namespace=moby
Mar 31 18:04:32 pause-939189 cri-dockerd[5345]: W0331 18:04:32.836659 5345 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.348379648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.348500345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.348521902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.348533652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.357176945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.357265075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.357291341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.357305204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 31 18:04:39 pause-939189 cri-dockerd[5345]: time="2023-03-31T18:04:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.947465780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.947526265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.947543565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.947555826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.953976070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.954296632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.954453909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.954623054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 31 18:04:41 pause-939189 cri-dockerd[5345]: time="2023-03-31T18:04:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/11bb612576207ce6f9fdbde8dfa7f6235a96c8d3be559f2e51d8d4b173aa4b51/resolv.conf as [nameserver 192.168.122.1]"
Mar 31 18:04:41 pause-939189 dockerd[4567]: time="2023-03-31T18:04:41.977346347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 31 18:04:41 pause-939189 dockerd[4567]: time="2023-03-31T18:04:41.977635522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 31 18:04:41 pause-939189 dockerd[4567]: time="2023-03-31T18:04:41.977752683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 31 18:04:41 pause-939189 dockerd[4567]: time="2023-03-31T18:04:41.977778301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
1344b5c000a9d 5185b96f0becf 16 seconds ago Running coredns 2 11bb612576207
1686d0df28f10 92ed2bec97a63 17 seconds ago Running kube-proxy 3 18b52638ab7a1
5d40b2ef4a864 5a79047369329 22 seconds ago Running kube-scheduler 3 df301869b351d
80b600760e999 fce326961ae2d 22 seconds ago Running etcd 3 1089f600d6711
84de5d76d35ca ce8c2293ef09c 26 seconds ago Running kube-controller-manager 2 55c3c7ee9ca0a
966b1cd3b351e 1d9b3cbae03ce 28 seconds ago Running kube-apiserver 2 0afb944a4f151
a0ad0a35a3e08 fce326961ae2d 43 seconds ago Exited etcd 2 c447bce0c8aef
b4599f5bff86d 5a79047369329 43 seconds ago Exited kube-scheduler 2 6981b4d73a6c9
9999f58d27656 92ed2bec97a63 45 seconds ago Exited kube-proxy 2 f5b35d44675c8
b400c024f135f 5185b96f0becf 58 seconds ago Exited coredns 1 5e8b08d2a8f2f
874fcc56f9f62 1d9b3cbae03ce About a minute ago Exited kube-apiserver 1 4045aa0f265a1
8ace7d6c4bee4 ce8c2293ef09c About a minute ago Exited kube-controller-manager 1 b034146fe7e8c
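Note: the table shows the restart pattern of the second start: every control-plane component has an Exited container with a lower attempt number sitting next to its Running replacement. A table like this can be approximated with 'docker ps -a' filtered to the k8s_ name prefix; the sketch below is illustrative and is not the exact command minikube uses:

// List k8s_ containers (running and exited) roughly the way the table above
// was produced. Hypothetical sketch; assumes the docker CLI is available.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_",
		"--format", "{{.ID}}\t{{.Names}}\t{{.Status}}").Output()
	if err != nil {
		fmt.Println("docker ps failed:", err)
		return
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		fields := strings.SplitN(line, "\t", 3)
		if len(fields) == 3 {
			fmt.Printf("%-14s %-70s %s\n", fields[0], fields[1], fields[2])
		}
	}
}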
*
* ==> coredns [1344b5c000a9] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] 127.0.0.1:58096 - 62967 "HINFO IN 3459962459257687508.4367275231804161359. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020935271s
*
* ==> coredns [b400c024f135] <==
* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] 127.0.0.1:42721 - 9088 "HINFO IN 8560628874867663181.8710474958470687856. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.051252273s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
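Note: this CoreDNS instance kept retrying the in-cluster API at the service VIP and was then terminated; the connection-refused errors line up with the kube-apiserver restart visible elsewhere in this log. An ad-hoc probe of the same endpoint (illustrative only; it skips TLS verification and assumes it runs inside the cluster network):

// Probe the endpoint CoreDNS was retrying: GET https://10.96.0.1:443/version.
// "connection refused" here means kube-apiserver is not reachable via the VIP.
// Hypothetical sketch for diagnosis, not CoreDNS code.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 3 * time.Second,
		Transport: &http.Transport{
			// Skip verification only for this ad-hoc probe.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.96.0.1:443/version")
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %s: %s\n", resp.Status, body)
}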
*
* ==> describe nodes <==
* Name: pause-939189
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-939189
kubernetes.io/os=linux
minikube.k8s.io/commit=945b3fc45ee9ac8e1ceaffb00a71ec22c717b10e
minikube.k8s.io/name=pause-939189
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_03_31T18_03_00_0700
minikube.k8s.io/version=v1.29.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 31 Mar 2023 18:02:56 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-939189
AcquireTime: <unset>
RenewTime: Fri, 31 Mar 2023 18:04:49 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 31 Mar 2023 18:04:39 +0000 Fri, 31 Mar 2023 18:02:53 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 31 Mar 2023 18:04:39 +0000 Fri, 31 Mar 2023 18:02:53 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 31 Mar 2023 18:04:39 +0000 Fri, 31 Mar 2023 18:02:53 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 31 Mar 2023 18:04:39 +0000 Fri, 31 Mar 2023 18:03:00 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.142
Hostname: pause-939189
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017420Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017420Ki
pods: 110
System Info:
Machine ID: ff362cba6608463787695edbccc756af
System UUID: ff362cba-6608-4637-8769-5edbccc756af
Boot ID: 8edfbfeb-24ea-46a9-b4c5-e31dc2d1b4c1
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.23
Kubelet Version: v1.26.3
Kube-Proxy Version: v1.26.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-787d4945fb-hcrtc 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 106s
kube-system etcd-pause-939189 100m (5%) 0 (0%) 100Mi (5%) 0 (0%) 118s
kube-system kube-apiserver-pause-939189 250m (12%) 0 (0%) 0 (0%) 0 (0%) 2m1s
kube-system kube-controller-manager-pause-939189 200m (10%) 0 (0%) 0 (0%) 0 (0%) 118s
kube-system kube-proxy-jg8p6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 106s
kube-system kube-scheduler-pause-939189 100m (5%) 0 (0%) 0 (0%) 0 (0%) 118s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 170Mi (8%) 170Mi (8%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 102s kube-proxy
Normal Starting 16s kube-proxy
Normal NodeHasSufficientMemory 2m8s (x4 over 2m8s) kubelet Node pause-939189 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m8s (x4 over 2m8s) kubelet Node pause-939189 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m8s (x4 over 2m8s) kubelet Node pause-939189 status is now: NodeHasSufficientPID
Normal NodeHasSufficientPID 118s kubelet Node pause-939189 status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 118s kubelet Node pause-939189 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 118s kubelet Node pause-939189 status is now: NodeHasNoDiskPressure
Normal NodeAllocatableEnforced 118s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 118s kubelet Node pause-939189 status is now: NodeReady
Normal Starting 118s kubelet Starting kubelet.
Normal RegisteredNode 107s node-controller Node pause-939189 event: Registered Node pause-939189 in Controller
Normal Starting 24s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 24s (x8 over 24s) kubelet Node pause-939189 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 24s (x8 over 24s) kubelet Node pause-939189 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 24s (x7 over 24s) kubelet Node pause-939189 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 24s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 6s node-controller Node pause-939189 event: Registered Node pause-939189 in Controller
*
* ==> dmesg <==
* [ +0.422579] systemd-fstab-generator[930]: Ignoring "noauto" for root device
[ +0.164482] systemd-fstab-generator[941]: Ignoring "noauto" for root device
[ +0.161981] systemd-fstab-generator[954]: Ignoring "noauto" for root device
[ +1.600832] systemd-fstab-generator[1102]: Ignoring "noauto" for root device
[ +0.111337] systemd-fstab-generator[1113]: Ignoring "noauto" for root device
[ +0.130984] systemd-fstab-generator[1124]: Ignoring "noauto" for root device
[ +0.124503] systemd-fstab-generator[1135]: Ignoring "noauto" for root device
[ +0.132321] systemd-fstab-generator[1149]: Ignoring "noauto" for root device
[ +4.351511] systemd-fstab-generator[1397]: Ignoring "noauto" for root device
[ +0.702241] kauditd_printk_skb: 68 callbacks suppressed
[ +9.105596] systemd-fstab-generator[2340]: Ignoring "noauto" for root device
[Mar31 18:03] kauditd_printk_skb: 8 callbacks suppressed
[ +5.099775] kauditd_printk_skb: 28 callbacks suppressed
[ +22.013414] systemd-fstab-generator[3826]: Ignoring "noauto" for root device
[ +0.416829] systemd-fstab-generator[3860]: Ignoring "noauto" for root device
[ +0.213956] systemd-fstab-generator[3871]: Ignoring "noauto" for root device
[ +0.230022] systemd-fstab-generator[3884]: Ignoring "noauto" for root device
[ +5.258034] kauditd_printk_skb: 4 callbacks suppressed
[ +6.349775] systemd-fstab-generator[4980]: Ignoring "noauto" for root device
[ +0.138234] systemd-fstab-generator[4991]: Ignoring "noauto" for root device
[ +0.169296] systemd-fstab-generator[5007]: Ignoring "noauto" for root device
[ +0.160988] systemd-fstab-generator[5056]: Ignoring "noauto" for root device
[ +0.226282] systemd-fstab-generator[5127]: Ignoring "noauto" for root device
[ +4.119790] kauditd_printk_skb: 37 callbacks suppressed
[Mar31 18:04] systemd-fstab-generator[7161]: Ignoring "noauto" for root device
*
* ==> etcd [80b600760e99] <==
* {"level":"warn","ts":"2023-03-31T18:04:50.753Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-03-31T18:04:50.314Z","time spent":"439.098122ms","remote":"127.0.0.1:52040","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6620,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-939189\" mod_revision:461 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-939189\" value_size:6558 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-939189\" > >"}
{"level":"warn","ts":"2023-03-31T18:04:50.754Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"212.221672ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:341"}
{"level":"info","ts":"2023-03-31T18:04:50.754Z","caller":"traceutil/trace.go:171","msg":"trace[1823280090] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:462; }","duration":"212.395865ms","start":"2023-03-31T18:04:50.542Z","end":"2023-03-31T18:04:50.754Z","steps":["trace[1823280090] 'agreement among raft nodes before linearized reading' (duration: 212.138709ms)"],"step_count":1}
{"level":"warn","ts":"2023-03-31T18:04:50.754Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"341.184734ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-939189\" ","response":"range_response_count:1 size:5480"}
{"level":"info","ts":"2023-03-31T18:04:50.754Z","caller":"traceutil/trace.go:171","msg":"trace[1705229913] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-939189; range_end:; response_count:1; response_revision:462; }","duration":"341.208794ms","start":"2023-03-31T18:04:50.413Z","end":"2023-03-31T18:04:50.754Z","steps":["trace[1705229913] 'agreement among raft nodes before linearized reading' (duration: 341.128291ms)"],"step_count":1}
{"level":"warn","ts":"2023-03-31T18:04:50.754Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-03-31T18:04:50.413Z","time spent":"341.245678ms","remote":"127.0.0.1:52040","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":5504,"request content":"key:\"/registry/pods/kube-system/etcd-pause-939189\" "}
{"level":"warn","ts":"2023-03-31T18:04:51.208Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"258.605359ms","expected-duration":"100ms","prefix":"","request":"header:<ID:839788533735404794 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:0ba78738d7beb4f9>","response":"size:41"}
{"level":"info","ts":"2023-03-31T18:04:51.209Z","caller":"traceutil/trace.go:171","msg":"trace[2128410207] linearizableReadLoop","detail":"{readStateIndex:500; appliedIndex:499; }","duration":"296.499176ms","start":"2023-03-31T18:04:50.912Z","end":"2023-03-31T18:04:51.209Z","steps":["trace[2128410207] 'read index received' (duration: 37.740315ms)","trace[2128410207] 'applied index is now lower than readState.Index' (duration: 258.757557ms)"],"step_count":2}
{"level":"warn","ts":"2023-03-31T18:04:51.209Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"296.647465ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-939189\" ","response":"range_response_count:1 size:5480"}
{"level":"info","ts":"2023-03-31T18:04:51.209Z","caller":"traceutil/trace.go:171","msg":"trace[478960090] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-939189; range_end:; response_count:1; response_revision:462; }","duration":"296.673964ms","start":"2023-03-31T18:04:50.912Z","end":"2023-03-31T18:04:51.209Z","steps":["trace[478960090] 'agreement among raft nodes before linearized reading' (duration: 296.561324ms)"],"step_count":1}
{"level":"warn","ts":"2023-03-31T18:04:51.209Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-03-31T18:04:50.762Z","time spent":"447.271669ms","remote":"127.0.0.1:52016","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
{"level":"info","ts":"2023-03-31T18:04:52.108Z","caller":"traceutil/trace.go:171","msg":"trace[1920228168] linearizableReadLoop","detail":"{readStateIndex:502; appliedIndex:501; }","duration":"165.267816ms","start":"2023-03-31T18:04:51.943Z","end":"2023-03-31T18:04:52.108Z","steps":["trace[1920228168] 'read index received' (duration: 165.022721ms)","trace[1920228168] 'applied index is now lower than readState.Index' (duration: 244.277µs)"],"step_count":2}
{"level":"info","ts":"2023-03-31T18:04:52.110Z","caller":"traceutil/trace.go:171","msg":"trace[1687701317] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"176.741493ms","start":"2023-03-31T18:04:51.933Z","end":"2023-03-31T18:04:52.110Z","steps":["trace[1687701317] 'process raft request' (duration: 175.168227ms)"],"step_count":1}
{"level":"warn","ts":"2023-03-31T18:04:52.112Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"168.992818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" ","response":"range_response_count:1 size:207"}
{"level":"info","ts":"2023-03-31T18:04:52.112Z","caller":"traceutil/trace.go:171","msg":"trace[1794617064] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:464; }","duration":"169.069396ms","start":"2023-03-31T18:04:51.943Z","end":"2023-03-31T18:04:52.112Z","steps":["trace[1794617064] 'agreement among raft nodes before linearized reading' (duration: 165.391165ms)"],"step_count":1}
{"level":"warn","ts":"2023-03-31T18:04:52.293Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"123.74239ms","expected-duration":"100ms","prefix":"","request":"header:<ID:839788533735404827 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-f9qtf\" mod_revision:390 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-f9qtf\" value_size:1239 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-f9qtf\" > >>","response":"size:16"}
{"level":"info","ts":"2023-03-31T18:04:52.294Z","caller":"traceutil/trace.go:171","msg":"trace[280136650] transaction","detail":"{read_only:false; response_revision:465; number_of_response:1; }","duration":"168.841202ms","start":"2023-03-31T18:04:52.125Z","end":"2023-03-31T18:04:52.294Z","steps":["trace[280136650] 'process raft request' (duration: 44.44482ms)","trace[280136650] 'compare' (duration: 123.644413ms)"],"step_count":2}
{"level":"info","ts":"2023-03-31T18:04:52.297Z","caller":"traceutil/trace.go:171","msg":"trace[929692375] transaction","detail":"{read_only:false; response_revision:467; number_of_response:1; }","duration":"142.41231ms","start":"2023-03-31T18:04:52.154Z","end":"2023-03-31T18:04:52.297Z","steps":["trace[929692375] 'process raft request' (duration: 142.313651ms)"],"step_count":1}
{"level":"info","ts":"2023-03-31T18:04:52.298Z","caller":"traceutil/trace.go:171","msg":"trace[1640521255] transaction","detail":"{read_only:false; response_revision:466; number_of_response:1; }","duration":"169.933179ms","start":"2023-03-31T18:04:52.128Z","end":"2023-03-31T18:04:52.298Z","steps":["trace[1640521255] 'process raft request' (duration: 168.949367ms)"],"step_count":1}
{"level":"info","ts":"2023-03-31T18:04:52.583Z","caller":"traceutil/trace.go:171","msg":"trace[1929288585] linearizableReadLoop","detail":"{readStateIndex:506; appliedIndex:505; }","duration":"170.211991ms","start":"2023-03-31T18:04:52.412Z","end":"2023-03-31T18:04:52.583Z","steps":["trace[1929288585] 'read index received' (duration: 128.7627ms)","trace[1929288585] 'applied index is now lower than readState.Index' (duration: 41.448583ms)"],"step_count":2}
{"level":"info","ts":"2023-03-31T18:04:52.583Z","caller":"traceutil/trace.go:171","msg":"trace[47408908] transaction","detail":"{read_only:false; response_revision:468; number_of_response:1; }","duration":"258.75753ms","start":"2023-03-31T18:04:52.324Z","end":"2023-03-31T18:04:52.583Z","steps":["trace[47408908] 'process raft request' (duration: 216.820717ms)","trace[47408908] 'compare' (duration: 41.26405ms)"],"step_count":2}
{"level":"warn","ts":"2023-03-31T18:04:52.584Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"171.519483ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-939189\" ","response":"range_response_count:1 size:5480"}
{"level":"info","ts":"2023-03-31T18:04:52.584Z","caller":"traceutil/trace.go:171","msg":"trace[1263506650] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-939189; range_end:; response_count:1; response_revision:468; }","duration":"171.595141ms","start":"2023-03-31T18:04:52.412Z","end":"2023-03-31T18:04:52.584Z","steps":["trace[1263506650] 'agreement among raft nodes before linearized reading' (duration: 171.444814ms)"],"step_count":1}
{"level":"warn","ts":"2023-03-31T18:04:52.584Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"150.725144ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2023-03-31T18:04:52.585Z","caller":"traceutil/trace.go:171","msg":"trace[213446996] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:468; }","duration":"150.795214ms","start":"2023-03-31T18:04:52.434Z","end":"2023-03-31T18:04:52.584Z","steps":["trace[213446996] 'agreement among raft nodes before linearized reading' (duration: 150.635678ms)"],"step_count":1}
*
* ==> etcd [a0ad0a35a3e0] <==
* {"level":"info","ts":"2023-03-31T18:04:14.959Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.39.142:2380"}
{"level":"info","ts":"2023-03-31T18:04:14.959Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.39.142:2380"}
{"level":"info","ts":"2023-03-31T18:04:14.959Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-03-31T18:04:14.962Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"d7a5d3e20a6b0ba7","initial-advertise-peer-urls":["https://192.168.39.142:2380"],"listen-peer-urls":["https://192.168.39.142:2380"],"advertise-client-urls":["https://192.168.39.142:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.142:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-03-31T18:04:14.962Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-03-31T18:04:15.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 is starting a new election at term 3"}
{"level":"info","ts":"2023-03-31T18:04:15.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 became pre-candidate at term 3"}
{"level":"info","ts":"2023-03-31T18:04:15.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 received MsgPreVoteResp from d7a5d3e20a6b0ba7 at term 3"}
{"level":"info","ts":"2023-03-31T18:04:15.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 became candidate at term 4"}
{"level":"info","ts":"2023-03-31T18:04:15.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 received MsgVoteResp from d7a5d3e20a6b0ba7 at term 4"}
{"level":"info","ts":"2023-03-31T18:04:15.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 became leader at term 4"}
{"level":"info","ts":"2023-03-31T18:04:15.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d7a5d3e20a6b0ba7 elected leader d7a5d3e20a6b0ba7 at term 4"}
{"level":"info","ts":"2023-03-31T18:04:15.341Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"d7a5d3e20a6b0ba7","local-member-attributes":"{Name:pause-939189 ClientURLs:[https://192.168.39.142:2379]}","request-path":"/0/members/d7a5d3e20a6b0ba7/attributes","cluster-id":"f7d6b5428c0c9dc0","publish-timeout":"7s"}
{"level":"info","ts":"2023-03-31T18:04:15.341Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-31T18:04:15.342Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-03-31T18:04:15.342Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-31T18:04:15.343Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.39.142:2379"}
{"level":"info","ts":"2023-03-31T18:04:15.347Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-03-31T18:04:15.347Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-03-31T18:04:27.719Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2023-03-31T18:04:27.719Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-939189","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.142:2380"],"advertise-client-urls":["https://192.168.39.142:2379"]}
{"level":"info","ts":"2023-03-31T18:04:27.723Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d7a5d3e20a6b0ba7","current-leader-member-id":"d7a5d3e20a6b0ba7"}
{"level":"info","ts":"2023-03-31T18:04:27.727Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.39.142:2380"}
{"level":"info","ts":"2023-03-31T18:04:27.728Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.39.142:2380"}
{"level":"info","ts":"2023-03-31T18:04:27.728Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-939189","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.142:2380"],"advertise-client-urls":["https://192.168.39.142:2379"]}
*
* ==> kernel <==
* 18:04:58 up 2 min, 0 users, load average: 2.10, 1.02, 0.39
Linux pause-939189 5.10.57 #1 SMP Wed Mar 29 23:38:32 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [874fcc56f9f6] <==
* W0331 18:04:09.094355 1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0331 18:04:10.570941 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0331 18:04:14.640331 1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
E0331 18:04:19.527936 1 run.go:74] "command failed" err="context deadline exceeded"
*
* ==> kube-apiserver [966b1cd3b351] <==
* I0331 18:04:39.222688 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0331 18:04:39.205515 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
I0331 18:04:39.314255 1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
I0331 18:04:39.316506 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0331 18:04:39.317062 1 shared_informer.go:280] Caches are synced for configmaps
I0331 18:04:39.318946 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0331 18:04:39.323304 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0331 18:04:39.338800 1 apf_controller.go:366] Running API Priority and Fairness config worker
I0331 18:04:39.338942 1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
I0331 18:04:39.339358 1 shared_informer.go:280] Caches are synced for crd-autoregister
I0331 18:04:39.397474 1 shared_informer.go:280] Caches are synced for node_authorizer
I0331 18:04:39.418720 1 cache.go:39] Caches are synced for autoregister controller
I0331 18:04:39.958002 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0331 18:04:40.221547 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0331 18:04:41.099152 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0331 18:04:41.124185 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0331 18:04:41.212998 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0331 18:04:41.267710 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0331 18:04:41.286487 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0331 18:04:51.284113 1 trace.go:219] Trace[2025945949]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.39.142,type:*v1.Endpoints,resource:apiServerIPInfo (31-Mar-2023 18:04:50.760) (total time: 523ms):
Trace[2025945949]: ---"Transaction prepared" 449ms (18:04:51.210)
Trace[2025945949]: ---"Txn call completed" 73ms (18:04:51.284)
Trace[2025945949]: [523.960493ms] [523.960493ms] END
I0331 18:04:51.929561 1 controller.go:615] quota admission added evaluator for: endpoints
I0331 18:04:52.124697 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
*
* ==> kube-controller-manager [84de5d76d35c] <==
* W0331 18:04:52.065251 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="pause-939189" does not exist
I0331 18:04:52.067639 1 shared_informer.go:280] Caches are synced for resource quota
I0331 18:04:52.076620 1 shared_informer.go:280] Caches are synced for attach detach
I0331 18:04:52.084564 1 shared_informer.go:280] Caches are synced for daemon sets
I0331 18:04:52.087592 1 shared_informer.go:280] Caches are synced for endpoint_slice
I0331 18:04:52.100706 1 shared_informer.go:280] Caches are synced for node
I0331 18:04:52.100905 1 range_allocator.go:167] Sending events to api server.
I0331 18:04:52.101097 1 range_allocator.go:171] Starting range CIDR allocator
I0331 18:04:52.101132 1 shared_informer.go:273] Waiting for caches to sync for cidrallocator
I0331 18:04:52.101145 1 shared_informer.go:280] Caches are synced for cidrallocator
I0331 18:04:52.109512 1 shared_informer.go:280] Caches are synced for GC
I0331 18:04:52.110949 1 shared_informer.go:280] Caches are synced for endpoint_slice_mirroring
I0331 18:04:52.111820 1 shared_informer.go:280] Caches are synced for resource quota
I0331 18:04:52.151113 1 shared_informer.go:280] Caches are synced for taint
I0331 18:04:52.151644 1 shared_informer.go:280] Caches are synced for TTL
I0331 18:04:52.151696 1 node_lifecycle_controller.go:1438] Initializing eviction metric for zone:
W0331 18:04:52.152283 1 node_lifecycle_controller.go:1053] Missing timestamp for Node pause-939189. Assuming now as a timestamp.
I0331 18:04:52.152564 1 node_lifecycle_controller.go:1254] Controller detected that zone is now in state Normal.
I0331 18:04:52.152806 1 taint_manager.go:206] "Starting NoExecuteTaintManager"
I0331 18:04:52.153068 1 taint_manager.go:211] "Sending events to api server"
I0331 18:04:52.154301 1 event.go:294] "Event occurred" object="pause-939189" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-939189 event: Registered Node pause-939189 in Controller"
I0331 18:04:52.157444 1 shared_informer.go:280] Caches are synced for persistent volume
I0331 18:04:52.506059 1 shared_informer.go:280] Caches are synced for garbage collector
I0331 18:04:52.506479 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0331 18:04:52.533136 1 shared_informer.go:280] Caches are synced for garbage collector
*
* ==> kube-controller-manager [8ace7d6c4bee] <==
* I0331 18:03:59.321744 1 serving.go:348] Generated self-signed cert in-memory
I0331 18:03:59.853937 1 controllermanager.go:182] Version: v1.26.3
I0331 18:03:59.853990 1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0331 18:03:59.855979 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0331 18:03:59.856127 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0331 18:03:59.856668 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0331 18:03:59.856802 1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
F0331 18:04:20.535428 1 controllermanager.go:228] error building controller context: failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get "https://192.168.39.142:8443/healthz": dial tcp 192.168.39.142:8443: connect: connection refused
*
* ==> kube-proxy [1686d0df28f1] <==
* I0331 18:04:41.170371 1 node.go:163] Successfully retrieved node IP: 192.168.39.142
I0331 18:04:41.170425 1 server_others.go:109] "Detected node IP" address="192.168.39.142"
I0331 18:04:41.170450 1 server_others.go:535] "Using iptables proxy"
I0331 18:04:41.271349 1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0331 18:04:41.271390 1 server_others.go:176] "Using iptables Proxier"
I0331 18:04:41.271446 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0331 18:04:41.271898 1 server.go:655] "Version info" version="v1.26.3"
I0331 18:04:41.271978 1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0331 18:04:41.276289 1 config.go:317] "Starting service config controller"
I0331 18:04:41.276432 1 shared_informer.go:273] Waiting for caches to sync for service config
I0331 18:04:41.276461 1 config.go:226] "Starting endpoint slice config controller"
I0331 18:04:41.276465 1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
I0331 18:04:41.277123 1 config.go:444] "Starting node config controller"
I0331 18:04:41.277131 1 shared_informer.go:273] Waiting for caches to sync for node config
I0331 18:04:41.376963 1 shared_informer.go:280] Caches are synced for endpoint slice config
I0331 18:04:41.377002 1 shared_informer.go:280] Caches are synced for service config
I0331 18:04:41.377248 1 shared_informer.go:280] Caches are synced for node config
*
* ==> kube-proxy [9999f58d2765] <==
* E0331 18:04:20.538153 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-939189": dial tcp 192.168.39.142:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.142:56890->192.168.39.142:8443: read: connection reset by peer
E0331 18:04:21.665395 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-939189": dial tcp 192.168.39.142:8443: connect: connection refused
E0331 18:04:23.920058 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-939189": dial tcp 192.168.39.142:8443: connect: connection refused
*
* ==> kube-scheduler [5d40b2ef4a86] <==
* I0331 18:04:36.274158 1 serving.go:348] Generated self-signed cert in-memory
W0331 18:04:39.233042 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0331 18:04:39.233351 1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0331 18:04:39.233637 1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
W0331 18:04:39.233672 1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0331 18:04:39.306413 1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.3"
I0331 18:04:39.306462 1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0331 18:04:39.308017 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0331 18:04:39.308563 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0331 18:04:39.308610 1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0331 18:04:39.308627 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0331 18:04:39.409801 1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [b4599f5bff86] <==
* E0331 18:04:24.036070 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.142:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
W0331 18:04:24.450493 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.39.142:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
E0331 18:04:24.450560 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.142:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
W0331 18:04:24.681951 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.142:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
E0331 18:04:24.682039 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.142:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
W0331 18:04:24.877656 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.39.142:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
E0331 18:04:24.878016 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.142:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
W0331 18:04:24.900986 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.39.142:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
E0331 18:04:24.901338 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.142:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
W0331 18:04:24.987726 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.39.142:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
E0331 18:04:24.988045 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.142:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
W0331 18:04:25.024394 1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.142:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
E0331 18:04:25.024478 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.142:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
W0331 18:04:25.132338 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.39.142:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
E0331 18:04:25.132589 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.142:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
W0331 18:04:26.745186 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.39.142:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
E0331 18:04:26.745273 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.142:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
W0331 18:04:26.909186 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.142:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
E0331 18:04:26.909259 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.142:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
W0331 18:04:27.588118 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.39.142:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
E0331 18:04:27.588180 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.142:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
I0331 18:04:27.668688 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
E0331 18:04:27.668780 1 shared_informer.go:276] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0331 18:04:27.668791 1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
E0331 18:04:27.669106 1 run.go:74] "command failed" err="finished without leader elect"
*
* ==> kubelet <==
* -- Journal begins at Fri 2023-03-31 18:02:19 UTC, ends at Fri 2023-03-31 18:04:58 UTC. --
Mar 31 18:04:35 pause-939189 kubelet[7167]: I0331 18:04:35.060753 7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbeb3c050b5e21453f641a818794f61-kubeconfig\") pod \"kube-controller-manager-pause-939189\" (UID: \"5bbeb3c050b5e21453f641a818794f61\") " pod="kube-system/kube-controller-manager-pause-939189"
Mar 31 18:04:35 pause-939189 kubelet[7167]: I0331 18:04:35.060806 7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbeb3c050b5e21453f641a818794f61-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-939189\" (UID: \"5bbeb3c050b5e21453f641a818794f61\") " pod="kube-system/kube-controller-manager-pause-939189"
Mar 31 18:04:35 pause-939189 kubelet[7167]: I0331 18:04:35.173548 7167 scope.go:115] "RemoveContainer" containerID="a0ad0a35a3e08720ef402cc44066aa6415d3380188ccf061278936b018f9164f"
Mar 31 18:04:35 pause-939189 kubelet[7167]: I0331 18:04:35.206303 7167 scope.go:115] "RemoveContainer" containerID="b4599f5bff86da254627b8fa420dbfa886e737fe4bf8140cd8ac5ec3f882a89e"
Mar 31 18:04:35 pause-939189 kubelet[7167]: I0331 18:04:35.871491 7167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5b35d44675c82be44631616cd6f0a52aa1dc911e88776342deacc611d359e35"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.403200 7167 kubelet_node_status.go:108] "Node was previously registered" node="pause-939189"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.403314 7167 kubelet_node_status.go:73] "Successfully registered node" node="pause-939189"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.406119 7167 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.407529 7167 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.534534 7167 apiserver.go:52] "Watching apiserver"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.537600 7167 topology_manager.go:210] "Topology Admit Handler"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.537920 7167 topology_manager.go:210] "Topology Admit Handler"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.561329 7167 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.592448 7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd3378f4-948b-4bec-abd3-ea9dc35d3259-xtables-lock\") pod \"kube-proxy-jg8p6\" (UID: \"dd3378f4-948b-4bec-abd3-ea9dc35d3259\") " pod="kube-system/kube-proxy-jg8p6"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.592793 7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e78e1f9-1a39-4c02-a4e9-51e5b268d077-config-volume\") pod \"coredns-787d4945fb-hcrtc\" (UID: \"1e78e1f9-1a39-4c02-a4e9-51e5b268d077\") " pod="kube-system/coredns-787d4945fb-hcrtc"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.593000 7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxlhf\" (UniqueName: \"kubernetes.io/projected/dd3378f4-948b-4bec-abd3-ea9dc35d3259-kube-api-access-nxlhf\") pod \"kube-proxy-jg8p6\" (UID: \"dd3378f4-948b-4bec-abd3-ea9dc35d3259\") " pod="kube-system/kube-proxy-jg8p6"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.593182 7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd3378f4-948b-4bec-abd3-ea9dc35d3259-lib-modules\") pod \"kube-proxy-jg8p6\" (UID: \"dd3378f4-948b-4bec-abd3-ea9dc35d3259\") " pod="kube-system/kube-proxy-jg8p6"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.593344 7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n26cp\" (UniqueName: \"kubernetes.io/projected/1e78e1f9-1a39-4c02-a4e9-51e5b268d077-kube-api-access-n26cp\") pod \"coredns-787d4945fb-hcrtc\" (UID: \"1e78e1f9-1a39-4c02-a4e9-51e5b268d077\") " pod="kube-system/coredns-787d4945fb-hcrtc"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.593511 7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dd3378f4-948b-4bec-abd3-ea9dc35d3259-kube-proxy\") pod \"kube-proxy-jg8p6\" (UID: \"dd3378f4-948b-4bec-abd3-ea9dc35d3259\") " pod="kube-system/kube-proxy-jg8p6"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.593631 7167 reconciler.go:41] "Reconciler: start to sync state"
Mar 31 18:04:40 pause-939189 kubelet[7167]: I0331 18:04:40.739124 7167 scope.go:115] "RemoveContainer" containerID="9999f58d276569aa698d96721d17b94fa850bf4239d5df11ce622ad76d4c9c20"
Mar 31 18:04:40 pause-939189 kubelet[7167]: I0331 18:04:40.900279 7167 request.go:690] Waited for 1.195299342s due to client-side throttling, not priority and fairness, request: PATCH:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-939189/status
Mar 31 18:04:41 pause-939189 kubelet[7167]: I0331 18:04:41.825587 7167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11bb612576207ce6f9fdbde8dfa7f6235a96c8d3be559f2e51d8d4b173aa4b51"
Mar 31 18:04:43 pause-939189 kubelet[7167]: I0331 18:04:43.869081 7167 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Mar 31 18:04:45 pause-939189 kubelet[7167]: I0331 18:04:45.920720 7167 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-939189 -n pause-939189
helpers_test.go:261: (dbg) Run: kubectl --context pause-939189 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p pause-939189 -n pause-939189
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p pause-939189 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-939189 logs -n 25: (1.269869644s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs:
-- stdout --
*
* ==> Audit <==
* |---------|------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| start | -p stopped-upgrade-202435 | stopped-upgrade-202435 | jenkins | v1.29.0 | 31 Mar 23 18:00 UTC | 31 Mar 23 18:02 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=kvm2 | | | | | |
| start | -p kubernetes-upgrade-075589 | kubernetes-upgrade-075589 | jenkins | v1.29.0 | 31 Mar 23 18:01 UTC | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| | --driver=kvm2 | | | | | |
| start | -p kubernetes-upgrade-075589 | kubernetes-upgrade-075589 | jenkins | v1.29.0 | 31 Mar 23 18:01 UTC | 31 Mar 23 18:02 UTC |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.27.0-rc.0 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=kvm2 | | | | | |
| delete | -p cert-expiration-549601 | cert-expiration-549601 | jenkins | v1.29.0 | 31 Mar 23 18:01 UTC | 31 Mar 23 18:01 UTC |
| start | -p pause-939189 --memory=2048 | pause-939189 | jenkins | v1.29.0 | 31 Mar 23 18:01 UTC | 31 Mar 23 18:03 UTC |
| | --install-addons=false | | | | | |
| | --wait=all --driver=kvm2 | | | | | |
| cache | gvisor-836132 cache add | gvisor-836132 | jenkins | v1.29.0 | 31 Mar 23 18:01 UTC | 31 Mar 23 18:02 UTC |
| | gcr.io/k8s-minikube/gvisor-addon:2 | | | | | |
| addons | gvisor-836132 addons enable | gvisor-836132 | jenkins | v1.29.0 | 31 Mar 23 18:02 UTC | 31 Mar 23 18:02 UTC |
| | gvisor | | | | | |
| delete | -p stopped-upgrade-202435 | stopped-upgrade-202435 | jenkins | v1.29.0 | 31 Mar 23 18:02 UTC | 31 Mar 23 18:02 UTC |
| start | -p force-systemd-env-066234 | force-systemd-env-066234 | jenkins | v1.29.0 | 31 Mar 23 18:02 UTC | 31 Mar 23 18:03 UTC |
| | --memory=2048 | | | | | |
| | --alsologtostderr -v=5 | | | | | |
| | --driver=kvm2 | | | | | |
| delete | -p kubernetes-upgrade-075589 | kubernetes-upgrade-075589 | jenkins | v1.29.0 | 31 Mar 23 18:02 UTC | 31 Mar 23 18:02 UTC |
| start | -p cert-options-885841 | cert-options-885841 | jenkins | v1.29.0 | 31 Mar 23 18:02 UTC | 31 Mar 23 18:04 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=kvm2 | | | | | |
| stop | -p gvisor-836132 | gvisor-836132 | jenkins | v1.29.0 | 31 Mar 23 18:02 UTC | 31 Mar 23 18:04 UTC |
| start | -p pause-939189 | pause-939189 | jenkins | v1.29.0 | 31 Mar 23 18:03 UTC | 31 Mar 23 18:04 UTC |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=kvm2 | | | | | |
| ssh | force-systemd-env-066234 | force-systemd-env-066234 | jenkins | v1.29.0 | 31 Mar 23 18:03 UTC | 31 Mar 23 18:03 UTC |
| | ssh docker info --format | | | | | |
| | {{.CgroupDriver}} | | | | | |
| delete | -p force-systemd-env-066234 | force-systemd-env-066234 | jenkins | v1.29.0 | 31 Mar 23 18:03 UTC | 31 Mar 23 18:03 UTC |
| start | -p NoKubernetes-746317 | NoKubernetes-746317 | jenkins | v1.29.0 | 31 Mar 23 18:03 UTC | |
| | --no-kubernetes | | | | | |
| | --kubernetes-version=1.20 | | | | | |
| | --driver=kvm2 | | | | | |
| start | -p NoKubernetes-746317 | NoKubernetes-746317 | jenkins | v1.29.0 | 31 Mar 23 18:03 UTC | |
| | --driver=kvm2 | | | | | |
| ssh | cert-options-885841 ssh | cert-options-885841 | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC | 31 Mar 23 18:04 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-885841 -- sudo | cert-options-885841 | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC | 31 Mar 23 18:04 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-885841 | cert-options-885841 | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC | 31 Mar 23 18:04 UTC |
| start | -p auto-347180 --memory=3072 | auto-347180 | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC | |
| | --alsologtostderr --wait=true | | | | | |
| | --wait-timeout=15m | | | | | |
| | --driver=kvm2 | | | | | |
| start | -p NoKubernetes-746317 | NoKubernetes-746317 | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC | 31 Mar 23 18:04 UTC |
| | --no-kubernetes --driver=kvm2 | | | | | |
| start | -p gvisor-836132 --memory=2200 | gvisor-836132 | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC | |
| | --container-runtime=containerd --docker-opt | | | | | |
| | containerd=/var/run/containerd/containerd.sock | | | | | |
| | --driver=kvm2 | | | | | |
| delete | -p NoKubernetes-746317 | NoKubernetes-746317 | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC | 31 Mar 23 18:04 UTC |
| start | -p NoKubernetes-746317 | NoKubernetes-746317 | jenkins | v1.29.0 | 31 Mar 23 18:04 UTC | |
| | --no-kubernetes --driver=kvm2 | | | | | |
|---------|------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/03/31 18:04:52
Running on machine: ubuntu-20-agent-3
Binary: Built with gc go1.20.2 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0331 18:04:52.112989 33820 out.go:296] Setting OutFile to fd 1 ...
I0331 18:04:52.113170 33820 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0331 18:04:52.113174 33820 out.go:309] Setting ErrFile to fd 2...
I0331 18:04:52.113180 33820 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0331 18:04:52.113343 33820 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16144-3494/.minikube/bin
I0331 18:04:52.114025 33820 out.go:303] Setting JSON to false
I0331 18:04:52.115095 33820 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2843,"bootTime":1680283049,"procs":255,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1031-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0331 18:04:52.115161 33820 start.go:135] virtualization: kvm guest
I0331 18:04:52.202763 33820 out.go:177] * [NoKubernetes-746317] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
I0331 18:04:52.295981 33820 out.go:177] - MINIKUBE_LOCATION=16144
I0331 18:04:52.295891 33820 notify.go:220] Checking for updates...
I0331 18:04:52.419505 33820 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0331 18:04:52.544450 33820 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/16144-3494/kubeconfig
I0331 18:04:52.604388 33820 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/16144-3494/.minikube
I0331 18:04:52.606360 33820 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0331 18:04:52.608233 33820 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0331 18:04:52.610384 33820 config.go:182] Loaded profile config "auto-347180": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0331 18:04:52.610538 33820 config.go:182] Loaded profile config "gvisor-836132": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.26.3
I0331 18:04:52.610724 33820 config.go:182] Loaded profile config "pause-939189": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0331 18:04:52.610745 33820 start.go:1732] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
I0331 18:04:52.610778 33820 driver.go:365] Setting default libvirt URI to qemu:///system
I0331 18:04:52.649175 33820 out.go:177] * Using the kvm2 driver based on user configuration
I0331 18:04:52.650741 33820 start.go:295] selected driver: kvm2
I0331 18:04:52.650750 33820 start.go:859] validating driver "kvm2" against <nil>
I0331 18:04:52.650762 33820 start.go:870] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0331 18:04:52.651120 33820 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0331 18:04:52.651207 33820 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16144-3494/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0331 18:04:52.665942 33820 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.29.0
I0331 18:04:52.665977 33820 start.go:1732] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
I0331 18:04:52.665987 33820 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0331 18:04:52.666616 33820 start_flags.go:386] Using suggested 6000MB memory alloc based on sys=32101MB, container=0MB
I0331 18:04:52.666788 33820 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
I0331 18:04:52.666808 33820 cni.go:84] Creating CNI manager for ""
I0331 18:04:52.666818 33820 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0331 18:04:52.666825 33820 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0331 18:04:52.666832 33820 start_flags.go:319] config:
{Name:NoKubernetes-746317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:NoKubernetes-746317 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0331 18:04:52.666906 33820 start.go:1732] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
I0331 18:04:52.666977 33820 iso.go:125] acquiring lock: {Name:mk48583bcdf05c8e72651ed56790356a32c028b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0331 18:04:52.669123 33820 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-746317
I0331 18:04:48.155281 33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
I0331 18:04:48.155871 33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
I0331 18:04:48.155896 33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:48.155813 33603 retry.go:31] will retry after 283.128145ms: waiting for machine to come up
I0331 18:04:48.440401 33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
I0331 18:04:48.440902 33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
I0331 18:04:48.440924 33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:48.440860 33603 retry.go:31] will retry after 410.682274ms: waiting for machine to come up
I0331 18:04:48.853565 33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
I0331 18:04:48.854037 33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
I0331 18:04:48.854052 33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:48.854000 33603 retry.go:31] will retry after 497.486632ms: waiting for machine to come up
I0331 18:04:49.353711 33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
I0331 18:04:49.354221 33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
I0331 18:04:49.354243 33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:49.354178 33603 retry.go:31] will retry after 611.052328ms: waiting for machine to come up
I0331 18:04:49.967240 33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
I0331 18:04:50.040539 33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
I0331 18:04:50.040577 33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:50.040409 33603 retry.go:31] will retry after 763.986572ms: waiting for machine to come up
I0331 18:04:50.876927 33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
I0331 18:04:50.877366 33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
I0331 18:04:50.877457 33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:50.877308 33603 retry.go:31] will retry after 955.134484ms: waiting for machine to come up
I0331 18:04:51.834716 33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
I0331 18:04:51.835256 33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
I0331 18:04:51.835316 33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:51.835243 33603 retry.go:31] will retry after 1.216587491s: waiting for machine to come up
I0331 18:04:53.053498 33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
I0331 18:04:53.054031 33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
I0331 18:04:53.054059 33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:53.053989 33603 retry.go:31] will retry after 1.334972483s: waiting for machine to come up
I0331 18:04:50.765070 32536 pod_ready.go:102] pod "etcd-pause-939189" in "kube-system" namespace has status "Ready":"False"
I0331 18:04:52.921656 32536 pod_ready.go:102] pod "etcd-pause-939189" in "kube-system" namespace has status "Ready":"False"
I0331 18:04:53.421399 32536 pod_ready.go:92] pod "etcd-pause-939189" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:53.421429 32536 pod_ready.go:81] duration metric: took 7.01965493s waiting for pod "etcd-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:53.421441 32536 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:53.429675 32536 pod_ready.go:92] pod "kube-apiserver-pause-939189" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:53.429697 32536 pod_ready.go:81] duration metric: took 8.249323ms waiting for pod "kube-apiserver-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:53.429708 32536 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:53.438704 32536 pod_ready.go:92] pod "kube-controller-manager-pause-939189" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:53.438720 32536 pod_ready.go:81] duration metric: took 9.003572ms waiting for pod "kube-controller-manager-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:53.438731 32536 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jg8p6" in "kube-system" namespace to be "Ready" ...
I0331 18:04:53.446519 32536 pod_ready.go:92] pod "kube-proxy-jg8p6" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:53.446534 32536 pod_ready.go:81] duration metric: took 7.795873ms waiting for pod "kube-proxy-jg8p6" in "kube-system" namespace to be "Ready" ...
I0331 18:04:53.446545 32536 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:53.451227 32536 pod_ready.go:92] pod "kube-scheduler-pause-939189" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:53.451242 32536 pod_ready.go:81] duration metric: took 4.691126ms waiting for pod "kube-scheduler-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:53.451250 32536 pod_ready.go:38] duration metric: took 12.105730649s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0331 18:04:53.451272 32536 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0331 18:04:53.463906 32536 ops.go:34] apiserver oom_adj: -16
I0331 18:04:53.463925 32536 kubeadm.go:637] restartCluster took 55.388480099s
I0331 18:04:53.463933 32536 kubeadm.go:403] StartCluster complete in 55.545742823s
I0331 18:04:53.463952 32536 settings.go:142] acquiring lock: {Name:mk54cf97b6d1b5b12dec7aad9dd26d754e62bcd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0331 18:04:53.464032 32536 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/16144-3494/kubeconfig
I0331 18:04:53.464825 32536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16144-3494/kubeconfig: {Name:mk0e63c10dbce63578041d9db05c951415a42011 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0331 18:04:53.465096 32536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0331 18:04:53.465243 32536 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
I0331 18:04:53.465315 32536 config.go:182] Loaded profile config "pause-939189": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0331 18:04:53.465367 32536 cache.go:107] acquiring lock: {Name:mka2cf660dd4d542e74644eb9f55d9546287db85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0331 18:04:53.465432 32536 cache.go:115] /home/jenkins/minikube-integration/16144-3494/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
I0331 18:04:53.468377 32536 out.go:177] * Enabled addons:
I0331 18:04:53.465440 32536 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/16144-3494/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 77.875µs
I0331 18:04:53.465689 32536 kapi.go:59] client config for pause-939189: &rest.Config{Host:"https://192.168.39.142:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/client.crt", KeyFile:"/home/jenkins/minikube-integration/16144-3494/.minikube/profiles/pause-939189/client.key", CAFile:"/home/jenkins/minikube-integration/16144-3494/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x192bee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0331 18:04:53.469869 32536 addons.go:499] enable addons completed in 4.62348ms: enabled=[]
I0331 18:04:53.469887 32536 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/16144-3494/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
I0331 18:04:53.469904 32536 cache.go:87] Successfully saved all images to host disk.
I0331 18:04:53.470079 32536 config.go:182] Loaded profile config "pause-939189": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0331 18:04:53.470390 32536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0331 18:04:53.470414 32536 main.go:141] libmachine: Launching plugin server for driver kvm2
I0331 18:04:53.472779 32536 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-939189" context rescaled to 1 replicas
I0331 18:04:53.472816 32536 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0331 18:04:53.474464 32536 out.go:177] * Verifying Kubernetes components...
I0331 18:04:49.689822 33276 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.064390662s)
I0331 18:04:49.689845 33276 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0331 18:04:49.730226 33276 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0331 18:04:49.740534 33276 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2628 bytes)
I0331 18:04:49.759896 33276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0331 18:04:49.892044 33276 ssh_runner.go:195] Run: sudo systemctl restart docker
I0331 18:04:52.833806 33276 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.941720773s)
I0331 18:04:52.833863 33276 start.go:481] detecting cgroup driver to use...
I0331 18:04:52.833984 33276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0331 18:04:52.856132 33276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0331 18:04:52.867005 33276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0331 18:04:52.875838 33276 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0331 18:04:52.875899 33276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0331 18:04:52.885209 33276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0331 18:04:52.895294 33276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0331 18:04:52.906080 33276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0331 18:04:52.916021 33276 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0331 18:04:52.927401 33276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0331 18:04:52.936940 33276 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0331 18:04:52.945127 33276 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0331 18:04:52.953052 33276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0331 18:04:53.053440 33276 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0331 18:04:53.071425 33276 start.go:481] detecting cgroup driver to use...
I0331 18:04:53.071501 33276 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0331 18:04:53.090019 33276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0331 18:04:53.104446 33276 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0331 18:04:53.123957 33276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0331 18:04:53.139648 33276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0331 18:04:53.155612 33276 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0331 18:04:53.186101 33276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0331 18:04:53.202708 33276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0331 18:04:53.222722 33276 ssh_runner.go:195] Run: which cri-dockerd
I0331 18:04:53.227094 33276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0331 18:04:53.236406 33276 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0331 18:04:53.252225 33276 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0331 18:04:53.363704 33276 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0331 18:04:53.479794 33276 docker.go:538] configuring docker to use "cgroupfs" as cgroup driver...
I0331 18:04:53.479826 33276 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0331 18:04:53.502900 33276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0331 18:04:53.633618 33276 ssh_runner.go:195] Run: sudo systemctl restart docker
I0331 18:04:53.475854 32536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0331 18:04:53.487310 32536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41011
I0331 18:04:53.487911 32536 main.go:141] libmachine: () Calling .GetVersion
I0331 18:04:53.488552 32536 main.go:141] libmachine: Using API Version 1
I0331 18:04:53.488581 32536 main.go:141] libmachine: () Calling .SetConfigRaw
I0331 18:04:53.488899 32536 main.go:141] libmachine: () Calling .GetMachineName
I0331 18:04:53.489075 32536 main.go:141] libmachine: (pause-939189) Calling .GetState
I0331 18:04:53.491520 32536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0331 18:04:53.491556 32536 main.go:141] libmachine: Launching plugin server for driver kvm2
I0331 18:04:53.508789 32536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
I0331 18:04:53.509289 32536 main.go:141] libmachine: () Calling .GetVersion
I0331 18:04:53.509835 32536 main.go:141] libmachine: Using API Version 1
I0331 18:04:53.509862 32536 main.go:141] libmachine: () Calling .SetConfigRaw
I0331 18:04:53.510320 32536 main.go:141] libmachine: () Calling .GetMachineName
I0331 18:04:53.510605 32536 main.go:141] libmachine: (pause-939189) Calling .DriverName
I0331 18:04:53.510836 32536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0331 18:04:53.510866 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHHostname
I0331 18:04:53.514674 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:04:53.515275 32536 main.go:141] libmachine: (pause-939189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:0d:fe", ip: ""} in network mk-pause-939189: {Iface:virbr1 ExpiryTime:2023-03-31 19:02:23 +0000 UTC Type:0 Mac:52:54:00:80:0d:fe Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:pause-939189 Clientid:01:52:54:00:80:0d:fe}
I0331 18:04:53.515296 32536 main.go:141] libmachine: (pause-939189) DBG | domain pause-939189 has defined IP address 192.168.39.142 and MAC address 52:54:00:80:0d:fe in network mk-pause-939189
I0331 18:04:53.515586 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHPort
I0331 18:04:53.515793 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHKeyPath
I0331 18:04:53.515965 32536 main.go:141] libmachine: (pause-939189) Calling .GetSSHUsername
I0331 18:04:53.516121 32536 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16144-3494/.minikube/machines/pause-939189/id_rsa Username:docker}
I0331 18:04:53.632891 32536 node_ready.go:35] waiting up to 6m0s for node "pause-939189" to be "Ready" ...
I0331 18:04:53.633113 32536 start.go:889] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0331 18:04:53.637258 32536 node_ready.go:49] node "pause-939189" has status "Ready":"True"
I0331 18:04:53.637275 32536 node_ready.go:38] duration metric: took 4.35255ms waiting for node "pause-939189" to be "Ready" ...
I0331 18:04:53.637285 32536 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0331 18:04:53.668203 32536 docker.go:639] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.26.3
registry.k8s.io/kube-scheduler:v1.26.3
registry.k8s.io/kube-controller-manager:v1.26.3
registry.k8s.io/kube-proxy:v1.26.3
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0331 18:04:53.668226 32536 cache_images.go:84] Images are preloaded, skipping loading
I0331 18:04:53.668235 32536 cache_images.go:262] succeeded pushing to: pause-939189
I0331 18:04:53.668239 32536 cache_images.go:263] failed pushing to:
I0331 18:04:53.668267 32536 main.go:141] libmachine: Making call to close driver server
I0331 18:04:53.668284 32536 main.go:141] libmachine: (pause-939189) Calling .Close
I0331 18:04:53.668596 32536 main.go:141] libmachine: Successfully made call to close driver server
I0331 18:04:53.668613 32536 main.go:141] libmachine: Making call to close connection to plugin binary
I0331 18:04:53.668625 32536 main.go:141] libmachine: (pause-939189) DBG | Closing plugin on server side
I0331 18:04:53.668625 32536 main.go:141] libmachine: Making call to close driver server
I0331 18:04:53.668641 32536 main.go:141] libmachine: (pause-939189) Calling .Close
I0331 18:04:53.668916 32536 main.go:141] libmachine: (pause-939189) DBG | Closing plugin on server side
I0331 18:04:53.668922 32536 main.go:141] libmachine: Successfully made call to close driver server
I0331 18:04:53.668942 32536 main.go:141] libmachine: Making call to close connection to plugin binary
I0331 18:04:53.821124 32536 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-hcrtc" in "kube-system" namespace to be "Ready" ...
I0331 18:04:54.218332 32536 pod_ready.go:92] pod "coredns-787d4945fb-hcrtc" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:54.218358 32536 pod_ready.go:81] duration metric: took 397.210316ms waiting for pod "coredns-787d4945fb-hcrtc" in "kube-system" namespace to be "Ready" ...
I0331 18:04:54.218367 32536 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:54.618607 32536 pod_ready.go:92] pod "etcd-pause-939189" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:54.618631 32536 pod_ready.go:81] duration metric: took 400.255347ms waiting for pod "etcd-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:54.618640 32536 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:55.019356 32536 pod_ready.go:92] pod "kube-apiserver-pause-939189" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:55.019378 32536 pod_ready.go:81] duration metric: took 400.731414ms waiting for pod "kube-apiserver-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:55.019393 32536 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:55.420085 32536 pod_ready.go:92] pod "kube-controller-manager-pause-939189" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:55.420114 32536 pod_ready.go:81] duration metric: took 400.711919ms waiting for pod "kube-controller-manager-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:55.420130 32536 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jg8p6" in "kube-system" namespace to be "Ready" ...
I0331 18:04:55.015443 33276 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.381792307s)
I0331 18:04:55.015525 33276 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0331 18:04:55.133415 33276 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0331 18:04:55.243506 33276 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0331 18:04:55.356452 33276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0331 18:04:55.477055 33276 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0331 18:04:55.493533 33276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0331 18:04:55.611643 33276 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I0331 18:04:55.707141 33276 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0331 18:04:55.707200 33276 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0331 18:04:55.713403 33276 start.go:549] Will wait 60s for crictl version
I0331 18:04:55.713474 33276 ssh_runner.go:195] Run: which crictl
I0331 18:04:55.718338 33276 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0331 18:04:55.774128 33276 start.go:565] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.23
RuntimeApiVersion: v1alpha2
I0331 18:04:55.774203 33276 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0331 18:04:55.810277 33276 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0331 18:04:55.819685 32536 pod_ready.go:92] pod "kube-proxy-jg8p6" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:55.819705 32536 pod_ready.go:81] duration metric: took 399.567435ms waiting for pod "kube-proxy-jg8p6" in "kube-system" namespace to be "Ready" ...
I0331 18:04:55.819719 32536 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:56.219488 32536 pod_ready.go:92] pod "kube-scheduler-pause-939189" in "kube-system" namespace has status "Ready":"True"
I0331 18:04:56.219513 32536 pod_ready.go:81] duration metric: took 399.783789ms waiting for pod "kube-scheduler-pause-939189" in "kube-system" namespace to be "Ready" ...
I0331 18:04:56.219524 32536 pod_ready.go:38] duration metric: took 2.582225755s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0331 18:04:56.219550 32536 api_server.go:51] waiting for apiserver process to appear ...
I0331 18:04:56.219595 32536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0331 18:04:56.240919 32536 api_server.go:71] duration metric: took 2.768070005s to wait for apiserver process to appear ...
I0331 18:04:56.240947 32536 api_server.go:87] waiting for apiserver healthz status ...
I0331 18:04:56.240961 32536 api_server.go:252] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
I0331 18:04:56.247401 32536 api_server.go:278] https://192.168.39.142:8443/healthz returned 200:
ok
I0331 18:04:56.248689 32536 api_server.go:140] control plane version: v1.26.3
I0331 18:04:56.248709 32536 api_server.go:130] duration metric: took 7.754551ms to wait for apiserver health ...
I0331 18:04:56.248718 32536 system_pods.go:43] waiting for kube-system pods to appear ...
I0331 18:04:56.422125 32536 system_pods.go:59] 6 kube-system pods found
I0331 18:04:56.422151 32536 system_pods.go:61] "coredns-787d4945fb-hcrtc" [1e78e1f9-1a39-4c02-a4e9-51e5b268d077] Running
I0331 18:04:56.422159 32536 system_pods.go:61] "etcd-pause-939189" [cdc68c44-f3a4-4655-9818-48f074e8e376] Running
I0331 18:04:56.422166 32536 system_pods.go:61] "kube-apiserver-pause-939189" [c40b018d-97b2-4cdf-9edc-e1473d304c55] Running
I0331 18:04:56.422174 32536 system_pods.go:61] "kube-controller-manager-pause-939189" [69a62fcf-5db8-4354-aa08-ee5d2209a0ed] Running
I0331 18:04:56.422181 32536 system_pods.go:61] "kube-proxy-jg8p6" [dd3378f4-948b-4bec-abd3-ea9dc35d3259] Running
I0331 18:04:56.422187 32536 system_pods.go:61] "kube-scheduler-pause-939189" [b51eb2f5-8508-46f2-8c02-652ad1a69a1e] Running
I0331 18:04:56.422193 32536 system_pods.go:74] duration metric: took 173.469145ms to wait for pod list to return data ...
I0331 18:04:56.422202 32536 default_sa.go:34] waiting for default service account to be created ...
I0331 18:04:56.618165 32536 default_sa.go:45] found service account: "default"
I0331 18:04:56.618190 32536 default_sa.go:55] duration metric: took 195.978567ms for default service account to be created ...
I0331 18:04:56.618200 32536 system_pods.go:116] waiting for k8s-apps to be running ...
I0331 18:04:56.823045 32536 system_pods.go:86] 6 kube-system pods found
I0331 18:04:56.823082 32536 system_pods.go:89] "coredns-787d4945fb-hcrtc" [1e78e1f9-1a39-4c02-a4e9-51e5b268d077] Running
I0331 18:04:56.823092 32536 system_pods.go:89] "etcd-pause-939189" [cdc68c44-f3a4-4655-9818-48f074e8e376] Running
I0331 18:04:56.823099 32536 system_pods.go:89] "kube-apiserver-pause-939189" [c40b018d-97b2-4cdf-9edc-e1473d304c55] Running
I0331 18:04:56.823107 32536 system_pods.go:89] "kube-controller-manager-pause-939189" [69a62fcf-5db8-4354-aa08-ee5d2209a0ed] Running
I0331 18:04:56.823113 32536 system_pods.go:89] "kube-proxy-jg8p6" [dd3378f4-948b-4bec-abd3-ea9dc35d3259] Running
I0331 18:04:56.823120 32536 system_pods.go:89] "kube-scheduler-pause-939189" [b51eb2f5-8508-46f2-8c02-652ad1a69a1e] Running
I0331 18:04:56.823129 32536 system_pods.go:126] duration metric: took 204.923041ms to wait for k8s-apps to be running ...
I0331 18:04:56.823144 32536 system_svc.go:44] waiting for kubelet service to be running ....
I0331 18:04:56.823194 32536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0331 18:04:56.843108 32536 system_svc.go:56] duration metric: took 19.952106ms WaitForService to wait for kubelet.
I0331 18:04:56.843157 32536 kubeadm.go:578] duration metric: took 3.370313636s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0331 18:04:56.843181 32536 node_conditions.go:102] verifying NodePressure condition ...
I0331 18:04:57.019150 32536 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0331 18:04:57.019178 32536 node_conditions.go:123] node cpu capacity is 2
I0331 18:04:57.019188 32536 node_conditions.go:105] duration metric: took 176.00176ms to run NodePressure ...
I0331 18:04:57.019201 32536 start.go:228] waiting for startup goroutines ...
I0331 18:04:57.019209 32536 start.go:233] waiting for cluster config update ...
I0331 18:04:57.019219 32536 start.go:242] writing updated cluster config ...
I0331 18:04:57.019587 32536 ssh_runner.go:195] Run: rm -f paused
I0331 18:04:57.094738 32536 start.go:557] kubectl: 1.26.3, cluster: 1.26.3 (minor skew: 0)
I0331 18:04:57.097707 32536 out.go:177] * Done! kubectl is now configured to use "pause-939189" cluster and "default" namespace by default
I0331 18:04:52.670594 33820 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime docker
W0331 18:04:52.706864 33820 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-docker-overlay2-amd64.tar.lz4 status code: 404
I0331 18:04:52.707029 33820 profile.go:148] Saving config to /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/NoKubernetes-746317/config.json ...
I0331 18:04:52.707063 33820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/NoKubernetes-746317/config.json: {Name:mkc819cfb6c45ebbebd0d82f4a0be54fd6cd98e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0331 18:04:52.707228 33820 cache.go:193] Successfully downloaded all kic artifacts
I0331 18:04:52.707251 33820 start.go:364] acquiring machines lock for NoKubernetes-746317: {Name:mkfdc5208de17d93700ea90324b4f36214eab469 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0331 18:04:55.847800 33276 out.go:204] * Preparing Kubernetes v1.26.3 on Docker 20.10.23 ...
I0331 18:04:55.847864 33276 main.go:141] libmachine: (auto-347180) Calling .GetIP
I0331 18:04:55.850787 33276 main.go:141] libmachine: (auto-347180) DBG | domain auto-347180 has defined MAC address 52:54:00:61:01:e7 in network mk-auto-347180
I0331 18:04:55.851207 33276 main.go:141] libmachine: (auto-347180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:01:e7", ip: ""} in network mk-auto-347180: {Iface:virbr3 ExpiryTime:2023-03-31 19:04:35 +0000 UTC Type:0 Mac:52:54:00:61:01:e7 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:auto-347180 Clientid:01:52:54:00:61:01:e7}
I0331 18:04:55.851239 33276 main.go:141] libmachine: (auto-347180) DBG | domain auto-347180 has defined IP address 192.168.72.199 and MAC address 52:54:00:61:01:e7 in network mk-auto-347180
I0331 18:04:55.851415 33276 ssh_runner.go:195] Run: grep 192.168.72.1 host.minikube.internal$ /etc/hosts
I0331 18:04:55.855857 33276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0331 18:04:55.868328 33276 localpath.go:92] copying /home/jenkins/minikube-integration/16144-3494/.minikube/client.crt -> /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.crt
I0331 18:04:55.868487 33276 localpath.go:117] copying /home/jenkins/minikube-integration/16144-3494/.minikube/client.key -> /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.key
I0331 18:04:55.868617 33276 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
I0331 18:04:55.868673 33276 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0331 18:04:55.896702 33276 docker.go:639] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.3
registry.k8s.io/kube-scheduler:v1.26.3
registry.k8s.io/kube-controller-manager:v1.26.3
registry.k8s.io/kube-proxy:v1.26.3
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0331 18:04:55.896733 33276 docker.go:569] Images already preloaded, skipping extraction
I0331 18:04:55.896797 33276 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0331 18:04:55.924955 33276 docker.go:639] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.3
registry.k8s.io/kube-controller-manager:v1.26.3
registry.k8s.io/kube-scheduler:v1.26.3
registry.k8s.io/kube-proxy:v1.26.3
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0331 18:04:55.924992 33276 cache_images.go:84] Images are preloaded, skipping loading
I0331 18:04:55.925053 33276 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0331 18:04:55.965144 33276 cni.go:84] Creating CNI manager for ""
I0331 18:04:55.965172 33276 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0331 18:04:55.965185 33276 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0331 18:04:55.965205 33276 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.199 APIServerPort:8443 KubernetesVersion:v1.26.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-347180 NodeName:auto-347180 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.199"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.199 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0331 18:04:55.965393 33276 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.72.199
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "auto-347180"
kubeletExtraArgs:
node-ip: 192.168.72.199
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.72.199"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
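The generated kubeadm/kubelet/kube-proxy YAML above is what gets shipped to the node as /var/tmp/minikube/kubeadm.yaml.new a few lines later. As a rough illustration only, here is a minimal Go sketch, not minikube's code, that parses a ClusterConfiguration-like document and checks the fields the bridge CNI relies on; the struct is a simplified stand-in for the real kubeadm v1beta3 types and gopkg.in/yaml.v3 is assumed.

package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

// clusterConfig is a simplified stand-in for kubeadm's ClusterConfiguration;
// only the fields checked below are modelled.
type clusterConfig struct {
	KubernetesVersion string `yaml:"kubernetesVersion"`
	Networking        struct {
		PodSubnet     string `yaml:"podSubnet"`
		ServiceSubnet string `yaml:"serviceSubnet"`
	} `yaml:"networking"`
}

func main() {
	doc := []byte(`
kubernetesVersion: v1.26.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
`)
	var cfg clusterConfig
	if err := yaml.Unmarshal(doc, &cfg); err != nil {
		log.Fatal(err)
	}
	if cfg.Networking.PodSubnet == "" {
		log.Fatal("podSubnet must be set for the bridge CNI")
	}
	fmt.Printf("config OK: %s, pod CIDR %s\n", cfg.KubernetesVersion, cfg.Networking.PodSubnet)
}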
I0331 18:04:55.965514 33276 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=auto-347180 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.199
[Install]
config:
{KubernetesVersion:v1.26.3 ClusterName:auto-347180 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
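The kubelet drop-in logged at kubeadm.go:968 is plain text assembled from the node's flags (clear ExecStart, then set it with the driver-specific arguments). A minimal Go sketch of that assembly, purely illustrative rather than minikube's implementation; the helper name kubeletDropIn is made up.

package main

import (
	"fmt"
	"sort"
	"strings"
)

// kubeletDropIn renders a systemd drop-in in the same shape as the one logged
// above: an empty ExecStart to reset the unit, then the real command line.
func kubeletDropIn(binary string, flags map[string]string) string {
	keys := make([]string, 0, len(flags))
	for k := range flags {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic flag order
	args := make([]string, 0, len(keys))
	for _, k := range keys {
		args = append(args, fmt.Sprintf("--%s=%s", k, flags[k]))
	}
	return strings.Join([]string{
		"[Unit]",
		"Wants=docker.socket",
		"[Service]",
		"ExecStart=",
		"ExecStart=" + binary + " " + strings.Join(args, " "),
		"[Install]",
	}, "\n")
}

func main() {
	fmt.Println(kubeletDropIn("/var/lib/minikube/binaries/v1.26.3/kubelet", map[string]string{
		"hostname-override": "auto-347180",
		"node-ip":           "192.168.72.199",
	}))
}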
I0331 18:04:55.965613 33276 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.3
I0331 18:04:55.975410 33276 binaries.go:44] Found k8s binaries, skipping transfer
I0331 18:04:55.975480 33276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0331 18:04:55.984755 33276 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
I0331 18:04:56.009787 33276 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0331 18:04:56.031312 33276 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
I0331 18:04:56.049714 33276 ssh_runner.go:195] Run: grep 192.168.72.199 control-plane.minikube.internal$ /etc/hosts
I0331 18:04:56.054641 33276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.199 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
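The /etc/hosts rewrite above is idempotent: any stale control-plane.minikube.internal entry is dropped before the current IP is appended. A minimal Go sketch of the same pattern, assuming a caller-supplied scratch path so it can be tried without root; pinControlPlane is a hypothetical helper, not minikube's API.

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// pinControlPlane removes any existing control-plane.minikube.internal line
// and appends a fresh "<ip>\t<host>" entry, mirroring the shell pipeline above.
func pinControlPlane(path, ip string) error {
	const host = "control-plane.minikube.internal"
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+host) && !strings.HasSuffix(line, " "+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinControlPlane("hosts.test", "192.168.72.199"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("hosts.test updated")
}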
I0331 18:04:56.067876 33276 certs.go:56] Setting up /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180 for IP: 192.168.72.199
I0331 18:04:56.067912 33276 certs.go:186] acquiring lock for shared ca certs: {Name:mk5b2b979756b4a682c5be81dc53f006bb9a7a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0331 18:04:56.068110 33276 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16144-3494/.minikube/ca.key
I0331 18:04:56.068167 33276 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16144-3494/.minikube/proxy-client-ca.key
I0331 18:04:56.068278 33276 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/client.key
I0331 18:04:56.068308 33276 certs.go:315] generating minikube signed cert: /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.key.217b3e23
I0331 18:04:56.068325 33276 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.crt.217b3e23 with IP's: [192.168.72.199 10.96.0.1 127.0.0.1 10.0.0.1]
I0331 18:04:56.209196 33276 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.crt.217b3e23 ...
I0331 18:04:56.209224 33276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.crt.217b3e23: {Name:mk3e4cd47c6706ab2f578dfdd08d80ebdd3c15fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0331 18:04:56.209429 33276 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.key.217b3e23 ...
I0331 18:04:56.209445 33276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.key.217b3e23: {Name:mk009817638857b2bbdb66530e778b671a0003f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0331 18:04:56.209547 33276 certs.go:333] copying /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.crt.217b3e23 -> /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.crt
I0331 18:04:56.209609 33276 certs.go:337] copying /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.key.217b3e23 -> /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.key
I0331 18:04:56.209656 33276 certs.go:315] generating aggregator signed cert: /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.key
I0331 18:04:56.209668 33276 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.crt with IP's: []
I0331 18:04:56.257382 33276 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.crt ...
I0331 18:04:56.257405 33276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.crt: {Name:mk082703dadea0ea3251f4202bbf72399caa3a43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0331 18:04:56.257583 33276 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.key ...
I0331 18:04:56.257595 33276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.key: {Name:mk4b72bffb94c8b27e86fc5f7b2d38af391fe2ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0331 18:04:56.257819 33276 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/10540.pem (1338 bytes)
W0331 18:04:56.257876 33276 certs.go:397] ignoring /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/10540_empty.pem, impossibly tiny 0 bytes
I0331 18:04:56.257892 33276 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/ca-key.pem (1675 bytes)
I0331 18:04:56.257924 33276 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/ca.pem (1078 bytes)
I0331 18:04:56.257959 33276 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/cert.pem (1123 bytes)
I0331 18:04:56.257987 33276 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/certs/home/jenkins/minikube-integration/16144-3494/.minikube/certs/key.pem (1679 bytes)
I0331 18:04:56.258026 33276 certs.go:401] found cert: /home/jenkins/minikube-integration/16144-3494/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16144-3494/.minikube/files/etc/ssl/certs/105402.pem (1708 bytes)
I0331 18:04:56.258526 33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0331 18:04:56.287806 33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0331 18:04:56.314968 33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0331 18:04:56.338082 33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/profiles/auto-347180/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0331 18:04:56.360708 33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0331 18:04:56.390138 33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0331 18:04:56.419129 33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0331 18:04:56.447101 33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0331 18:04:56.472169 33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/files/etc/ssl/certs/105402.pem --> /usr/share/ca-certificates/105402.pem (1708 bytes)
I0331 18:04:56.498664 33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0331 18:04:56.525516 33276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16144-3494/.minikube/certs/10540.pem --> /usr/share/ca-certificates/10540.pem (1338 bytes)
I0331 18:04:56.548806 33276 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0331 18:04:56.565642 33276 ssh_runner.go:195] Run: openssl version
I0331 18:04:56.571067 33276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/105402.pem && ln -fs /usr/share/ca-certificates/105402.pem /etc/ssl/certs/105402.pem"
I0331 18:04:56.580624 33276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/105402.pem
I0331 18:04:56.585385 33276 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 31 17:26 /usr/share/ca-certificates/105402.pem
I0331 18:04:56.585449 33276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/105402.pem
I0331 18:04:56.591662 33276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/105402.pem /etc/ssl/certs/3ec20f2e.0"
I0331 18:04:56.602558 33276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0331 18:04:56.612933 33276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0331 18:04:56.619029 33276 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 31 17:21 /usr/share/ca-certificates/minikubeCA.pem
I0331 18:04:56.619087 33276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0331 18:04:56.626198 33276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0331 18:04:56.639266 33276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10540.pem && ln -fs /usr/share/ca-certificates/10540.pem /etc/ssl/certs/10540.pem"
I0331 18:04:56.649914 33276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10540.pem
I0331 18:04:56.654454 33276 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 31 17:26 /usr/share/ca-certificates/10540.pem
I0331 18:04:56.654515 33276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10540.pem
I0331 18:04:56.661570 33276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10540.pem /etc/ssl/certs/51391683.0"
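The openssl x509 -hash / ln -fs pairs above wire each CA into the guest's trust store as /etc/ssl/certs/<subject-hash>.0. A minimal Go sketch of that wiring, assuming openssl is on PATH and using placeholder paths; linkCert is illustrative, not minikube's code, and the real commands run inside the VM over SSH.

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

// linkCert asks openssl for the certificate's subject hash and links the cert
// as <trustDir>/<hash>.0, like the ssh_runner commands logged above.
func linkCert(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := trustDir + "/" + hash + ".0"
	_ = os.Remove(link) // mirror `ln -fs`: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked")
}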
I0331 18:04:56.671169 33276 kubeadm.go:401] StartCluster: {Name:auto-347180 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16191/minikube-v1.29.0-1680115329-16191-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:auto-347180 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0331 18:04:56.671303 33276 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0331 18:04:56.695923 33276 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0331 18:04:56.705641 33276 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0331 18:04:56.715247 33276 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0331 18:04:56.724602 33276 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0331 18:04:56.724655 33276 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0331 18:04:56.783971 33276 kubeadm.go:322] [init] Using Kubernetes version: v1.26.3
I0331 18:04:56.784098 33276 kubeadm.go:322] [preflight] Running pre-flight checks
I0331 18:04:56.929895 33276 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0331 18:04:56.930047 33276 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0331 18:04:56.930171 33276 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0331 18:04:57.156879 33276 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0331 18:04:54.390483 33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
I0331 18:04:54.390970 33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
I0331 18:04:54.390985 33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:54.390921 33603 retry.go:31] will retry after 1.935547s: waiting for machine to come up
I0331 18:04:56.329196 33494 main.go:141] libmachine: (gvisor-836132) DBG | domain gvisor-836132 has defined MAC address 52:54:00:99:c5:e3 in network mk-gvisor-836132
I0331 18:04:56.329773 33494 main.go:141] libmachine: (gvisor-836132) DBG | unable to find current IP address of domain gvisor-836132 in network mk-gvisor-836132
I0331 18:04:56.329792 33494 main.go:141] libmachine: (gvisor-836132) DBG | I0331 18:04:56.329712 33603 retry.go:31] will retry after 2.673868459s: waiting for machine to come up
I0331 18:04:57.159756 33276 out.go:204] - Generating certificates and keys ...
I0331 18:04:57.159894 33276 kubeadm.go:322] [certs] Using existing ca certificate authority
I0331 18:04:57.159974 33276 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0331 18:04:57.249986 33276 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0331 18:04:57.520865 33276 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0331 18:04:58.125540 33276 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0331 18:04:58.484579 33276 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0331 18:04:58.862388 33276 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0331 18:04:58.862887 33276 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [auto-347180 localhost] and IPs [192.168.72.199 127.0.0.1 ::1]
*
* ==> Docker <==
* -- Journal begins at Fri 2023-03-31 18:02:19 UTC, ends at Fri 2023-03-31 18:04:59 UTC. --
Mar 31 18:04:32 pause-939189 dockerd[4567]: time="2023-03-31T18:04:32.708286788Z" level=warning msg="cleaning up after shim disconnected" id=b400c024f135f7c82274f810b9ce06d15d41eb95e87b7caae02c5db9542e56db namespace=moby
Mar 31 18:04:32 pause-939189 dockerd[4567]: time="2023-03-31T18:04:32.708340669Z" level=info msg="cleaning up dead shim" namespace=moby
Mar 31 18:04:32 pause-939189 cri-dockerd[5345]: W0331 18:04:32.836659 5345 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.348379648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.348500345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.348521902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.348533652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.357176945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.357265075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.357291341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 31 18:04:35 pause-939189 dockerd[4567]: time="2023-03-31T18:04:35.357305204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 31 18:04:39 pause-939189 cri-dockerd[5345]: time="2023-03-31T18:04:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.947465780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.947526265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.947543565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.947555826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.953976070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.954296632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.954453909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 31 18:04:40 pause-939189 dockerd[4567]: time="2023-03-31T18:04:40.954623054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 31 18:04:41 pause-939189 cri-dockerd[5345]: time="2023-03-31T18:04:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/11bb612576207ce6f9fdbde8dfa7f6235a96c8d3be559f2e51d8d4b173aa4b51/resolv.conf as [nameserver 192.168.122.1]"
Mar 31 18:04:41 pause-939189 dockerd[4567]: time="2023-03-31T18:04:41.977346347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 31 18:04:41 pause-939189 dockerd[4567]: time="2023-03-31T18:04:41.977635522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 31 18:04:41 pause-939189 dockerd[4567]: time="2023-03-31T18:04:41.977752683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 31 18:04:41 pause-939189 dockerd[4567]: time="2023-03-31T18:04:41.977778301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
1344b5c000a9d 5185b96f0becf 18 seconds ago Running coredns 2 11bb612576207
1686d0df28f10 92ed2bec97a63 19 seconds ago Running kube-proxy 3 18b52638ab7a1
5d40b2ef4a864 5a79047369329 24 seconds ago Running kube-scheduler 3 df301869b351d
80b600760e999 fce326961ae2d 24 seconds ago Running etcd 3 1089f600d6711
84de5d76d35ca ce8c2293ef09c 28 seconds ago Running kube-controller-manager 2 55c3c7ee9ca0a
966b1cd3b351e 1d9b3cbae03ce 30 seconds ago Running kube-apiserver 2 0afb944a4f151
a0ad0a35a3e08 fce326961ae2d 45 seconds ago Exited etcd 2 c447bce0c8aef
b4599f5bff86d 5a79047369329 45 seconds ago Exited kube-scheduler 2 6981b4d73a6c9
9999f58d27656 92ed2bec97a63 47 seconds ago Exited kube-proxy 2 f5b35d44675c8
b400c024f135f 5185b96f0becf About a minute ago Exited coredns 1 5e8b08d2a8f2f
874fcc56f9f62 1d9b3cbae03ce About a minute ago Exited kube-apiserver 1 4045aa0f265a1
8ace7d6c4bee4 ce8c2293ef09c About a minute ago Exited kube-controller-manager 1 b034146fe7e8c
*
* ==> coredns [1344b5c000a9] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] 127.0.0.1:58096 - 62967 "HINFO IN 3459962459257687508.4367275231804161359. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020935271s
*
* ==> coredns [b400c024f135] <==
* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] 127.0.0.1:42721 - 9088 "HINFO IN 8560628874867663181.8710474958470687856. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.051252273s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
*
* ==> describe nodes <==
* Name: pause-939189
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-939189
kubernetes.io/os=linux
minikube.k8s.io/commit=945b3fc45ee9ac8e1ceaffb00a71ec22c717b10e
minikube.k8s.io/name=pause-939189
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_03_31T18_03_00_0700
minikube.k8s.io/version=v1.29.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 31 Mar 2023 18:02:56 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-939189
AcquireTime: <unset>
RenewTime: Fri, 31 Mar 2023 18:04:59 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 31 Mar 2023 18:04:39 +0000 Fri, 31 Mar 2023 18:02:53 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 31 Mar 2023 18:04:39 +0000 Fri, 31 Mar 2023 18:02:53 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 31 Mar 2023 18:04:39 +0000 Fri, 31 Mar 2023 18:02:53 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 31 Mar 2023 18:04:39 +0000 Fri, 31 Mar 2023 18:03:00 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.142
Hostname: pause-939189
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017420Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017420Ki
pods: 110
System Info:
Machine ID: ff362cba6608463787695edbccc756af
System UUID: ff362cba-6608-4637-8769-5edbccc756af
Boot ID: 8edfbfeb-24ea-46a9-b4c5-e31dc2d1b4c1
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.23
Kubelet Version: v1.26.3
Kube-Proxy Version: v1.26.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system                 coredns-787d4945fb-hcrtc                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     108s
kube-system                 etcd-pause-939189                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m
kube-system                 kube-apiserver-pause-939189             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m3s
kube-system                 kube-controller-manager-pause-939189    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
kube-system                 kube-proxy-jg8p6                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
kube-system                 kube-scheduler-pause-939189             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu                750m (37%)   0 (0%)
memory             170Mi (8%)   170Mi (8%)
ephemeral-storage  0 (0%)       0 (0%)
hugepages-2Mi      0 (0%)       0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 104s kube-proxy
Normal Starting 18s kube-proxy
Normal NodeHasSufficientMemory 2m10s (x4 over 2m10s) kubelet Node pause-939189 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m10s (x4 over 2m10s) kubelet Node pause-939189 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m10s (x4 over 2m10s) kubelet Node pause-939189 status is now: NodeHasSufficientPID
Normal NodeHasSufficientPID 2m kubelet Node pause-939189 status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 2m kubelet Node pause-939189 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m kubelet Node pause-939189 status is now: NodeHasNoDiskPressure
Normal NodeAllocatableEnforced 2m kubelet Updated Node Allocatable limit across pods
Normal NodeReady 2m kubelet Node pause-939189 status is now: NodeReady
Normal Starting 2m kubelet Starting kubelet.
Normal RegisteredNode 109s node-controller Node pause-939189 event: Registered Node pause-939189 in Controller
Normal Starting 26s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 26s (x8 over 26s) kubelet Node pause-939189 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 26s (x8 over 26s) kubelet Node pause-939189 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 26s (x7 over 26s) kubelet Node pause-939189 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 26s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 8s node-controller Node pause-939189 event: Registered Node pause-939189 in Controller
*
* ==> dmesg <==
* [ +0.422579] systemd-fstab-generator[930]: Ignoring "noauto" for root device
[ +0.164482] systemd-fstab-generator[941]: Ignoring "noauto" for root device
[ +0.161981] systemd-fstab-generator[954]: Ignoring "noauto" for root device
[ +1.600832] systemd-fstab-generator[1102]: Ignoring "noauto" for root device
[ +0.111337] systemd-fstab-generator[1113]: Ignoring "noauto" for root device
[ +0.130984] systemd-fstab-generator[1124]: Ignoring "noauto" for root device
[ +0.124503] systemd-fstab-generator[1135]: Ignoring "noauto" for root device
[ +0.132321] systemd-fstab-generator[1149]: Ignoring "noauto" for root device
[ +4.351511] systemd-fstab-generator[1397]: Ignoring "noauto" for root device
[ +0.702241] kauditd_printk_skb: 68 callbacks suppressed
[ +9.105596] systemd-fstab-generator[2340]: Ignoring "noauto" for root device
[Mar31 18:03] kauditd_printk_skb: 8 callbacks suppressed
[ +5.099775] kauditd_printk_skb: 28 callbacks suppressed
[ +22.013414] systemd-fstab-generator[3826]: Ignoring "noauto" for root device
[ +0.416829] systemd-fstab-generator[3860]: Ignoring "noauto" for root device
[ +0.213956] systemd-fstab-generator[3871]: Ignoring "noauto" for root device
[ +0.230022] systemd-fstab-generator[3884]: Ignoring "noauto" for root device
[ +5.258034] kauditd_printk_skb: 4 callbacks suppressed
[ +6.349775] systemd-fstab-generator[4980]: Ignoring "noauto" for root device
[ +0.138234] systemd-fstab-generator[4991]: Ignoring "noauto" for root device
[ +0.169296] systemd-fstab-generator[5007]: Ignoring "noauto" for root device
[ +0.160988] systemd-fstab-generator[5056]: Ignoring "noauto" for root device
[ +0.226282] systemd-fstab-generator[5127]: Ignoring "noauto" for root device
[ +4.119790] kauditd_printk_skb: 37 callbacks suppressed
[Mar31 18:04] systemd-fstab-generator[7161]: Ignoring "noauto" for root device
*
* ==> etcd [80b600760e99] <==
* {"level":"warn","ts":"2023-03-31T18:04:50.753Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-03-31T18:04:50.314Z","time spent":"439.098122ms","remote":"127.0.0.1:52040","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6620,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-939189\" mod_revision:461 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-939189\" value_size:6558 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-939189\" > >"}
{"level":"warn","ts":"2023-03-31T18:04:50.754Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"212.221672ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:341"}
{"level":"info","ts":"2023-03-31T18:04:50.754Z","caller":"traceutil/trace.go:171","msg":"trace[1823280090] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:462; }","duration":"212.395865ms","start":"2023-03-31T18:04:50.542Z","end":"2023-03-31T18:04:50.754Z","steps":["trace[1823280090] 'agreement among raft nodes before linearized reading' (duration: 212.138709ms)"],"step_count":1}
{"level":"warn","ts":"2023-03-31T18:04:50.754Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"341.184734ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-939189\" ","response":"range_response_count:1 size:5480"}
{"level":"info","ts":"2023-03-31T18:04:50.754Z","caller":"traceutil/trace.go:171","msg":"trace[1705229913] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-939189; range_end:; response_count:1; response_revision:462; }","duration":"341.208794ms","start":"2023-03-31T18:04:50.413Z","end":"2023-03-31T18:04:50.754Z","steps":["trace[1705229913] 'agreement among raft nodes before linearized reading' (duration: 341.128291ms)"],"step_count":1}
{"level":"warn","ts":"2023-03-31T18:04:50.754Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-03-31T18:04:50.413Z","time spent":"341.245678ms","remote":"127.0.0.1:52040","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":5504,"request content":"key:\"/registry/pods/kube-system/etcd-pause-939189\" "}
{"level":"warn","ts":"2023-03-31T18:04:51.208Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"258.605359ms","expected-duration":"100ms","prefix":"","request":"header:<ID:839788533735404794 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:0ba78738d7beb4f9>","response":"size:41"}
{"level":"info","ts":"2023-03-31T18:04:51.209Z","caller":"traceutil/trace.go:171","msg":"trace[2128410207] linearizableReadLoop","detail":"{readStateIndex:500; appliedIndex:499; }","duration":"296.499176ms","start":"2023-03-31T18:04:50.912Z","end":"2023-03-31T18:04:51.209Z","steps":["trace[2128410207] 'read index received' (duration: 37.740315ms)","trace[2128410207] 'applied index is now lower than readState.Index' (duration: 258.757557ms)"],"step_count":2}
{"level":"warn","ts":"2023-03-31T18:04:51.209Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"296.647465ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-939189\" ","response":"range_response_count:1 size:5480"}
{"level":"info","ts":"2023-03-31T18:04:51.209Z","caller":"traceutil/trace.go:171","msg":"trace[478960090] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-939189; range_end:; response_count:1; response_revision:462; }","duration":"296.673964ms","start":"2023-03-31T18:04:50.912Z","end":"2023-03-31T18:04:51.209Z","steps":["trace[478960090] 'agreement among raft nodes before linearized reading' (duration: 296.561324ms)"],"step_count":1}
{"level":"warn","ts":"2023-03-31T18:04:51.209Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-03-31T18:04:50.762Z","time spent":"447.271669ms","remote":"127.0.0.1:52016","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
{"level":"info","ts":"2023-03-31T18:04:52.108Z","caller":"traceutil/trace.go:171","msg":"trace[1920228168] linearizableReadLoop","detail":"{readStateIndex:502; appliedIndex:501; }","duration":"165.267816ms","start":"2023-03-31T18:04:51.943Z","end":"2023-03-31T18:04:52.108Z","steps":["trace[1920228168] 'read index received' (duration: 165.022721ms)","trace[1920228168] 'applied index is now lower than readState.Index' (duration: 244.277µs)"],"step_count":2}
{"level":"info","ts":"2023-03-31T18:04:52.110Z","caller":"traceutil/trace.go:171","msg":"trace[1687701317] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"176.741493ms","start":"2023-03-31T18:04:51.933Z","end":"2023-03-31T18:04:52.110Z","steps":["trace[1687701317] 'process raft request' (duration: 175.168227ms)"],"step_count":1}
{"level":"warn","ts":"2023-03-31T18:04:52.112Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"168.992818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" ","response":"range_response_count:1 size:207"}
{"level":"info","ts":"2023-03-31T18:04:52.112Z","caller":"traceutil/trace.go:171","msg":"trace[1794617064] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:464; }","duration":"169.069396ms","start":"2023-03-31T18:04:51.943Z","end":"2023-03-31T18:04:52.112Z","steps":["trace[1794617064] 'agreement among raft nodes before linearized reading' (duration: 165.391165ms)"],"step_count":1}
{"level":"warn","ts":"2023-03-31T18:04:52.293Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"123.74239ms","expected-duration":"100ms","prefix":"","request":"header:<ID:839788533735404827 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-f9qtf\" mod_revision:390 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-f9qtf\" value_size:1239 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-f9qtf\" > >>","response":"size:16"}
{"level":"info","ts":"2023-03-31T18:04:52.294Z","caller":"traceutil/trace.go:171","msg":"trace[280136650] transaction","detail":"{read_only:false; response_revision:465; number_of_response:1; }","duration":"168.841202ms","start":"2023-03-31T18:04:52.125Z","end":"2023-03-31T18:04:52.294Z","steps":["trace[280136650] 'process raft request' (duration: 44.44482ms)","trace[280136650] 'compare' (duration: 123.644413ms)"],"step_count":2}
{"level":"info","ts":"2023-03-31T18:04:52.297Z","caller":"traceutil/trace.go:171","msg":"trace[929692375] transaction","detail":"{read_only:false; response_revision:467; number_of_response:1; }","duration":"142.41231ms","start":"2023-03-31T18:04:52.154Z","end":"2023-03-31T18:04:52.297Z","steps":["trace[929692375] 'process raft request' (duration: 142.313651ms)"],"step_count":1}
{"level":"info","ts":"2023-03-31T18:04:52.298Z","caller":"traceutil/trace.go:171","msg":"trace[1640521255] transaction","detail":"{read_only:false; response_revision:466; number_of_response:1; }","duration":"169.933179ms","start":"2023-03-31T18:04:52.128Z","end":"2023-03-31T18:04:52.298Z","steps":["trace[1640521255] 'process raft request' (duration: 168.949367ms)"],"step_count":1}
{"level":"info","ts":"2023-03-31T18:04:52.583Z","caller":"traceutil/trace.go:171","msg":"trace[1929288585] linearizableReadLoop","detail":"{readStateIndex:506; appliedIndex:505; }","duration":"170.211991ms","start":"2023-03-31T18:04:52.412Z","end":"2023-03-31T18:04:52.583Z","steps":["trace[1929288585] 'read index received' (duration: 128.7627ms)","trace[1929288585] 'applied index is now lower than readState.Index' (duration: 41.448583ms)"],"step_count":2}
{"level":"info","ts":"2023-03-31T18:04:52.583Z","caller":"traceutil/trace.go:171","msg":"trace[47408908] transaction","detail":"{read_only:false; response_revision:468; number_of_response:1; }","duration":"258.75753ms","start":"2023-03-31T18:04:52.324Z","end":"2023-03-31T18:04:52.583Z","steps":["trace[47408908] 'process raft request' (duration: 216.820717ms)","trace[47408908] 'compare' (duration: 41.26405ms)"],"step_count":2}
{"level":"warn","ts":"2023-03-31T18:04:52.584Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"171.519483ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-939189\" ","response":"range_response_count:1 size:5480"}
{"level":"info","ts":"2023-03-31T18:04:52.584Z","caller":"traceutil/trace.go:171","msg":"trace[1263506650] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-939189; range_end:; response_count:1; response_revision:468; }","duration":"171.595141ms","start":"2023-03-31T18:04:52.412Z","end":"2023-03-31T18:04:52.584Z","steps":["trace[1263506650] 'agreement among raft nodes before linearized reading' (duration: 171.444814ms)"],"step_count":1}
{"level":"warn","ts":"2023-03-31T18:04:52.584Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"150.725144ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2023-03-31T18:04:52.585Z","caller":"traceutil/trace.go:171","msg":"trace[213446996] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:468; }","duration":"150.795214ms","start":"2023-03-31T18:04:52.434Z","end":"2023-03-31T18:04:52.584Z","steps":["trace[213446996] 'agreement among raft nodes before linearized reading' (duration: 150.635678ms)"],"step_count":1}
*
* ==> etcd [a0ad0a35a3e0] <==
* {"level":"info","ts":"2023-03-31T18:04:14.959Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.39.142:2380"}
{"level":"info","ts":"2023-03-31T18:04:14.959Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.39.142:2380"}
{"level":"info","ts":"2023-03-31T18:04:14.959Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-03-31T18:04:14.962Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"d7a5d3e20a6b0ba7","initial-advertise-peer-urls":["https://192.168.39.142:2380"],"listen-peer-urls":["https://192.168.39.142:2380"],"advertise-client-urls":["https://192.168.39.142:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.142:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-03-31T18:04:14.962Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-03-31T18:04:15.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 is starting a new election at term 3"}
{"level":"info","ts":"2023-03-31T18:04:15.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 became pre-candidate at term 3"}
{"level":"info","ts":"2023-03-31T18:04:15.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 received MsgPreVoteResp from d7a5d3e20a6b0ba7 at term 3"}
{"level":"info","ts":"2023-03-31T18:04:15.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 became candidate at term 4"}
{"level":"info","ts":"2023-03-31T18:04:15.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 received MsgVoteResp from d7a5d3e20a6b0ba7 at term 4"}
{"level":"info","ts":"2023-03-31T18:04:15.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 became leader at term 4"}
{"level":"info","ts":"2023-03-31T18:04:15.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d7a5d3e20a6b0ba7 elected leader d7a5d3e20a6b0ba7 at term 4"}
{"level":"info","ts":"2023-03-31T18:04:15.341Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"d7a5d3e20a6b0ba7","local-member-attributes":"{Name:pause-939189 ClientURLs:[https://192.168.39.142:2379]}","request-path":"/0/members/d7a5d3e20a6b0ba7/attributes","cluster-id":"f7d6b5428c0c9dc0","publish-timeout":"7s"}
{"level":"info","ts":"2023-03-31T18:04:15.341Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-31T18:04:15.342Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-03-31T18:04:15.342Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-31T18:04:15.343Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.39.142:2379"}
{"level":"info","ts":"2023-03-31T18:04:15.347Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-03-31T18:04:15.347Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-03-31T18:04:27.719Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2023-03-31T18:04:27.719Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-939189","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.142:2380"],"advertise-client-urls":["https://192.168.39.142:2379"]}
{"level":"info","ts":"2023-03-31T18:04:27.723Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d7a5d3e20a6b0ba7","current-leader-member-id":"d7a5d3e20a6b0ba7"}
{"level":"info","ts":"2023-03-31T18:04:27.727Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.39.142:2380"}
{"level":"info","ts":"2023-03-31T18:04:27.728Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.39.142:2380"}
{"level":"info","ts":"2023-03-31T18:04:27.728Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-939189","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.142:2380"],"advertise-client-urls":["https://192.168.39.142:2379"]}
*
* ==> kernel <==
* 18:05:00 up 2 min, 0 users, load average: 2.10, 1.02, 0.39
Linux pause-939189 5.10.57 #1 SMP Wed Mar 29 23:38:32 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [874fcc56f9f6] <==
* W0331 18:04:09.094355 1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0331 18:04:10.570941 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0331 18:04:14.640331 1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
E0331 18:04:19.527936 1 run.go:74] "command failed" err="context deadline exceeded"
*
* ==> kube-apiserver [966b1cd3b351] <==
* I0331 18:04:39.222688 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0331 18:04:39.205515 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
I0331 18:04:39.314255 1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
I0331 18:04:39.316506 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0331 18:04:39.317062 1 shared_informer.go:280] Caches are synced for configmaps
I0331 18:04:39.318946 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0331 18:04:39.323304 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0331 18:04:39.338800 1 apf_controller.go:366] Running API Priority and Fairness config worker
I0331 18:04:39.338942 1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
I0331 18:04:39.339358 1 shared_informer.go:280] Caches are synced for crd-autoregister
I0331 18:04:39.397474 1 shared_informer.go:280] Caches are synced for node_authorizer
I0331 18:04:39.418720 1 cache.go:39] Caches are synced for autoregister controller
I0331 18:04:39.958002 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0331 18:04:40.221547 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0331 18:04:41.099152 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0331 18:04:41.124185 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0331 18:04:41.212998 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0331 18:04:41.267710 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0331 18:04:41.286487 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0331 18:04:51.284113 1 trace.go:219] Trace[2025945949]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.39.142,type:*v1.Endpoints,resource:apiServerIPInfo (31-Mar-2023 18:04:50.760) (total time: 523ms):
Trace[2025945949]: ---"Transaction prepared" 449ms (18:04:51.210)
Trace[2025945949]: ---"Txn call completed" 73ms (18:04:51.284)
Trace[2025945949]: [523.960493ms] [523.960493ms] END
I0331 18:04:51.929561 1 controller.go:615] quota admission added evaluator for: endpoints
I0331 18:04:52.124697 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
*
* ==> kube-controller-manager [84de5d76d35c] <==
* W0331 18:04:52.065251 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="pause-939189" does not exist
I0331 18:04:52.067639 1 shared_informer.go:280] Caches are synced for resource quota
I0331 18:04:52.076620 1 shared_informer.go:280] Caches are synced for attach detach
I0331 18:04:52.084564 1 shared_informer.go:280] Caches are synced for daemon sets
I0331 18:04:52.087592 1 shared_informer.go:280] Caches are synced for endpoint_slice
I0331 18:04:52.100706 1 shared_informer.go:280] Caches are synced for node
I0331 18:04:52.100905 1 range_allocator.go:167] Sending events to api server.
I0331 18:04:52.101097 1 range_allocator.go:171] Starting range CIDR allocator
I0331 18:04:52.101132 1 shared_informer.go:273] Waiting for caches to sync for cidrallocator
I0331 18:04:52.101145 1 shared_informer.go:280] Caches are synced for cidrallocator
I0331 18:04:52.109512 1 shared_informer.go:280] Caches are synced for GC
I0331 18:04:52.110949 1 shared_informer.go:280] Caches are synced for endpoint_slice_mirroring
I0331 18:04:52.111820 1 shared_informer.go:280] Caches are synced for resource quota
I0331 18:04:52.151113 1 shared_informer.go:280] Caches are synced for taint
I0331 18:04:52.151644 1 shared_informer.go:280] Caches are synced for TTL
I0331 18:04:52.151696 1 node_lifecycle_controller.go:1438] Initializing eviction metric for zone:
W0331 18:04:52.152283 1 node_lifecycle_controller.go:1053] Missing timestamp for Node pause-939189. Assuming now as a timestamp.
I0331 18:04:52.152564 1 node_lifecycle_controller.go:1254] Controller detected that zone is now in state Normal.
I0331 18:04:52.152806 1 taint_manager.go:206] "Starting NoExecuteTaintManager"
I0331 18:04:52.153068 1 taint_manager.go:211] "Sending events to api server"
I0331 18:04:52.154301 1 event.go:294] "Event occurred" object="pause-939189" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-939189 event: Registered Node pause-939189 in Controller"
I0331 18:04:52.157444 1 shared_informer.go:280] Caches are synced for persistent volume
I0331 18:04:52.506059 1 shared_informer.go:280] Caches are synced for garbage collector
I0331 18:04:52.506479 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0331 18:04:52.533136 1 shared_informer.go:280] Caches are synced for garbage collector
*
* ==> kube-controller-manager [8ace7d6c4bee] <==
* I0331 18:03:59.321744 1 serving.go:348] Generated self-signed cert in-memory
I0331 18:03:59.853937 1 controllermanager.go:182] Version: v1.26.3
I0331 18:03:59.853990 1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0331 18:03:59.855979 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0331 18:03:59.856127 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0331 18:03:59.856668 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0331 18:03:59.856802 1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
F0331 18:04:20.535428 1 controllermanager.go:228] error building controller context: failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get "https://192.168.39.142:8443/healthz": dial tcp 192.168.39.142:8443: connect: connection refused
*
* ==> kube-proxy [1686d0df28f1] <==
* I0331 18:04:41.170371 1 node.go:163] Successfully retrieved node IP: 192.168.39.142
I0331 18:04:41.170425 1 server_others.go:109] "Detected node IP" address="192.168.39.142"
I0331 18:04:41.170450 1 server_others.go:535] "Using iptables proxy"
I0331 18:04:41.271349 1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0331 18:04:41.271390 1 server_others.go:176] "Using iptables Proxier"
I0331 18:04:41.271446 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0331 18:04:41.271898 1 server.go:655] "Version info" version="v1.26.3"
I0331 18:04:41.271978 1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0331 18:04:41.276289 1 config.go:317] "Starting service config controller"
I0331 18:04:41.276432 1 shared_informer.go:273] Waiting for caches to sync for service config
I0331 18:04:41.276461 1 config.go:226] "Starting endpoint slice config controller"
I0331 18:04:41.276465 1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
I0331 18:04:41.277123 1 config.go:444] "Starting node config controller"
I0331 18:04:41.277131 1 shared_informer.go:273] Waiting for caches to sync for node config
I0331 18:04:41.376963 1 shared_informer.go:280] Caches are synced for endpoint slice config
I0331 18:04:41.377002 1 shared_informer.go:280] Caches are synced for service config
I0331 18:04:41.377248 1 shared_informer.go:280] Caches are synced for node config
*
* ==> kube-proxy [9999f58d2765] <==
* E0331 18:04:20.538153 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-939189": dial tcp 192.168.39.142:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.142:56890->192.168.39.142:8443: read: connection reset by peer
E0331 18:04:21.665395 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-939189": dial tcp 192.168.39.142:8443: connect: connection refused
E0331 18:04:23.920058 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-939189": dial tcp 192.168.39.142:8443: connect: connection refused
*
* ==> kube-scheduler [5d40b2ef4a86] <==
* I0331 18:04:36.274158 1 serving.go:348] Generated self-signed cert in-memory
W0331 18:04:39.233042 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0331 18:04:39.233351 1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0331 18:04:39.233637 1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
W0331 18:04:39.233672 1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0331 18:04:39.306413 1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.3"
I0331 18:04:39.306462 1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0331 18:04:39.308017 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0331 18:04:39.308563 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0331 18:04:39.308610 1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0331 18:04:39.308627 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0331 18:04:39.409801 1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [b4599f5bff86] <==
* E0331 18:04:24.036070 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.142:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
W0331 18:04:24.450493 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.39.142:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
E0331 18:04:24.450560 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.142:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
W0331 18:04:24.681951 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.142:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
E0331 18:04:24.682039 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.142:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
W0331 18:04:24.877656 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.39.142:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
E0331 18:04:24.878016 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.142:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
W0331 18:04:24.900986 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.39.142:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
E0331 18:04:24.901338 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.142:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
W0331 18:04:24.987726 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.39.142:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
E0331 18:04:24.988045 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.142:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
W0331 18:04:25.024394 1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.142:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
E0331 18:04:25.024478 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.142:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
W0331 18:04:25.132338 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.39.142:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
E0331 18:04:25.132589 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.142:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
W0331 18:04:26.745186 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.39.142:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
E0331 18:04:26.745273 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.142:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
W0331 18:04:26.909186 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.142:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
E0331 18:04:26.909259 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.142:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
W0331 18:04:27.588118 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.39.142:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
E0331 18:04:27.588180 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.142:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.142:8443: connect: connection refused
I0331 18:04:27.668688 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
E0331 18:04:27.668780 1 shared_informer.go:276] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0331 18:04:27.668791 1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
E0331 18:04:27.669106 1 run.go:74] "command failed" err="finished without leader elect"
*
* ==> kubelet <==
* -- Journal begins at Fri 2023-03-31 18:02:19 UTC, ends at Fri 2023-03-31 18:05:00 UTC. --
Mar 31 18:04:35 pause-939189 kubelet[7167]: I0331 18:04:35.060753 7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbeb3c050b5e21453f641a818794f61-kubeconfig\") pod \"kube-controller-manager-pause-939189\" (UID: \"5bbeb3c050b5e21453f641a818794f61\") " pod="kube-system/kube-controller-manager-pause-939189"
Mar 31 18:04:35 pause-939189 kubelet[7167]: I0331 18:04:35.060806 7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbeb3c050b5e21453f641a818794f61-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-939189\" (UID: \"5bbeb3c050b5e21453f641a818794f61\") " pod="kube-system/kube-controller-manager-pause-939189"
Mar 31 18:04:35 pause-939189 kubelet[7167]: I0331 18:04:35.173548 7167 scope.go:115] "RemoveContainer" containerID="a0ad0a35a3e08720ef402cc44066aa6415d3380188ccf061278936b018f9164f"
Mar 31 18:04:35 pause-939189 kubelet[7167]: I0331 18:04:35.206303 7167 scope.go:115] "RemoveContainer" containerID="b4599f5bff86da254627b8fa420dbfa886e737fe4bf8140cd8ac5ec3f882a89e"
Mar 31 18:04:35 pause-939189 kubelet[7167]: I0331 18:04:35.871491 7167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5b35d44675c82be44631616cd6f0a52aa1dc911e88776342deacc611d359e35"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.403200 7167 kubelet_node_status.go:108] "Node was previously registered" node="pause-939189"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.403314 7167 kubelet_node_status.go:73] "Successfully registered node" node="pause-939189"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.406119 7167 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.407529 7167 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.534534 7167 apiserver.go:52] "Watching apiserver"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.537600 7167 topology_manager.go:210] "Topology Admit Handler"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.537920 7167 topology_manager.go:210] "Topology Admit Handler"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.561329 7167 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.592448 7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd3378f4-948b-4bec-abd3-ea9dc35d3259-xtables-lock\") pod \"kube-proxy-jg8p6\" (UID: \"dd3378f4-948b-4bec-abd3-ea9dc35d3259\") " pod="kube-system/kube-proxy-jg8p6"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.592793 7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e78e1f9-1a39-4c02-a4e9-51e5b268d077-config-volume\") pod \"coredns-787d4945fb-hcrtc\" (UID: \"1e78e1f9-1a39-4c02-a4e9-51e5b268d077\") " pod="kube-system/coredns-787d4945fb-hcrtc"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.593000 7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxlhf\" (UniqueName: \"kubernetes.io/projected/dd3378f4-948b-4bec-abd3-ea9dc35d3259-kube-api-access-nxlhf\") pod \"kube-proxy-jg8p6\" (UID: \"dd3378f4-948b-4bec-abd3-ea9dc35d3259\") " pod="kube-system/kube-proxy-jg8p6"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.593182 7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd3378f4-948b-4bec-abd3-ea9dc35d3259-lib-modules\") pod \"kube-proxy-jg8p6\" (UID: \"dd3378f4-948b-4bec-abd3-ea9dc35d3259\") " pod="kube-system/kube-proxy-jg8p6"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.593344 7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n26cp\" (UniqueName: \"kubernetes.io/projected/1e78e1f9-1a39-4c02-a4e9-51e5b268d077-kube-api-access-n26cp\") pod \"coredns-787d4945fb-hcrtc\" (UID: \"1e78e1f9-1a39-4c02-a4e9-51e5b268d077\") " pod="kube-system/coredns-787d4945fb-hcrtc"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.593511 7167 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dd3378f4-948b-4bec-abd3-ea9dc35d3259-kube-proxy\") pod \"kube-proxy-jg8p6\" (UID: \"dd3378f4-948b-4bec-abd3-ea9dc35d3259\") " pod="kube-system/kube-proxy-jg8p6"
Mar 31 18:04:39 pause-939189 kubelet[7167]: I0331 18:04:39.593631 7167 reconciler.go:41] "Reconciler: start to sync state"
Mar 31 18:04:40 pause-939189 kubelet[7167]: I0331 18:04:40.739124 7167 scope.go:115] "RemoveContainer" containerID="9999f58d276569aa698d96721d17b94fa850bf4239d5df11ce622ad76d4c9c20"
Mar 31 18:04:40 pause-939189 kubelet[7167]: I0331 18:04:40.900279 7167 request.go:690] Waited for 1.195299342s due to client-side throttling, not priority and fairness, request: PATCH:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-939189/status
Mar 31 18:04:41 pause-939189 kubelet[7167]: I0331 18:04:41.825587 7167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11bb612576207ce6f9fdbde8dfa7f6235a96c8d3be559f2e51d8d4b173aa4b51"
Mar 31 18:04:43 pause-939189 kubelet[7167]: I0331 18:04:43.869081 7167 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Mar 31 18:04:45 pause-939189 kubelet[7167]: I0331 18:04:45.920720 7167 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-939189 -n pause-939189
helpers_test.go:261: (dbg) Run: kubectl --context pause-939189 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (95.40s)