=== RUN TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run: out/minikube-linux-amd64 start -p pause-763583 --alsologtostderr -v=1 --driver=kvm2
E0307 18:41:54.184792 11114 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/skaffold-056348/client.crt: no such file or directory
E0307 18:42:00.394661 11114 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/addons-280841/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-763583 --alsologtostderr -v=1 --driver=kvm2 : (1m32.949686337s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got:
-- stdout --
* [pause-763583] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=15985
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/15985-4059/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-4059/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on existing profile
* Starting control plane node pause-763583 in cluster pause-763583
* Updating the running kvm2 "pause-763583" VM ...
* Preparing Kubernetes v1.26.2 on Docker 20.10.23 ...
* Configuring bridge CNI (Container Networking Interface) ...
* Enabled addons:
* Verifying Kubernetes components...
* Done! kubectl is now configured to use "pause-763583" cluster and "default" namespace by default
-- /stdout --
** stderr **
I0307 18:41:51.666281 35395 out.go:296] Setting OutFile to fd 1 ...
I0307 18:41:51.666402 35395 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0307 18:41:51.666411 35395 out.go:309] Setting ErrFile to fd 2...
I0307 18:41:51.666416 35395 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0307 18:41:51.666510 35395 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15985-4059/.minikube/bin
I0307 18:41:51.667080 35395 out.go:303] Setting JSON to false
I0307 18:41:51.668030 35395 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5064,"bootTime":1678209448,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0307 18:41:51.668205 35395 start.go:135] virtualization: kvm guest
I0307 18:41:51.671277 35395 out.go:177] * [pause-763583] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
I0307 18:41:51.673328 35395 out.go:177] - MINIKUBE_LOCATION=15985
I0307 18:41:51.673275 35395 notify.go:220] Checking for updates...
I0307 18:41:51.674904 35395 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0307 18:41:51.676605 35395 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15985-4059/kubeconfig
I0307 18:41:51.678017 35395 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-4059/.minikube
I0307 18:41:51.679458 35395 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0307 18:41:51.680880 35395 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0307 18:41:51.682821 35395 config.go:182] Loaded profile config "pause-763583": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0307 18:41:51.683237 35395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0307 18:41:51.683296 35395 main.go:141] libmachine: Launching plugin server for driver kvm2
I0307 18:41:51.697429 35395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43733
I0307 18:41:51.697858 35395 main.go:141] libmachine: () Calling .GetVersion
I0307 18:41:51.698371 35395 main.go:141] libmachine: Using API Version 1
I0307 18:41:51.698393 35395 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 18:41:51.698777 35395 main.go:141] libmachine: () Calling .GetMachineName
I0307 18:41:51.698963 35395 main.go:141] libmachine: (pause-763583) Calling .DriverName
I0307 18:41:51.699144 35395 driver.go:365] Setting default libvirt URI to qemu:///system
I0307 18:41:51.699545 35395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0307 18:41:51.699585 35395 main.go:141] libmachine: Launching plugin server for driver kvm2
I0307 18:41:51.713457 35395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46467
I0307 18:41:51.713815 35395 main.go:141] libmachine: () Calling .GetVersion
I0307 18:41:51.714238 35395 main.go:141] libmachine: Using API Version 1
I0307 18:41:51.714254 35395 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 18:41:51.714558 35395 main.go:141] libmachine: () Calling .GetMachineName
I0307 18:41:51.714720 35395 main.go:141] libmachine: (pause-763583) Calling .DriverName
I0307 18:41:51.750474 35395 out.go:177] * Using the kvm2 driver based on existing profile
I0307 18:41:51.751983 35395 start.go:296] selected driver: kvm2
I0307 18:41:51.752001 35395 start.go:857] validating driver "kvm2" against &{Name:pause-763583 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:pause-763583 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.47 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0307 18:41:51.752134 35395 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0307 18:41:51.752378 35395 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0307 18:41:51.752446 35395 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15985-4059/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0307 18:41:51.766585 35395 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.29.0
I0307 18:41:51.767286 35395 cni.go:84] Creating CNI manager for ""
I0307 18:41:51.767306 35395 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0307 18:41:51.767318 35395 start_flags.go:319] config:
{Name:pause-763583 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:pause-763583 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.47 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0307 18:41:51.767481 35395 iso.go:125] acquiring lock: {Name:mkf75c329a61b8189e3f3e4bd561d5125dafa20c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0307 18:41:51.769459 35395 out.go:177] * Starting control plane node pause-763583 in cluster pause-763583
I0307 18:41:51.770876 35395 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
I0307 18:41:51.770931 35395 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15985-4059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4
I0307 18:41:51.770949 35395 cache.go:57] Caching tarball of preloaded images
I0307 18:41:51.771043 35395 preload.go:174] Found /home/jenkins/minikube-integration/15985-4059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0307 18:41:51.771058 35395 cache.go:60] Finished verifying existence of preloaded tar for v1.26.2 on docker
I0307 18:41:51.771197 35395 profile.go:148] Saving config to /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/pause-763583/config.json ...
I0307 18:41:51.771424 35395 cache.go:193] Successfully downloaded all kic artifacts
I0307 18:41:51.771469 35395 start.go:364] acquiring machines lock for pause-763583: {Name:mkdc620a3744ce597744f8ea42dba23b3f56e106 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0307 18:42:07.012368 35395 start.go:368] acquired machines lock for "pause-763583" in 15.240812726s
I0307 18:42:07.012432 35395 start.go:96] Skipping create...Using existing machine configuration
I0307 18:42:07.012443 35395 fix.go:55] fixHost starting:
I0307 18:42:07.012876 35395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0307 18:42:07.012968 35395 main.go:141] libmachine: Launching plugin server for driver kvm2
I0307 18:42:07.030337 35395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37663
I0307 18:42:07.030713 35395 main.go:141] libmachine: () Calling .GetVersion
I0307 18:42:07.031266 35395 main.go:141] libmachine: Using API Version 1
I0307 18:42:07.031287 35395 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 18:42:07.031732 35395 main.go:141] libmachine: () Calling .GetMachineName
I0307 18:42:07.031985 35395 main.go:141] libmachine: (pause-763583) Calling .DriverName
I0307 18:42:07.032150 35395 main.go:141] libmachine: (pause-763583) Calling .GetState
I0307 18:42:07.033865 35395 fix.go:103] recreateIfNeeded on pause-763583: state=Running err=<nil>
W0307 18:42:07.033912 35395 fix.go:129] unexpected machine state, will restart: <nil>
I0307 18:42:07.036399 35395 out.go:177] * Updating the running kvm2 "pause-763583" VM ...
I0307 18:42:07.037827 35395 machine.go:88] provisioning docker machine ...
I0307 18:42:07.037852 35395 main.go:141] libmachine: (pause-763583) Calling .DriverName
I0307 18:42:07.038067 35395 main.go:141] libmachine: (pause-763583) Calling .GetMachineName
I0307 18:42:07.038242 35395 buildroot.go:166] provisioning hostname "pause-763583"
I0307 18:42:07.038262 35395 main.go:141] libmachine: (pause-763583) Calling .GetMachineName
I0307 18:42:07.038403 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHHostname
I0307 18:42:07.041062 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:42:07.041421 35395 main.go:141] libmachine: (pause-763583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7e:f8", ip: ""} in network mk-pause-763583: {Iface:virbr3 ExpiryTime:2023-03-07 19:40:49 +0000 UTC Type:0 Mac:52:54:00:7d:7e:f8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:pause-763583 Clientid:01:52:54:00:7d:7e:f8}
I0307 18:42:07.041448 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined IP address 192.168.61.47 and MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:42:07.041596 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHPort
I0307 18:42:07.041758 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHKeyPath
I0307 18:42:07.041908 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHKeyPath
I0307 18:42:07.042066 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHUsername
I0307 18:42:07.042236 35395 main.go:141] libmachine: Using SSH client type: native
I0307 18:42:07.042675 35395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil> [] 0s} 192.168.61.47 22 <nil> <nil>}
I0307 18:42:07.042695 35395 main.go:141] libmachine: About to run SSH command:
sudo hostname pause-763583 && echo "pause-763583" | sudo tee /etc/hostname
I0307 18:42:07.180674 35395 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-763583
I0307 18:42:07.180705 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHHostname
I0307 18:42:07.183579 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:42:07.184028 35395 main.go:141] libmachine: (pause-763583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7e:f8", ip: ""} in network mk-pause-763583: {Iface:virbr3 ExpiryTime:2023-03-07 19:40:49 +0000 UTC Type:0 Mac:52:54:00:7d:7e:f8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:pause-763583 Clientid:01:52:54:00:7d:7e:f8}
I0307 18:42:07.184059 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined IP address 192.168.61.47 and MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:42:07.184274 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHPort
I0307 18:42:07.184511 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHKeyPath
I0307 18:42:07.184707 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHKeyPath
I0307 18:42:07.184889 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHUsername
I0307 18:42:07.185115 35395 main.go:141] libmachine: Using SSH client type: native
I0307 18:42:07.185586 35395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil> [] 0s} 192.168.61.47 22 <nil> <nil>}
I0307 18:42:07.185604 35395 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\spause-763583' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-763583/g' /etc/hosts;
  else
    echo '127.0.1.1 pause-763583' | sudo tee -a /etc/hosts;
  fi
fi
I0307 18:42:07.311535 35395 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0307 18:42:07.311564 35395 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15985-4059/.minikube CaCertPath:/home/jenkins/minikube-integration/15985-4059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15985-4059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15985-4059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15985-4059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15985-4059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15985-4059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15985-4059/.minikube}
I0307 18:42:07.311582 35395 buildroot.go:174] setting up certificates
I0307 18:42:07.311589 35395 provision.go:83] configureAuth start
I0307 18:42:07.311597 35395 main.go:141] libmachine: (pause-763583) Calling .GetMachineName
I0307 18:42:07.311969 35395 main.go:141] libmachine: (pause-763583) Calling .GetIP
I0307 18:42:07.315105 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:42:07.315571 35395 main.go:141] libmachine: (pause-763583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7e:f8", ip: ""} in network mk-pause-763583: {Iface:virbr3 ExpiryTime:2023-03-07 19:40:49 +0000 UTC Type:0 Mac:52:54:00:7d:7e:f8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:pause-763583 Clientid:01:52:54:00:7d:7e:f8}
I0307 18:42:07.315605 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined IP address 192.168.61.47 and MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:42:07.315816 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHHostname
I0307 18:42:07.317729 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:42:07.318020 35395 main.go:141] libmachine: (pause-763583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7e:f8", ip: ""} in network mk-pause-763583: {Iface:virbr3 ExpiryTime:2023-03-07 19:40:49 +0000 UTC Type:0 Mac:52:54:00:7d:7e:f8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:pause-763583 Clientid:01:52:54:00:7d:7e:f8}
I0307 18:42:07.318051 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined IP address 192.168.61.47 and MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:42:07.318191 35395 provision.go:138] copyHostCerts
I0307 18:42:07.318260 35395 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-4059/.minikube/ca.pem, removing ...
I0307 18:42:07.318273 35395 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-4059/.minikube/ca.pem
I0307 18:42:07.318349 35395 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-4059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15985-4059/.minikube/ca.pem (1078 bytes)
I0307 18:42:07.318463 35395 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-4059/.minikube/cert.pem, removing ...
I0307 18:42:07.318477 35395 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-4059/.minikube/cert.pem
I0307 18:42:07.318508 35395 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-4059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15985-4059/.minikube/cert.pem (1123 bytes)
I0307 18:42:07.318577 35395 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-4059/.minikube/key.pem, removing ...
I0307 18:42:07.318587 35395 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-4059/.minikube/key.pem
I0307 18:42:07.318619 35395 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-4059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15985-4059/.minikube/key.pem (1675 bytes)
I0307 18:42:07.318686 35395 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15985-4059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15985-4059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15985-4059/.minikube/certs/ca-key.pem org=jenkins.pause-763583 san=[192.168.61.47 192.168.61.47 localhost 127.0.0.1 minikube pause-763583]
I0307 18:42:07.392230 35395 provision.go:172] copyRemoteCerts
I0307 18:42:07.392291 35395 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0307 18:42:07.392312 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHHostname
I0307 18:42:07.395234 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:42:07.395670 35395 main.go:141] libmachine: (pause-763583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7e:f8", ip: ""} in network mk-pause-763583: {Iface:virbr3 ExpiryTime:2023-03-07 19:40:49 +0000 UTC Type:0 Mac:52:54:00:7d:7e:f8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:pause-763583 Clientid:01:52:54:00:7d:7e:f8}
I0307 18:42:07.395709 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined IP address 192.168.61.47 and MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:42:07.395926 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHPort
I0307 18:42:07.396113 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHKeyPath
I0307 18:42:07.396277 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHUsername
I0307 18:42:07.396430 35395 sshutil.go:53] new ssh client: &{IP:192.168.61.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15985-4059/.minikube/machines/pause-763583/id_rsa Username:docker}
I0307 18:42:07.492752 35395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0307 18:42:07.515699 35395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
I0307 18:42:07.543624 35395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0307 18:42:07.568653 35395 provision.go:86] duration metric: configureAuth took 257.050071ms
I0307 18:42:07.568689 35395 buildroot.go:189] setting minikube options for container-runtime
I0307 18:42:07.568993 35395 config.go:182] Loaded profile config "pause-763583": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0307 18:42:07.569027 35395 main.go:141] libmachine: (pause-763583) Calling .DriverName
I0307 18:42:07.569356 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHHostname
I0307 18:42:07.572671 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:42:07.573051 35395 main.go:141] libmachine: (pause-763583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7e:f8", ip: ""} in network mk-pause-763583: {Iface:virbr3 ExpiryTime:2023-03-07 19:40:49 +0000 UTC Type:0 Mac:52:54:00:7d:7e:f8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:pause-763583 Clientid:01:52:54:00:7d:7e:f8}
I0307 18:42:07.573089 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined IP address 192.168.61.47 and MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:42:07.573165 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHPort
I0307 18:42:07.573376 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHKeyPath
I0307 18:42:07.573528 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHKeyPath
I0307 18:42:07.573660 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHUsername
I0307 18:42:07.573787 35395 main.go:141] libmachine: Using SSH client type: native
I0307 18:42:07.574247 35395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil> [] 0s} 192.168.61.47 22 <nil> <nil>}
I0307 18:42:07.574261 35395 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0307 18:42:07.698459 35395 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0307 18:42:07.698494 35395 buildroot.go:70] root file system type: tmpfs
I0307 18:42:07.698614 35395 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0307 18:42:07.698634 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHHostname
I0307 18:42:07.701456 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:42:07.701928 35395 main.go:141] libmachine: (pause-763583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7e:f8", ip: ""} in network mk-pause-763583: {Iface:virbr3 ExpiryTime:2023-03-07 19:40:49 +0000 UTC Type:0 Mac:52:54:00:7d:7e:f8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:pause-763583 Clientid:01:52:54:00:7d:7e:f8}
I0307 18:42:07.701959 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined IP address 192.168.61.47 and MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:42:07.702165 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHPort
I0307 18:42:07.702371 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHKeyPath
I0307 18:42:07.702534 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHKeyPath
I0307 18:42:07.702689 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHUsername
I0307 18:42:07.702869 35395 main.go:141] libmachine: Using SSH client type: native
I0307 18:42:07.703476 35395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil> [] 0s} 192.168.61.47 22 <nil> <nil>}
I0307 18:42:07.703577 35395 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0307 18:42:07.844585 35395 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0307 18:42:07.844620 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHHostname
I0307 18:42:07.848074 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:42:07.848542 35395 main.go:141] libmachine: (pause-763583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7e:f8", ip: ""} in network mk-pause-763583: {Iface:virbr3 ExpiryTime:2023-03-07 19:40:49 +0000 UTC Type:0 Mac:52:54:00:7d:7e:f8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:pause-763583 Clientid:01:52:54:00:7d:7e:f8}
I0307 18:42:07.848566 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined IP address 192.168.61.47 and MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:42:07.848854 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHPort
I0307 18:42:07.849062 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHKeyPath
I0307 18:42:07.849249 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHKeyPath
I0307 18:42:07.849449 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHUsername
I0307 18:42:07.849674 35395 main.go:141] libmachine: Using SSH client type: native
I0307 18:42:07.850265 35395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil> [] 0s} 192.168.61.47 22 <nil> <nil>}
I0307 18:42:07.850294 35395 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0307 18:42:07.985359 35395 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0307 18:42:07.985389 35395 machine.go:91] provisioned docker machine in 947.544346ms
I0307 18:42:07.985400 35395 start.go:300] post-start starting for "pause-763583" (driver="kvm2")
I0307 18:42:07.985409 35395 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0307 18:42:07.985444 35395 main.go:141] libmachine: (pause-763583) Calling .DriverName
I0307 18:42:07.985749 35395 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0307 18:42:07.985780 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHHostname
I0307 18:42:07.988510 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:42:07.988902 35395 main.go:141] libmachine: (pause-763583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7e:f8", ip: ""} in network mk-pause-763583: {Iface:virbr3 ExpiryTime:2023-03-07 19:40:49 +0000 UTC Type:0 Mac:52:54:00:7d:7e:f8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:pause-763583 Clientid:01:52:54:00:7d:7e:f8}
I0307 18:42:07.988942 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined IP address 192.168.61.47 and MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:42:07.989131 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHPort
I0307 18:42:07.989336 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHKeyPath
I0307 18:42:07.989499 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHUsername
I0307 18:42:07.989623 35395 sshutil.go:53] new ssh client: &{IP:192.168.61.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15985-4059/.minikube/machines/pause-763583/id_rsa Username:docker}
I0307 18:42:08.082214 35395 ssh_runner.go:195] Run: cat /etc/os-release
I0307 18:42:08.086477 35395 info.go:137] Remote host: Buildroot 2021.02.12
I0307 18:42:08.086504 35395 filesync.go:126] Scanning /home/jenkins/minikube-integration/15985-4059/.minikube/addons for local assets ...
I0307 18:42:08.086584 35395 filesync.go:126] Scanning /home/jenkins/minikube-integration/15985-4059/.minikube/files for local assets ...
I0307 18:42:08.086685 35395 filesync.go:149] local asset: /home/jenkins/minikube-integration/15985-4059/.minikube/files/etc/ssl/certs/111142.pem -> 111142.pem in /etc/ssl/certs
I0307 18:42:08.086802 35395 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0307 18:42:08.094777 35395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/files/etc/ssl/certs/111142.pem --> /etc/ssl/certs/111142.pem (1708 bytes)
I0307 18:42:08.118059 35395 start.go:303] post-start completed in 132.643739ms
I0307 18:42:08.118088 35395 fix.go:57] fixHost completed within 1.105644399s
I0307 18:42:08.118111 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHHostname
I0307 18:42:08.121337 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:42:08.121735 35395 main.go:141] libmachine: (pause-763583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7e:f8", ip: ""} in network mk-pause-763583: {Iface:virbr3 ExpiryTime:2023-03-07 19:40:49 +0000 UTC Type:0 Mac:52:54:00:7d:7e:f8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:pause-763583 Clientid:01:52:54:00:7d:7e:f8}
I0307 18:42:08.121766 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined IP address 192.168.61.47 and MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:42:08.122017 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHPort
I0307 18:42:08.122231 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHKeyPath
I0307 18:42:08.122405 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHKeyPath
I0307 18:42:08.122579 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHUsername
I0307 18:42:08.122773 35395 main.go:141] libmachine: Using SSH client type: native
I0307 18:42:08.123175 35395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil> [] 0s} 192.168.61.47 22 <nil> <nil>}
I0307 18:42:08.123185 35395 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0307 18:42:08.249155 35395 main.go:141] libmachine: SSH cmd err, output: <nil>: 1678214528.242345090
I0307 18:42:08.249191 35395 fix.go:207] guest clock: 1678214528.242345090
I0307 18:42:08.249201 35395 fix.go:220] Guest: 2023-03-07 18:42:08.24234509 +0000 UTC Remote: 2023-03-07 18:42:08.118092792 +0000 UTC m=+16.492238133 (delta=124.252298ms)
I0307 18:42:08.249226 35395 fix.go:191] guest clock delta is within tolerance: 124.252298ms
I0307 18:42:08.249233 35395 start.go:83] releasing machines lock for "pause-763583", held for 1.23682766s
I0307 18:42:08.249258 35395 main.go:141] libmachine: (pause-763583) Calling .DriverName
I0307 18:42:08.249511 35395 main.go:141] libmachine: (pause-763583) Calling .GetIP
I0307 18:42:08.252325 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:42:08.252695 35395 main.go:141] libmachine: (pause-763583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7e:f8", ip: ""} in network mk-pause-763583: {Iface:virbr3 ExpiryTime:2023-03-07 19:40:49 +0000 UTC Type:0 Mac:52:54:00:7d:7e:f8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:pause-763583 Clientid:01:52:54:00:7d:7e:f8}
I0307 18:42:08.252715 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined IP address 192.168.61.47 and MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:42:08.253012 35395 main.go:141] libmachine: (pause-763583) Calling .DriverName
I0307 18:42:08.253542 35395 main.go:141] libmachine: (pause-763583) Calling .DriverName
I0307 18:42:08.253710 35395 main.go:141] libmachine: (pause-763583) Calling .DriverName
I0307 18:42:08.253815 35395 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0307 18:42:08.253852 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHHostname
I0307 18:42:08.253926 35395 ssh_runner.go:195] Run: cat /version.json
I0307 18:42:08.253945 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHHostname
I0307 18:42:08.256578 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:42:08.256855 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:42:08.257006 35395 main.go:141] libmachine: (pause-763583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7e:f8", ip: ""} in network mk-pause-763583: {Iface:virbr3 ExpiryTime:2023-03-07 19:40:49 +0000 UTC Type:0 Mac:52:54:00:7d:7e:f8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:pause-763583 Clientid:01:52:54:00:7d:7e:f8}
I0307 18:42:08.257040 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined IP address 192.168.61.47 and MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:42:08.257255 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHPort
I0307 18:42:08.257356 35395 main.go:141] libmachine: (pause-763583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7e:f8", ip: ""} in network mk-pause-763583: {Iface:virbr3 ExpiryTime:2023-03-07 19:40:49 +0000 UTC Type:0 Mac:52:54:00:7d:7e:f8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:pause-763583 Clientid:01:52:54:00:7d:7e:f8}
I0307 18:42:08.257389 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined IP address 192.168.61.47 and MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:42:08.257429 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHKeyPath
I0307 18:42:08.257534 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHPort
I0307 18:42:08.257609 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHUsername
I0307 18:42:08.257684 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHKeyPath
I0307 18:42:08.257753 35395 sshutil.go:53] new ssh client: &{IP:192.168.61.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15985-4059/.minikube/machines/pause-763583/id_rsa Username:docker}
I0307 18:42:08.257906 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHUsername
I0307 18:42:08.258033 35395 sshutil.go:53] new ssh client: &{IP:192.168.61.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15985-4059/.minikube/machines/pause-763583/id_rsa Username:docker}
I0307 18:42:08.377326 35395 ssh_runner.go:195] Run: systemctl --version
I0307 18:42:08.382915 35395 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0307 18:42:08.388492 35395 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0307 18:42:08.388542 35395 ssh_runner.go:195] Run: which cri-dockerd
I0307 18:42:08.392112 35395 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0307 18:42:08.400982 35395 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0307 18:42:08.416697 35395 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0307 18:42:08.424906 35395 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0307 18:42:08.424937 35395 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
I0307 18:42:08.425041 35395 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0307 18:42:08.454681 35395 docker.go:630] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0307 18:42:08.454709 35395 docker.go:560] Images already preloaded, skipping extraction
I0307 18:42:08.454729 35395 start.go:485] detecting cgroup driver to use...
I0307 18:42:08.454848 35395 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0307 18:42:08.472190 35395 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0307 18:42:08.482529 35395 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0307 18:42:08.494124 35395 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0307 18:42:08.494206 35395 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0307 18:42:08.504357 35395 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0307 18:42:08.514277 35395 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0307 18:42:08.524613 35395 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0307 18:42:08.534431 35395 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0307 18:42:08.544160 35395 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0307 18:42:08.554682 35395 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0307 18:42:08.563705 35395 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0307 18:42:08.572342 35395 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 18:42:08.724791 35395 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0307 18:42:08.748201 35395 start.go:485] detecting cgroup driver to use...
I0307 18:42:08.748295 35395 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0307 18:42:08.761786 35395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0307 18:42:08.773897 35395 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0307 18:42:08.794405 35395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0307 18:42:08.808637 35395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0307 18:42:08.822348 35395 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0307 18:42:08.839603 35395 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0307 18:42:08.982675 35395 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0307 18:42:09.118022 35395 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0307 18:42:09.118063 35395 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0307 18:42:09.135736 35395 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 18:42:09.298995 35395 ssh_runner.go:195] Run: sudo systemctl restart docker
I0307 18:42:21.991684 35395 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.692644209s)
I0307 18:42:21.991780 35395 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0307 18:42:22.131166 35395 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0307 18:42:22.299581 35395 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0307 18:42:22.480010 35395 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 18:42:22.714591 35395 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0307 18:42:22.745892 35395 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0307 18:42:22.745961 35395 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0307 18:42:22.771314 35395 start.go:553] Will wait 60s for crictl version
I0307 18:42:22.771372 35395 ssh_runner.go:195] Run: which crictl
I0307 18:42:22.778547 35395 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0307 18:42:22.956880 35395 start.go:569] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.23
RuntimeApiVersion: v1alpha2
I0307 18:42:22.956960 35395 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0307 18:42:23.023483 35395 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0307 18:42:23.066817 35395 out.go:204] * Preparing Kubernetes v1.26.2 on Docker 20.10.23 ...
I0307 18:42:23.066861 35395 main.go:141] libmachine: (pause-763583) Calling .GetIP
I0307 18:42:23.070196 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:42:23.070591 35395 main.go:141] libmachine: (pause-763583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7e:f8", ip: ""} in network mk-pause-763583: {Iface:virbr3 ExpiryTime:2023-03-07 19:40:49 +0000 UTC Type:0 Mac:52:54:00:7d:7e:f8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:pause-763583 Clientid:01:52:54:00:7d:7e:f8}
I0307 18:42:23.070617 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined IP address 192.168.61.47 and MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:42:23.070861 35395 ssh_runner.go:195] Run: grep 192.168.61.1 host.minikube.internal$ /etc/hosts
I0307 18:42:23.075204 35395 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
I0307 18:42:23.075282 35395 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0307 18:42:23.101144 35395 docker.go:630] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0307 18:42:23.101168 35395 docker.go:560] Images already preloaded, skipping extraction
I0307 18:42:23.101243 35395 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0307 18:42:23.128056 35395 docker.go:630] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0307 18:42:23.128080 35395 cache_images.go:84] Images are preloaded, skipping loading
I0307 18:42:23.128152 35395 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0307 18:42:23.162919 35395 cni.go:84] Creating CNI manager for ""
I0307 18:42:23.162949 35395 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0307 18:42:23.162960 35395 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0307 18:42:23.162982 35395 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.47 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-763583 NodeName:pause-763583 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0307 18:42:23.163139 35395 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.61.47
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/cri-dockerd.sock
  name: "pause-763583"
  kubeletExtraArgs:
    node-ip: 192.168.61.47
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.61.47"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0307 18:42:23.163237 35395 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-763583 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.47
[Install]
config:
{KubernetesVersion:v1.26.2 ClusterName:pause-763583 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0307 18:42:23.163296 35395 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
I0307 18:42:23.173671 35395 binaries.go:44] Found k8s binaries, skipping transfer
I0307 18:42:23.173735 35395 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0307 18:42:23.183124 35395 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (445 bytes)
I0307 18:42:23.199456 35395 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0307 18:42:23.216411 35395 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2091 bytes)
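The three "scp memory" lines above stream rendered bytes straight from memory to files on the VM rather than copying from local disk. A hedged sketch of the same idea over golang.org/x/crypto/ssh; the sudo-tee transfer and the key path are assumptions, as minikube's ssh_runner has its own transfer logic:

```go
package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// copyMemory writes in-memory bytes to a remote path, roughly what the
// `scp memory --> ...` log lines describe. The sudo-tee approach is an
// assumption, not minikube's implementation.
func copyMemory(client *ssh.Client, data []byte, dst string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
}

func main() {
	key, err := os.ReadFile("/home/jenkins/.ssh/id_rsa") // hypothetical key path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "192.168.61.47:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // tolerable for a throwaway test VM
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	if err := copyMemory(client, []byte("example"), "/var/tmp/minikube/kubeadm.yaml.new"); err != nil {
		panic(err)
	}
}
```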
I0307 18:42:23.232892 35395 ssh_runner.go:195] Run: grep 192.168.61.47 control-plane.minikube.internal$ /etc/hosts
I0307 18:42:23.236916 35395 certs.go:56] Setting up /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/pause-763583 for IP: 192.168.61.47
I0307 18:42:23.236952 35395 certs.go:186] acquiring lock for shared ca certs: {Name:mk09f52d1213ecfb949f8e2d1f9b4b7cd7194c22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 18:42:23.237111 35395 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15985-4059/.minikube/ca.key
I0307 18:42:23.237174 35395 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15985-4059/.minikube/proxy-client-ca.key
I0307 18:42:23.237278 35395 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/pause-763583/client.key
I0307 18:42:23.237366 35395 certs.go:311] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/pause-763583/apiserver.key.85e7fa4e
I0307 18:42:23.237428 35395 certs.go:311] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/pause-763583/proxy-client.key
I0307 18:42:23.237581 35395 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4059/.minikube/certs/home/jenkins/minikube-integration/15985-4059/.minikube/certs/11114.pem (1338 bytes)
W0307 18:42:23.237620 35395 certs.go:397] ignoring /home/jenkins/minikube-integration/15985-4059/.minikube/certs/home/jenkins/minikube-integration/15985-4059/.minikube/certs/11114_empty.pem, impossibly tiny 0 bytes
I0307 18:42:23.237634 35395 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4059/.minikube/certs/home/jenkins/minikube-integration/15985-4059/.minikube/certs/ca-key.pem (1675 bytes)
I0307 18:42:23.237668 35395 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4059/.minikube/certs/home/jenkins/minikube-integration/15985-4059/.minikube/certs/ca.pem (1078 bytes)
I0307 18:42:23.237700 35395 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4059/.minikube/certs/home/jenkins/minikube-integration/15985-4059/.minikube/certs/cert.pem (1123 bytes)
I0307 18:42:23.237729 35395 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4059/.minikube/certs/home/jenkins/minikube-integration/15985-4059/.minikube/certs/key.pem (1675 bytes)
I0307 18:42:23.237784 35395 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4059/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15985-4059/.minikube/files/etc/ssl/certs/111142.pem (1708 bytes)
I0307 18:42:23.238307 35395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/pause-763583/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0307 18:42:23.262717 35395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/pause-763583/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0307 18:42:23.289475 35395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/pause-763583/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0307 18:42:23.313406 35395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/pause-763583/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0307 18:42:23.334719 35395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0307 18:42:23.356439 35395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0307 18:42:23.377666 35395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0307 18:42:23.400392 35395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0307 18:42:23.423074 35395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0307 18:42:23.449342 35395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/certs/11114.pem --> /usr/share/ca-certificates/11114.pem (1338 bytes)
I0307 18:42:23.470329 35395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/files/etc/ssl/certs/111142.pem --> /usr/share/ca-certificates/111142.pem (1708 bytes)
I0307 18:42:23.491893 35395 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0307 18:42:23.509745 35395 ssh_runner.go:195] Run: openssl version
I0307 18:42:23.516133 35395 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0307 18:42:23.525995 35395 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0307 18:42:23.530503 35395 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 7 18:02 /usr/share/ca-certificates/minikubeCA.pem
I0307 18:42:23.530549 35395 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0307 18:42:23.535762 35395 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0307 18:42:23.544588 35395 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11114.pem && ln -fs /usr/share/ca-certificates/11114.pem /etc/ssl/certs/11114.pem"
I0307 18:42:23.554453 35395 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11114.pem
I0307 18:42:23.558568 35395 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 7 18:06 /usr/share/ca-certificates/11114.pem
I0307 18:42:23.558612 35395 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11114.pem
I0307 18:42:23.563677 35395 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11114.pem /etc/ssl/certs/51391683.0"
I0307 18:42:23.571794 35395 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111142.pem && ln -fs /usr/share/ca-certificates/111142.pem /etc/ssl/certs/111142.pem"
I0307 18:42:23.581268 35395 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111142.pem
I0307 18:42:23.585510 35395 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 7 18:06 /usr/share/ca-certificates/111142.pem
I0307 18:42:23.585563 35395 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111142.pem
I0307 18:42:23.591542 35395 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111142.pem /etc/ssl/certs/3ec20f2e.0"
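The openssl/ln pairs above install each extra CA certificate under its OpenSSL subject hash in /etc/ssl/certs, which is how TLS libraries on the VM locate trust roots. A small sketch of the same two steps shelled out from Go; the cert path is taken from the log and the program is illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkCert computes the OpenSSL subject hash of a PEM certificate and
// symlinks it into /etc/ssl/certs/<hash>.0, mirroring the openssl/ln
// commands in the log above.
func linkCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// -f replaces a stale link, matching the "ln -fs" in the log.
	return exec.Command("sudo", "ln", "-fs", pem, link).Run()
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}
```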
I0307 18:42:23.599800 35395 kubeadm.go:401] StartCluster: {Name:pause-763583 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:pause-763583 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.47 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0307 18:42:23.599973 35395 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0307 18:42:23.624003 35395 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0307 18:42:23.632690 35395 kubeadm.go:416] found existing configuration files, will attempt cluster restart
I0307 18:42:23.632711 35395 kubeadm.go:633] restartCluster start
I0307 18:42:23.632761 35395 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0307 18:42:23.641324 35395 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0307 18:42:23.641981 35395 kubeconfig.go:92] found "pause-763583" server: "https://192.168.61.47:8443"
I0307 18:42:23.643049 35395 kapi.go:59] client config for pause-763583: &rest.Config{Host:"https://192.168.61.47:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15985-4059/.minikube/profiles/pause-763583/client.crt", KeyFile:"/home/jenkins/minikube-integration/15985-4059/.minikube/profiles/pause-763583/client.key", CAFile:"/home/jenkins/minikube-integration/15985-4059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29a5480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
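The rest.Config dump above is the client configuration minikube derives from the profile's client certificate. For reference, a sketch that builds an equivalent config with client-go and lists kube-system pods; the paths are copied from the log, and the surrounding program is illustrative:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Mirrors the rest.Config dumped above: host plus mTLS files from the
	// profile directory.
	cfg := &rest.Config{
		Host: "https://192.168.61.47:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/15985-4059/.minikube/profiles/pause-763583/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/15985-4059/.minikube/profiles/pause-763583/client.key",
			CAFile:   "/home/jenkins/minikube-integration/15985-4059/.minikube/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-system pods:", len(pods.Items))
}
```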
I0307 18:42:23.643817 35395 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0307 18:42:23.652409 35395 api_server.go:165] Checking apiserver status ...
I0307 18:42:23.652461 35395 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:42:23.663392 35395 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:42:24.164125 35395 api_server.go:165] Checking apiserver status ...
I0307 18:42:24.164207 35395 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:42:24.186033 35395 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:42:24.663601 35395 api_server.go:165] Checking apiserver status ...
I0307 18:42:24.663689 35395 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:42:24.692897 35395 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:42:25.164436 35395 api_server.go:165] Checking apiserver status ...
I0307 18:42:25.164503 35395 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:42:25.181017 35395 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5704/cgroup
I0307 18:42:25.195536 35395 api_server.go:181] apiserver freezer: "7:freezer:/kubepods/burstable/pod1c380661a097a426b4e5c4b08467a92f/323901da5efdccbdd772176a50c42b386fdaea2dc1ccd08b8a3ca5ce6d233e31"
I0307 18:42:25.195609 35395 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod1c380661a097a426b4e5c4b08467a92f/323901da5efdccbdd772176a50c42b386fdaea2dc1ccd08b8a3ca5ce6d233e31/freezer.state
I0307 18:42:25.207797 35395 api_server.go:203] freezer state: "THAWED"
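To distinguish a paused cluster from a stopped one, the two commands above resolve the apiserver's cgroup v1 freezer path from /proc/&lt;pid&gt;/cgroup and read freezer.state; THAWED means the container is running, FROZEN would mean paused. A minimal sketch of that check, assuming a cgroup v1 host (cgroup v2 has no freezer hierarchy):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// freezerState resolves a process's cgroup v1 freezer path and reads its
// state, mirroring the egrep + cat pair in the log above.
func freezerState(pid int) (string, error) {
	raw, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(string(raw), "\n") {
		// Lines look like "7:freezer:/kubepods/burstable/pod.../323901da...".
		parts := strings.SplitN(line, ":", 3)
		if len(parts) == 3 && parts[1] == "freezer" {
			state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
			if err != nil {
				return "", err
			}
			return strings.TrimSpace(string(state)), nil
		}
	}
	return "", fmt.Errorf("no freezer cgroup for pid %d", pid)
}

func main() {
	state, err := freezerState(5704) // pid taken from the log above
	if err != nil {
		panic(err)
	}
	fmt.Println("freezer state:", state) // expect THAWED for a running apiserver
}
```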
I0307 18:42:25.207831 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:42:30.208502 35395 api_server.go:268] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0307 18:42:30.208567 35395 retry.go:31] will retry after 200.240444ms: state is "Stopped"
I0307 18:42:30.408943 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:42:35.409925 35395 api_server.go:268] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0307 18:42:35.409988 35395 retry.go:31] will retry after 289.370174ms: state is "Stopped"
I0307 18:42:35.699408 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:42:40.700655 35395 api_server.go:268] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0307 18:42:40.700719 35395 api_server.go:165] Checking apiserver status ...
I0307 18:42:40.700770 35395 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:42:40.713699 35395 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5704/cgroup
I0307 18:42:40.721564 35395 api_server.go:181] apiserver freezer: "7:freezer:/kubepods/burstable/pod1c380661a097a426b4e5c4b08467a92f/323901da5efdccbdd772176a50c42b386fdaea2dc1ccd08b8a3ca5ce6d233e31"
I0307 18:42:40.721635 35395 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod1c380661a097a426b4e5c4b08467a92f/323901da5efdccbdd772176a50c42b386fdaea2dc1ccd08b8a3ca5ce6d233e31/freezer.state
I0307 18:42:40.730357 35395 api_server.go:203] freezer state: "THAWED"
I0307 18:42:40.730387 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:42:45.731574 35395 api_server.go:268] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0307 18:42:45.731614 35395 retry.go:31] will retry after 286.63804ms: state is "Stopped"
I0307 18:42:46.019198 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:42:46.218865 35395 api_server.go:268] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": read tcp 192.168.61.1:53282->192.168.61.47:8443: read: connection reset by peer
I0307 18:42:46.218919 35395 retry.go:31] will retry after 319.15688ms: state is "Stopped"
I0307 18:42:46.538370 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:42:46.538958 35395 api_server.go:268] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0307 18:42:46.539004 35395 retry.go:31] will retry after 304.203408ms: state is "Stopped"
I0307 18:42:46.843493 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:42:46.844047 35395 api_server.go:268] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0307 18:42:46.844093 35395 retry.go:31] will retry after 526.370381ms: state is "Stopped"
I0307 18:42:47.371157 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:42:47.371744 35395 api_server.go:268] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0307 18:42:47.371783 35395 retry.go:31] will retry after 562.618226ms: state is "Stopped"
I0307 18:42:47.935207 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:42:47.935878 35395 api_server.go:268] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0307 18:42:47.935942 35395 retry.go:31] will retry after 649.308225ms: state is "Stopped"
I0307 18:42:48.585720 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:42:48.586297 35395 api_server.go:268] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0307 18:42:48.586334 35395 retry.go:31] will retry after 922.490669ms: state is "Stopped"
I0307 18:42:49.509323 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:42:49.509933 35395 api_server.go:268] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0307 18:42:49.509968 35395 retry.go:31] will retry after 1.472651629s: state is "Stopped"
I0307 18:42:50.983550 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:42:50.984316 35395 api_server.go:268] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0307 18:42:50.984353 35395 retry.go:31] will retry after 1.155893064s: state is "Stopped"
I0307 18:42:52.140610 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:42:52.141192 35395 api_server.go:268] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0307 18:42:52.141239 35395 retry.go:31] will retry after 1.85690538s: state is "Stopped"
I0307 18:42:53.999147 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:42:53.999898 35395 api_server.go:268] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0307 18:42:53.999940 35395 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
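Each failed probe above is followed by a retry.go line with a growing, jittered delay, until the overall wait times out and restartCluster concludes the apiserver needs reconfiguring. A sketch of that retry shape; the exact growth and jitter factors are assumptions:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling check until it succeeds or the deadline
// passes, sleeping a jittered, growing interval between attempts, which is
// the shape of the retry.go lines above.
func retryWithBackoff(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond // initial delay; real values are jittered
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for the condition: %w", err)
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // jitter factor is an assumption
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the base delay
	}
}

func main() {
	err := retryWithBackoff(2*time.Second, func() error {
		return errors.New(`state is "Stopped"`)
	})
	fmt.Println(err)
}
```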
I0307 18:42:53.999951 35395 kubeadm.go:1120] stopping kube-system containers ...
I0307 18:42:53.999998 35395 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0307 18:42:54.040053 35395 docker.go:456] Stopping containers: [94878c02897c 6e5a6ab1db37 ada79eb25afe 807b657d81c5 c6e309b2a141 1aa5eca48ed3 373afa3584ae 4b088e44e128 323901da5efd 764a0fa4725b 2bd1468d967e 0a00ef3151aa f9d1796ee3f0 b81f4ec7d260 c84f6577b84f b5b4e07d58df 5281523465d7 dfaf3b7ca0d1 5556e905b5fa 2e5eea8b121f 45b4271fe381 0668cd3a4224 8a62b9dea3ca]
I0307 18:42:54.040153 35395 ssh_runner.go:195] Run: docker stop 94878c02897c 6e5a6ab1db37 ada79eb25afe 807b657d81c5 c6e309b2a141 1aa5eca48ed3 373afa3584ae 4b088e44e128 323901da5efd 764a0fa4725b 2bd1468d967e 0a00ef3151aa f9d1796ee3f0 b81f4ec7d260 c84f6577b84f b5b4e07d58df 5281523465d7 dfaf3b7ca0d1 5556e905b5fa 2e5eea8b121f 45b4271fe381 0668cd3a4224 8a62b9dea3ca
I0307 18:42:59.232596 35395 ssh_runner.go:235] Completed: docker stop 94878c02897c 6e5a6ab1db37 ada79eb25afe 807b657d81c5 c6e309b2a141 1aa5eca48ed3 373afa3584ae 4b088e44e128 323901da5efd 764a0fa4725b 2bd1468d967e 0a00ef3151aa f9d1796ee3f0 b81f4ec7d260 c84f6577b84f b5b4e07d58df 5281523465d7 dfaf3b7ca0d1 5556e905b5fa 2e5eea8b121f 45b4271fe381 0668cd3a4224 8a62b9dea3ca: (5.192403298s)
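Reconfiguration begins by stopping every kube-system container in one batch, using the k8s_.*_(kube-system)_ name filter that kubelet's Docker container-naming scheme makes possible. The same two commands shelled out from Go, as a sketch:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List container IDs whose names match kubelet's k8s_<ctr>_<pod>_<ns>_
	// scheme in the kube-system namespace, as in the log above.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return
	}
	// Stop them all in a single invocation, matching the single "docker stop"
	// with 23 IDs in the log.
	args := append([]string{"stop"}, ids...)
	if err := exec.Command("docker", args...).Run(); err != nil {
		panic(err)
	}
	fmt.Printf("stopped %d containers\n", len(ids))
}
```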
I0307 18:42:59.232674 35395 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0307 18:42:59.290208 35395 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0307 18:42:59.300278 35395 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5639 Mar 7 18:41 /etc/kubernetes/admin.conf
-rw------- 1 root root 5657 Mar 7 18:41 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 1987 Mar 7 18:41 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5605 Mar 7 18:41 /etc/kubernetes/scheduler.conf
I0307 18:42:59.300359 35395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0307 18:42:59.310617 35395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0307 18:42:59.321514 35395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0307 18:42:59.332557 35395 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0307 18:42:59.332627 35395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0307 18:42:59.342793 35395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0307 18:42:59.354052 35395 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0307 18:42:59.354123 35395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0307 18:42:59.366131 35395 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0307 18:42:59.379121 35395 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0307 18:42:59.379146 35395 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0307 18:42:59.504225 35395 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0307 18:43:00.380017 35395 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0307 18:43:00.631885 35395 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0307 18:43:00.716850 35395 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
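Rather than a full kubeadm init, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml, exactly the five invocations above. A compact sketch of that sequence:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const (
		binDir = "/var/lib/minikube/binaries/v1.26.2"
		cfg    = "/var/tmp/minikube/kubeadm.yaml"
	)
	// The phase list mirrors the five kubeadm invocations in the log above.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", cfg)
		cmd := exec.Command(binDir+"/kubeadm", args...)
		if out, err := cmd.CombinedOutput(); err != nil {
			panic(fmt.Sprintf("kubeadm %v failed: %v\n%s", p, err, out))
		}
	}
	fmt.Println("control plane phases replayed")
}
```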
I0307 18:43:00.815259 35395 api_server.go:51] waiting for apiserver process to appear ...
I0307 18:43:00.815345 35395 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:43:00.844597 35395 api_server.go:71] duration metric: took 29.337411ms to wait for apiserver process to appear ...
I0307 18:43:00.844629 35395 api_server.go:87] waiting for apiserver healthz status ...
I0307 18:43:00.844642 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:43:05.844929 35395 api_server.go:268] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0307 18:43:06.345721 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:43:08.331961 35395 api_server.go:278] https://192.168.61.47:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0307 18:43:08.331991 35395 api_server.go:102] status: https://192.168.61.47:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0307 18:43:08.345106 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:43:08.419012 35395 api_server.go:278] https://192.168.61.47:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0307 18:43:08.419042 35395 api_server.go:102] status: https://192.168.61.47:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0307 18:43:08.845575 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:43:08.851385 35395 api_server.go:278] https://192.168.61.47:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0307 18:43:08.851415 35395 api_server.go:102] status: https://192.168.61.47:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0307 18:43:09.345055 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:43:09.357002 35395 api_server.go:278] https://192.168.61.47:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0307 18:43:09.357043 35395 api_server.go:102] status: https://192.168.61.47:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0307 18:43:09.845368 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:43:09.852655 35395 api_server.go:278] https://192.168.61.47:8443/healthz returned 200:
ok
I0307 18:43:09.864843 35395 api_server.go:140] control plane version: v1.26.2
I0307 18:43:09.864865 35395 api_server.go:130] duration metric: took 9.020231544s to wait for apiserver health ...
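The probes above walk through the apiserver's startup: 403 while requests are still anonymous (RBAC bootstrap roles not yet created), 500 while poststarthooks are failing, then 200 ok. A sketch of polling /healthz with the cluster CA from the log; note that without a client certificate a locked-down apiserver can keep answering 403, so treating any HTTP response as progress is an assumption here:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/15985-4059/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the 5s gaps between failed probes above
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}
	for i := 0; i < 60; i++ {
		resp, err := client.Get("https://192.168.61.47:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for healthz")
}
```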
I0307 18:43:09.864873 35395 cni.go:84] Creating CNI manager for ""
I0307 18:43:09.864883 35395 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0307 18:43:09.866920 35395 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0307 18:43:09.868328 35395 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0307 18:43:09.880058 35395 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
I0307 18:43:09.905795 35395 system_pods.go:43] waiting for kube-system pods to appear ...
I0307 18:43:09.917255 35395 system_pods.go:59] 6 kube-system pods found
I0307 18:43:09.917295 35395 system_pods.go:61] "coredns-787d4945fb-n77tj" [e63f9141-89ed-4e4d-b1aa-86ad76074f81] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0307 18:43:09.917306 35395 system_pods.go:61] "etcd-pause-763583" [1443cb3f-e768-40cb-8959-b07a77a9b089] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0307 18:43:09.917314 35395 system_pods.go:61] "kube-apiserver-pause-763583" [21663669-cac7-48c2-9107-e69979cee194] Running
I0307 18:43:09.917324 35395 system_pods.go:61] "kube-controller-manager-pause-763583" [e00cf98f-3435-4f3c-b91c-c00a0b794b06] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0307 18:43:09.917331 35395 system_pods.go:61] "kube-proxy-89rb5" [1976b181-14ab-48a2-bb64-2eb3b1ecf436] Running
I0307 18:43:09.917340 35395 system_pods.go:61] "kube-scheduler-pause-763583" [b495d084-0581-4e14-917f-e44a0bf077df] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0307 18:43:09.917348 35395 system_pods.go:74] duration metric: took 11.53195ms to wait for pod list to return data ...
I0307 18:43:09.917360 35395 node_conditions.go:102] verifying NodePressure condition ...
I0307 18:43:09.921137 35395 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0307 18:43:09.921167 35395 node_conditions.go:123] node cpu capacity is 2
I0307 18:43:09.921179 35395 node_conditions.go:105] duration metric: took 3.813699ms to run NodePressure ...
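node_conditions.go reads the node's capacity (ephemeral storage, CPU) and checks for pressure conditions before proceeding. A hedged client-go sketch reading the same fields; the kubeconfig path is taken from the log, and the pressure-condition printout is illustrative:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/15985-4059/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity values should match the log: 17784752Ki storage, 2 CPUs.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			// MemoryPressure/DiskPressure/PIDPressure should all be False
			// on a healthy node; Ready should be True.
			fmt.Printf("  %s=%s\n", c.Type, c.Status)
		}
	}
}
```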
I0307 18:43:09.921196 35395 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0307 18:43:10.361537 35395 kubeadm.go:769] waiting for restarted kubelet to initialise ...
I0307 18:43:10.372040 35395 kubeadm.go:784] kubelet initialised
I0307 18:43:10.372068 35395 kubeadm.go:785] duration metric: took 10.499059ms waiting for restarted kubelet to initialise ...
I0307 18:43:10.372079 35395 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0307 18:43:10.378716 35395 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-n77tj" in "kube-system" namespace to be "Ready" ...
I0307 18:43:12.397973 35395 pod_ready.go:102] pod "coredns-787d4945fb-n77tj" in "kube-system" namespace has status "Ready":"False"
I0307 18:43:13.893974 35395 pod_ready.go:92] pod "coredns-787d4945fb-n77tj" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:13.894011 35395 pod_ready.go:81] duration metric: took 3.515265891s waiting for pod "coredns-787d4945fb-n77tj" in "kube-system" namespace to be "Ready" ...
I0307 18:43:13.894023 35395 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:15.906677 35395 pod_ready.go:102] pod "etcd-pause-763583" in "kube-system" namespace has status "Ready":"False"
I0307 18:43:17.907680 35395 pod_ready.go:102] pod "etcd-pause-763583" in "kube-system" namespace has status "Ready":"False"
I0307 18:43:20.408465 35395 pod_ready.go:102] pod "etcd-pause-763583" in "kube-system" namespace has status "Ready":"False"
I0307 18:43:20.928474 35395 pod_ready.go:92] pod "etcd-pause-763583" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:20.928514 35395 pod_ready.go:81] duration metric: took 7.034481751s waiting for pod "etcd-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.928528 35395 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.941150 35395 pod_ready.go:92] pod "kube-apiserver-pause-763583" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:20.941180 35395 pod_ready.go:81] duration metric: took 12.642904ms waiting for pod "kube-apiserver-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.941195 35395 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.957243 35395 pod_ready.go:92] pod "kube-controller-manager-pause-763583" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:20.957270 35395 pod_ready.go:81] duration metric: took 16.065823ms waiting for pod "kube-controller-manager-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.957283 35395 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-89rb5" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.965242 35395 pod_ready.go:92] pod "kube-proxy-89rb5" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:20.965267 35395 pod_ready.go:81] duration metric: took 7.976082ms waiting for pod "kube-proxy-89rb5" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.965306 35395 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.975928 35395 pod_ready.go:92] pod "kube-scheduler-pause-763583" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:20.975957 35395 pod_ready.go:81] duration metric: took 10.639966ms waiting for pod "kube-scheduler-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.975967 35395 pod_ready.go:38] duration metric: took 10.603878883s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
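The pod_ready.go lines above poll each system-critical pod until its Ready condition reports True. A minimal client-go sketch of one such wait, using the deprecated-but-simple wait.PollImmediate; the pod name and kubeconfig path are copied from the log:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True, the same
// signal the pod_ready.go lines above poll for.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/15985-4059/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollImmediate(500*time.Millisecond, 4*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
			"etcd-pause-763583", metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling through transient errors
		}
		return podReady(pod), nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
```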
I0307 18:43:20.975987 35395 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0307 18:43:20.998476 35395 ops.go:34] apiserver oom_adj: -16
I0307 18:43:20.998505 35395 kubeadm.go:637] restartCluster took 57.365787501s
I0307 18:43:20.998514 35395 kubeadm.go:403] StartCluster complete in 57.398734635s
I0307 18:43:20.998566 35395 settings.go:142] acquiring lock: {Name:mk59ca7946d8ca96e1c1529d6dc9eeaf833467d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 18:43:20.998642 35395 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/15985-4059/kubeconfig
I0307 18:43:21.000304 35395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-4059/kubeconfig: {Name:mkdbb63ccb2062c9fe0a4f6a1ffae1d7c12177ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 18:43:21.001531 35395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0307 18:43:21.001623 35395 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
I0307 18:43:21.004334 35395 out.go:177] * Enabled addons:
I0307 18:43:21.002125 35395 kapi.go:59] client config for pause-763583: &rest.Config{Host:"https://192.168.61.47:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15985-4059/.minikube/profiles/pause-763583/client.crt", KeyFile:"/home/jenkins/minikube-integration/15985-4059/.minikube/profiles/pause-763583/client.key", CAFile:"/home/jenkins/minikube-integration/15985-4059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29a5480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0307 18:43:21.002293 35395 config.go:182] Loaded profile config "pause-763583": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0307 18:43:21.006591 35395 cache.go:107] acquiring lock: {Name:mk4b4b9e8ae74bfe37a64a243ec4cf9219f62ba4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0307 18:43:21.006679 35395 cache.go:115] /home/jenkins/minikube-integration/15985-4059/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
I0307 18:43:21.006696 35395 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/15985-4059/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 119.684µs
I0307 18:43:21.006712 35395 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/15985-4059/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
I0307 18:43:21.006718 35395 cache.go:87] Successfully saved all images to host disk.
I0307 18:43:21.006956 35395 config.go:182] Loaded profile config "pause-763583": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0307 18:43:21.006990 35395 addons.go:499] enable addons completed in 5.361604ms: enabled=[]
I0307 18:43:21.007421 35395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0307 18:43:21.007482 35395 main.go:141] libmachine: Launching plugin server for driver kvm2
I0307 18:43:21.014221 35395 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-763583" context rescaled to 1 replicas
I0307 18:43:21.014275 35395 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.47 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0307 18:43:21.016181 35395 out.go:177] * Verifying Kubernetes components...
I0307 18:43:21.018159 35395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0307 18:43:21.027697 35395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39769
I0307 18:43:21.028242 35395 main.go:141] libmachine: () Calling .GetVersion
I0307 18:43:21.029001 35395 main.go:141] libmachine: Using API Version 1
I0307 18:43:21.029022 35395 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 18:43:21.029419 35395 main.go:141] libmachine: () Calling .GetMachineName
I0307 18:43:21.029621 35395 main.go:141] libmachine: (pause-763583) Calling .GetState
I0307 18:43:21.032078 35395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0307 18:43:21.032110 35395 main.go:141] libmachine: Launching plugin server for driver kvm2
I0307 18:43:21.055351 35395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38507
I0307 18:43:21.056115 35395 main.go:141] libmachine: () Calling .GetVersion
I0307 18:43:21.056974 35395 main.go:141] libmachine: Using API Version 1
I0307 18:43:21.057002 35395 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 18:43:21.057408 35395 main.go:141] libmachine: () Calling .GetMachineName
I0307 18:43:21.057644 35395 main.go:141] libmachine: (pause-763583) Calling .DriverName
I0307 18:43:21.057948 35395 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0307 18:43:21.057985 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHHostname
I0307 18:43:21.061960 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:43:21.062537 35395 main.go:141] libmachine: (pause-763583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7e:f8", ip: ""} in network mk-pause-763583: {Iface:virbr3 ExpiryTime:2023-03-07 19:40:49 +0000 UTC Type:0 Mac:52:54:00:7d:7e:f8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:pause-763583 Clientid:01:52:54:00:7d:7e:f8}
I0307 18:43:21.062562 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined IP address 192.168.61.47 and MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:43:21.062887 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHPort
I0307 18:43:21.064168 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHKeyPath
I0307 18:43:21.064368 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHUsername
I0307 18:43:21.064473 35395 sshutil.go:53] new ssh client: &{IP:192.168.61.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15985-4059/.minikube/machines/pause-763583/id_rsa Username:docker}
I0307 18:43:21.258566 35395 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0307 18:43:21.258560 35395 node_ready.go:35] waiting up to 6m0s for node "pause-763583" to be "Ready" ...
I0307 18:43:21.258634 35395 docker.go:630] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0307 18:43:21.258659 35395 cache_images.go:84] Images are preloaded, skipping loading
I0307 18:43:21.258669 35395 cache_images.go:262] succeeded pushing to: pause-763583
I0307 18:43:21.258676 35395 cache_images.go:263] failed pushing to:
I0307 18:43:21.258695 35395 main.go:141] libmachine: Making call to close driver server
I0307 18:43:21.258709 35395 main.go:141] libmachine: (pause-763583) Calling .Close
I0307 18:43:21.259058 35395 main.go:141] libmachine: Successfully made call to close driver server
I0307 18:43:21.259078 35395 main.go:141] libmachine: Making call to close connection to plugin binary
I0307 18:43:21.259092 35395 main.go:141] libmachine: Making call to close driver server
I0307 18:43:21.259099 35395 main.go:141] libmachine: (pause-763583) Calling .Close
I0307 18:43:21.259506 35395 main.go:141] libmachine: Successfully made call to close driver server
I0307 18:43:21.259526 35395 main.go:141] libmachine: Making call to close connection to plugin binary
I0307 18:43:21.262793 35395 node_ready.go:49] node "pause-763583" has status "Ready":"True"
I0307 18:43:21.262815 35395 node_ready.go:38] duration metric: took 4.223701ms waiting for node "pause-763583" to be "Ready" ...
I0307 18:43:21.262826 35395 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0307 18:43:21.312065 35395 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-n77tj" in "kube-system" namespace to be "Ready" ...
I0307 18:43:21.703398 35395 pod_ready.go:92] pod "coredns-787d4945fb-n77tj" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:21.703420 35395 pod_ready.go:81] duration metric: took 391.325074ms waiting for pod "coredns-787d4945fb-n77tj" in "kube-system" namespace to be "Ready" ...
I0307 18:43:21.703430 35395 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:22.102894 35395 pod_ready.go:92] pod "etcd-pause-763583" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:22.102915 35395 pod_ready.go:81] duration metric: took 399.479275ms waiting for pod "etcd-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:22.102924 35395 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:22.503718 35395 pod_ready.go:92] pod "kube-apiserver-pause-763583" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:22.503740 35395 pod_ready.go:81] duration metric: took 400.810203ms waiting for pod "kube-apiserver-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:22.503753 35395 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:22.904142 35395 pod_ready.go:92] pod "kube-controller-manager-pause-763583" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:22.904164 35395 pod_ready.go:81] duration metric: took 400.403865ms waiting for pod "kube-controller-manager-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:22.904174 35395 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-89rb5" in "kube-system" namespace to be "Ready" ...
I0307 18:43:23.303244 35395 pod_ready.go:92] pod "kube-proxy-89rb5" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:23.303263 35395 pod_ready.go:81] duration metric: took 399.083446ms waiting for pod "kube-proxy-89rb5" in "kube-system" namespace to be "Ready" ...
I0307 18:43:23.303278 35395 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:23.704268 35395 pod_ready.go:92] pod "kube-scheduler-pause-763583" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:23.704288 35395 pod_ready.go:81] duration metric: took 401.005104ms waiting for pod "kube-scheduler-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:23.704295 35395 pod_ready.go:38] duration metric: took 2.441458878s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0307 18:43:23.704311 35395 api_server.go:51] waiting for apiserver process to appear ...
I0307 18:43:23.704349 35395 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:43:23.716827 35395 api_server.go:71] duration metric: took 2.702510753s to wait for apiserver process to appear ...
I0307 18:43:23.716856 35395 api_server.go:87] waiting for apiserver healthz status ...
I0307 18:43:23.716868 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:43:23.723857 35395 api_server.go:278] https://192.168.61.47:8443/healthz returned 200:
ok
I0307 18:43:23.724750 35395 api_server.go:140] control plane version: v1.26.2
I0307 18:43:23.724768 35395 api_server.go:130] duration metric: took 7.905622ms to wait for apiserver health ...
I0307 18:43:23.724778 35395 system_pods.go:43] waiting for kube-system pods to appear ...
I0307 18:43:23.906704 35395 system_pods.go:59] 6 kube-system pods found
I0307 18:43:23.906737 35395 system_pods.go:61] "coredns-787d4945fb-n77tj" [e63f9141-89ed-4e4d-b1aa-86ad76074f81] Running
I0307 18:43:23.906745 35395 system_pods.go:61] "etcd-pause-763583" [1443cb3f-e768-40cb-8959-b07a77a9b089] Running
I0307 18:43:23.906752 35395 system_pods.go:61] "kube-apiserver-pause-763583" [21663669-cac7-48c2-9107-e69979cee194] Running
I0307 18:43:23.906759 35395 system_pods.go:61] "kube-controller-manager-pause-763583" [e00cf98f-3435-4f3c-b91c-c00a0b794b06] Running
I0307 18:43:23.906766 35395 system_pods.go:61] "kube-proxy-89rb5" [1976b181-14ab-48a2-bb64-2eb3b1ecf436] Running
I0307 18:43:23.906773 35395 system_pods.go:61] "kube-scheduler-pause-763583" [b495d084-0581-4e14-917f-e44a0bf077df] Running
I0307 18:43:23.906785 35395 system_pods.go:74] duration metric: took 182.000313ms to wait for pod list to return data ...
I0307 18:43:23.906799 35395 default_sa.go:34] waiting for default service account to be created ...
I0307 18:43:24.103036 35395 default_sa.go:45] found service account: "default"
I0307 18:43:24.103058 35395 default_sa.go:55] duration metric: took 196.253509ms for default service account to be created ...
I0307 18:43:24.103066 35395 system_pods.go:116] waiting for k8s-apps to be running ...
I0307 18:43:24.305992 35395 system_pods.go:86] 6 kube-system pods found
I0307 18:43:24.306020 35395 system_pods.go:89] "coredns-787d4945fb-n77tj" [e63f9141-89ed-4e4d-b1aa-86ad76074f81] Running
I0307 18:43:24.306025 35395 system_pods.go:89] "etcd-pause-763583" [1443cb3f-e768-40cb-8959-b07a77a9b089] Running
I0307 18:43:24.306029 35395 system_pods.go:89] "kube-apiserver-pause-763583" [21663669-cac7-48c2-9107-e69979cee194] Running
I0307 18:43:24.306033 35395 system_pods.go:89] "kube-controller-manager-pause-763583" [e00cf98f-3435-4f3c-b91c-c00a0b794b06] Running
I0307 18:43:24.306038 35395 system_pods.go:89] "kube-proxy-89rb5" [1976b181-14ab-48a2-bb64-2eb3b1ecf436] Running
I0307 18:43:24.306042 35395 system_pods.go:89] "kube-scheduler-pause-763583" [b495d084-0581-4e14-917f-e44a0bf077df] Running
I0307 18:43:24.306050 35395 system_pods.go:126] duration metric: took 202.978873ms to wait for k8s-apps to be running ...
I0307 18:43:24.306059 35395 system_svc.go:44] waiting for kubelet service to be running ....
I0307 18:43:24.306109 35395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0307 18:43:24.321158 35395 system_svc.go:56] duration metric: took 15.087082ms WaitForService to wait for kubelet.
I0307 18:43:24.321187 35395 kubeadm.go:578] duration metric: took 3.306875345s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0307 18:43:24.321209 35395 node_conditions.go:102] verifying NodePressure condition ...
I0307 18:43:24.505220 35395 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0307 18:43:24.505243 35395 node_conditions.go:123] node cpu capacity is 2
I0307 18:43:24.505252 35395 node_conditions.go:105] duration metric: took 184.038448ms to run NodePressure ...
I0307 18:43:24.505262 35395 start.go:228] waiting for startup goroutines ...
I0307 18:43:24.505268 35395 start.go:233] waiting for cluster config update ...
I0307 18:43:24.505274 35395 start.go:242] writing updated cluster config ...
I0307 18:43:24.505561 35395 ssh_runner.go:195] Run: rm -f paused
I0307 18:43:24.559116 35395 start.go:555] kubectl: 1.26.2, cluster: 1.26.2 (minor skew: 0)
I0307 18:43:24.561247 35395 out.go:177] * Done! kubectl is now configured to use "pause-763583" cluster and "default" namespace by default
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p pause-763583 -n pause-763583
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p pause-763583 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-763583 logs -n 25: (1.274652841s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs:
-- stdout --
*
* ==> Audit <==
* |---------|------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| start | -p gvisor-579626 --memory=2200 | gvisor-579626 | jenkins | v1.29.0 | 07 Mar 23 18:40 UTC | 07 Mar 23 18:43 UTC |
| | --container-runtime=containerd --docker-opt | | | | | |
| | containerd=/var/run/containerd/containerd.sock | | | | | |
| | --driver=kvm2 | | | | | |
| start | -p NoKubernetes-015933 | NoKubernetes-015933 | jenkins | v1.29.0 | 07 Mar 23 18:40 UTC | 07 Mar 23 18:41 UTC |
| | --no-kubernetes --driver=kvm2 | | | | | |
| delete | -p NoKubernetes-015933 | NoKubernetes-015933 | jenkins | v1.29.0 | 07 Mar 23 18:41 UTC | 07 Mar 23 18:41 UTC |
| start | -p NoKubernetes-015933 | NoKubernetes-015933 | jenkins | v1.29.0 | 07 Mar 23 18:41 UTC | 07 Mar 23 18:42 UTC |
| | --no-kubernetes --driver=kvm2 | | | | | |
| start | -p pause-763583 | pause-763583 | jenkins | v1.29.0 | 07 Mar 23 18:41 UTC | 07 Mar 23 18:43 UTC |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=kvm2 | | | | | |
| ssh | -p NoKubernetes-015933 sudo | NoKubernetes-015933 | jenkins | v1.29.0 | 07 Mar 23 18:42 UTC | |
| | systemctl is-active --quiet | | | | | |
| | service kubelet | | | | | |
| start | -p cert-expiration-147721 | cert-expiration-147721 | jenkins | v1.29.0 | 07 Mar 23 18:42 UTC | |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=kvm2 | | | | | |
| stop | -p NoKubernetes-015933 | NoKubernetes-015933 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | 07 Mar 23 18:43 UTC |
| start | -p NoKubernetes-015933 | NoKubernetes-015933 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | --driver=kvm2 | | | | | |
| delete | -p gvisor-579626 | gvisor-579626 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | 07 Mar 23 18:43 UTC |
| ssh | -p cilium-114236 sudo cat | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | /etc/nsswitch.conf | | | | | |
| ssh | -p cilium-114236 sudo cat | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | /etc/hosts | | | | | |
| ssh | -p cilium-114236 sudo cat | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | /etc/resolv.conf | | | | | |
| ssh | -p cilium-114236 sudo crictl | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | pods | | | | | |
| ssh | -p cilium-114236 sudo crictl | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | ps --all | | | | | |
| ssh | -p cilium-114236 sudo find | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | /etc/cni -type f -exec sh -c | | | | | |
| | 'echo {}; cat {}' \; | | | | | |
| ssh | -p cilium-114236 sudo ip a s | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| ssh | -p cilium-114236 sudo ip r s | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| ssh | -p cilium-114236 sudo | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | iptables-save | | | | | |
| ssh | -p cilium-114236 sudo iptables | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | -t nat -L -n -v | | | | | |
| ssh | -p cilium-114236 sudo | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | systemctl status kubelet --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p cilium-114236 sudo | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | systemctl cat kubelet | | | | | |
| | --no-pager | | | | | |
| ssh | -p cilium-114236 sudo | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | journalctl -xeu kubelet --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p cilium-114236 sudo cat | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | /etc/kubernetes/kubelet.conf | | | | | |
| ssh | -p cilium-114236 sudo cat | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | /var/lib/kubelet/config.yaml | | | | | |
|---------|------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/03/07 18:43:05
Running on machine: ubuntu-20-agent-7
Binary: Built with gc go1.20.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0307 18:43:05.371585 35824 out.go:296] Setting OutFile to fd 1 ...
I0307 18:43:05.371756 35824 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0307 18:43:05.371759 35824 out.go:309] Setting ErrFile to fd 2...
I0307 18:43:05.371763 35824 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0307 18:43:05.371866 35824 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15985-4059/.minikube/bin
I0307 18:43:05.372454 35824 out.go:303] Setting JSON to false
I0307 18:43:05.373403 35824 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5138,"bootTime":1678209448,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0307 18:43:05.373476 35824 start.go:135] virtualization: kvm guest
I0307 18:43:05.376845 35824 out.go:177] * [NoKubernetes-015933] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
I0307 18:43:05.378361 35824 out.go:177] - MINIKUBE_LOCATION=15985
I0307 18:43:05.378425 35824 notify.go:220] Checking for updates...
I0307 18:43:05.379782 35824 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0307 18:43:05.381143 35824 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15985-4059/kubeconfig
I0307 18:43:05.382526 35824 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-4059/.minikube
I0307 18:43:05.383873 35824 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0307 18:43:05.385287 35824 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0307 18:43:05.387069 35824 config.go:182] Loaded profile config "NoKubernetes-015933": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
I0307 18:43:05.387659 35824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0307 18:43:05.387714 35824 main.go:141] libmachine: Launching plugin server for driver kvm2
I0307 18:43:05.407372 35824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34093
I0307 18:43:05.407784 35824 main.go:141] libmachine: () Calling .GetVersion
I0307 18:43:05.408457 35824 main.go:141] libmachine: Using API Version 1
I0307 18:43:05.408478 35824 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 18:43:05.408795 35824 main.go:141] libmachine: () Calling .GetMachineName
I0307 18:43:05.408996 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .DriverName
I0307 18:43:05.409176 35824 start.go:1652] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
I0307 18:43:05.409213 35824 driver.go:365] Setting default libvirt URI to qemu:///system
I0307 18:43:05.409657 35824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0307 18:43:05.409696 35824 main.go:141] libmachine: Launching plugin server for driver kvm2
I0307 18:43:05.424475 35824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35675
I0307 18:43:05.424900 35824 main.go:141] libmachine: () Calling .GetVersion
I0307 18:43:05.425494 35824 main.go:141] libmachine: Using API Version 1
I0307 18:43:05.425520 35824 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 18:43:05.425845 35824 main.go:141] libmachine: () Calling .GetMachineName
I0307 18:43:05.426029 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .DriverName
I0307 18:43:05.464875 35824 out.go:177] * Using the kvm2 driver based on existing profile
I0307 18:43:05.466733 35824 start.go:296] selected driver: kvm2
I0307 18:43:05.466741 35824 start.go:857] validating driver "kvm2" against &{Name:NoKubernetes-015933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-015933 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.31 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0307 18:43:05.466877 35824 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0307 18:43:05.467160 35824 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0307 18:43:05.467237 35824 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15985-4059/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0307 18:43:05.483029 35824 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.29.0
I0307 18:43:05.484097 35824 cni.go:84] Creating CNI manager for ""
I0307 18:43:05.484123 35824 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0307 18:43:05.484134 35824 start_flags.go:319] config:
{Name:NoKubernetes-015933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-015933 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.31 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0307 18:43:05.484298 35824 iso.go:125] acquiring lock: {Name:mkf75c329a61b8189e3f3e4bd561d5125dafa20c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0307 18:43:05.486734 35824 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-015933
I0307 18:43:05.488359 35824 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime docker
W0307 18:43:05.518932 35824 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-docker-overlay2-amd64.tar.lz4 status code: 404
I0307 18:43:05.519140 35824 profile.go:148] Saving config to /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/NoKubernetes-015933/config.json ...
I0307 18:43:05.519434 35824 cache.go:193] Successfully downloaded all kic artifacts
I0307 18:43:05.519476 35824 start.go:364] acquiring machines lock for NoKubernetes-015933: {Name:mkdc620a3744ce597744f8ea42dba23b3f56e106 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0307 18:43:05.519550 35824 start.go:368] acquired machines lock for "NoKubernetes-015933" in 47.24µs
I0307 18:43:05.519567 35824 start.go:96] Skipping create...Using existing machine configuration
I0307 18:43:05.519572 35824 fix.go:55] fixHost starting:
I0307 18:43:05.519930 35824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0307 18:43:05.519977 35824 main.go:141] libmachine: Launching plugin server for driver kvm2
I0307 18:43:05.534522 35824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42689
I0307 18:43:05.535027 35824 main.go:141] libmachine: () Calling .GetVersion
I0307 18:43:05.535599 35824 main.go:141] libmachine: Using API Version 1
I0307 18:43:05.535613 35824 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 18:43:05.535938 35824 main.go:141] libmachine: () Calling .GetMachineName
I0307 18:43:05.536115 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .DriverName
I0307 18:43:05.536284 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .GetState
I0307 18:43:05.538277 35824 fix.go:103] recreateIfNeeded on NoKubernetes-015933: state=Stopped err=<nil>
I0307 18:43:05.538297 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .DriverName
W0307 18:43:05.538459 35824 fix.go:129] unexpected machine state, will restart: <nil>
I0307 18:43:05.540765 35824 out.go:177] * Restarting existing kvm2 VM for "NoKubernetes-015933" ...
I0307 18:43:06.001163 34732 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/gvisor-addon_2: (5.969995878s)
I0307 18:43:06.001181 34732 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15985-4059/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 from cache
I0307 18:43:06.001209 34732 cache_images.go:123] Successfully loaded all cached images
I0307 18:43:06.001218 34732 cache_images.go:92] LoadImages completed in 7.201299799s
I0307 18:43:06.001224 34732 cache_images.go:262] succeeded pushing to: gvisor-579626
I0307 18:43:06.001228 34732 cache_images.go:263] failed pushing to:
I0307 18:43:06.001250 34732 main.go:141] libmachine: Making call to close driver server
I0307 18:43:06.001260 34732 main.go:141] libmachine: (gvisor-579626) Calling .Close
I0307 18:43:06.001545 34732 main.go:141] libmachine: Successfully made call to close driver server
I0307 18:43:06.001566 34732 main.go:141] libmachine: Making call to close connection to plugin binary
I0307 18:43:06.001577 34732 main.go:141] libmachine: Making call to close driver server
I0307 18:43:06.001586 34732 main.go:141] libmachine: (gvisor-579626) Calling .Close
I0307 18:43:06.001783 34732 main.go:141] libmachine: Successfully made call to close driver server
I0307 18:43:06.001801 34732 main.go:141] libmachine: Making call to close connection to plugin binary
I0307 18:43:06.001818 34732 start.go:233] waiting for cluster config update ...
I0307 18:43:06.001830 34732 start.go:242] writing updated cluster config ...
I0307 18:43:06.002146 34732 ssh_runner.go:195] Run: rm -f paused
I0307 18:43:06.065245 34732 start.go:555] kubectl: 1.26.2, cluster: 1.26.2 (minor skew: 0)
I0307 18:43:06.067757 34732 out.go:177] * Done! kubectl is now configured to use "gvisor-579626" cluster and "default" namespace by default
I0307 18:43:05.844929 35395 api_server.go:268] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0307 18:43:06.345721 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:43:05.939340 35647 ssh_runner.go:235] Completed: sudo systemctl restart docker: (8.80125487s)
I0307 18:43:05.939404 35647 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0307 18:43:06.065297 35647 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0307 18:43:06.193363 35647 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0307 18:43:06.332683 35647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 18:43:06.469120 35647 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0307 18:43:06.495698 35647 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0307 18:43:06.495762 35647 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
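[annotation] The 60s wait above is a simple existence poll: minikube repeatedly stats /var/run/cri-dockerd.sock on the guest until it appears or the deadline passes. A local Go sketch of the same pattern (the path and poll interval are illustrative, not minikube's exact values):
// sketch: wait for a socket path to appear, with a deadline.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // path exists
		}
		time.Sleep(500 * time.Millisecond) // illustrative interval
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}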
I0307 18:43:06.505061 35647 start.go:553] Will wait 60s for crictl version
I0307 18:43:06.505123 35647 ssh_runner.go:195] Run: which crictl
I0307 18:43:06.510228 35647 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0307 18:43:06.627109 35647 start.go:569] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.23
RuntimeApiVersion: v1alpha2
I0307 18:43:06.627173 35647 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0307 18:43:06.670729 35647 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0307 18:43:06.716707 35647 out.go:204] * Preparing Kubernetes v1.26.2 on Docker 20.10.23 ...
I0307 18:43:06.716766 35647 main.go:141] libmachine: (cert-expiration-147721) Calling .GetIP
I0307 18:43:06.719717 35647 main.go:141] libmachine: (cert-expiration-147721) DBG | domain cert-expiration-147721 has defined MAC address 52:54:00:6a:e4:fa in network mk-cert-expiration-147721
I0307 18:43:06.720084 35647 main.go:141] libmachine: (cert-expiration-147721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:e4:fa", ip: ""} in network mk-cert-expiration-147721: {Iface:virbr4 ExpiryTime:2023-03-07 19:39:04 +0000 UTC Type:0 Mac:52:54:00:6a:e4:fa Iaid: IPaddr:192.168.72.251 Prefix:24 Hostname:cert-expiration-147721 Clientid:01:52:54:00:6a:e4:fa}
I0307 18:43:06.720107 35647 main.go:141] libmachine: (cert-expiration-147721) DBG | domain cert-expiration-147721 has defined IP address 192.168.72.251 and MAC address 52:54:00:6a:e4:fa in network mk-cert-expiration-147721
I0307 18:43:06.720307 35647 ssh_runner.go:195] Run: grep 192.168.72.1 host.minikube.internal$ /etc/hosts
I0307 18:43:06.726347 35647 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
I0307 18:43:06.726416 35647 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0307 18:43:06.755996 35647 docker.go:630] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0307 18:43:06.756010 35647 docker.go:560] Images already preloaded, skipping extraction
I0307 18:43:06.756073 35647 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0307 18:43:06.787617 35647 docker.go:630] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0307 18:43:06.787630 35647 cache_images.go:84] Images are preloaded, skipping loading
I0307 18:43:06.787697 35647 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0307 18:43:06.828544 35647 cni.go:84] Creating CNI manager for ""
I0307 18:43:06.828567 35647 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0307 18:43:06.828577 35647 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0307 18:43:06.828595 35647 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.251 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-147721 NodeName:cert-expiration-147721 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0307 18:43:06.828772 35647 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.72.251
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "cert-expiration-147721"
kubeletExtraArgs:
node-ip: 192.168.72.251
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.72.251"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0307 18:43:06.828862 35647 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=cert-expiration-147721 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.251
[Install]
config:
{KubernetesVersion:v1.26.2 ClusterName:cert-expiration-147721 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0307 18:43:06.828953 35647 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
I0307 18:43:06.841781 35647 binaries.go:44] Found k8s binaries, skipping transfer
I0307 18:43:06.841850 35647 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0307 18:43:06.851383 35647 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (456 bytes)
I0307 18:43:06.869504 35647 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0307 18:43:06.887699 35647 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
I0307 18:43:06.907514 35647 ssh_runner.go:195] Run: grep 192.168.72.251 control-plane.minikube.internal$ /etc/hosts
I0307 18:43:06.911814 35647 certs.go:56] Setting up /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721 for IP: 192.168.72.251
I0307 18:43:06.911839 35647 certs.go:186] acquiring lock for shared ca certs: {Name:mk09f52d1213ecfb949f8e2d1f9b4b7cd7194c22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 18:43:06.912023 35647 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15985-4059/.minikube/ca.key
I0307 18:43:06.912090 35647 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15985-4059/.minikube/proxy-client-ca.key
W0307 18:43:06.912259 35647 out.go:239] ! Certificate client.crt has expired. Generating a new one...
I0307 18:43:06.912285 35647 certs.go:540] cert expired /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/client.crt: expiration: 2023-03-07 18:42:27 +0000 UTC, now: 2023-03-07 18:43:06.912279559 +0000 UTC m=+12.649020806
I0307 18:43:06.912412 35647 certs.go:315] generating minikube-user signed cert: /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/client.key
I0307 18:43:06.912430 35647 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/client.crt with IP's: []
I0307 18:43:07.238692 35647 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/client.crt ...
I0307 18:43:07.238705 35647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/client.crt: {Name:mk7dd6d137a9fac9aa9dc5b8ed2cee5115af5368 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 18:43:07.238877 35647 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/client.key ...
I0307 18:43:07.238885 35647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/client.key: {Name:mk3e0e628efa5eede096b1091f5bbf3375f267b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
W0307 18:43:07.239165 35647 out.go:239] ! Certificate apiserver.crt.68790b64 has expired. Generating a new one...
I0307 18:43:07.239193 35647 certs.go:540] cert expired /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/apiserver.crt.68790b64: expiration: 2023-03-07 18:42:28 +0000 UTC, now: 2023-03-07 18:43:07.239186072 +0000 UTC m=+12.975927317
I0307 18:43:07.239304 35647 certs.go:315] generating minikube signed cert: /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/apiserver.key.68790b64
I0307 18:43:07.239316 35647 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/apiserver.crt.68790b64 with IP's: [192.168.72.251 10.96.0.1 127.0.0.1 10.0.0.1]
I0307 18:43:07.393175 35647 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/apiserver.crt.68790b64 ...
I0307 18:43:07.393188 35647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/apiserver.crt.68790b64: {Name:mk35e086ffb5fbed77fb9b8f548e75dd765b6d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 18:43:07.393315 35647 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/apiserver.key.68790b64 ...
I0307 18:43:07.393327 35647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/apiserver.key.68790b64: {Name:mkf94da4452cbc1aea028a2f24482af737c1be79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 18:43:07.393386 35647 certs.go:333] copying /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/apiserver.crt.68790b64 -> /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/apiserver.crt
I0307 18:43:07.393548 35647 certs.go:337] copying /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/apiserver.key.68790b64 -> /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/apiserver.key
W0307 18:43:07.393782 35647 out.go:239] ! Certificate proxy-client.crt has expired. Generating a new one...
I0307 18:43:07.393803 35647 certs.go:540] cert expired /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/proxy-client.crt: expiration: 2023-03-07 18:42:28 +0000 UTC, now: 2023-03-07 18:43:07.393798378 +0000 UTC m=+13.130539625
I0307 18:43:07.393879 35647 certs.go:315] generating aggregator signed cert: /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/proxy-client.key
I0307 18:43:07.393889 35647 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/proxy-client.crt with IP's: []
I0307 18:43:07.624908 35647 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/proxy-client.crt ...
I0307 18:43:07.624924 35647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/proxy-client.crt: {Name:mkb44c5bbc9dd5fd01183cdbb904d2b334c279c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 18:43:07.625080 35647 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/proxy-client.key ...
I0307 18:43:07.625089 35647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/proxy-client.key: {Name:mk9bc3c6f41698598c96e2c121ea1e42e977e6ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
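[annotation] The three warnings above come from minikube comparing each profile certificate's NotAfter against the current time and regenerating whatever has lapsed, which is exactly the behavior this cert-expiration test provokes. A minimal Go sketch of such an expiry check (the file name is illustrative):
// sketch: report whether a PEM-encoded certificate has expired.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("client.crt") // illustrative path
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("expires %s, expired=%v\n", cert.NotAfter, time.Now().After(cert.NotAfter))
}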
I0307 18:43:07.625296 35647 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4059/.minikube/certs/home/jenkins/minikube-integration/15985-4059/.minikube/certs/11114.pem (1338 bytes)
W0307 18:43:07.625334 35647 certs.go:397] ignoring /home/jenkins/minikube-integration/15985-4059/.minikube/certs/home/jenkins/minikube-integration/15985-4059/.minikube/certs/11114_empty.pem, impossibly tiny 0 bytes
I0307 18:43:07.625340 35647 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4059/.minikube/certs/home/jenkins/minikube-integration/15985-4059/.minikube/certs/ca-key.pem (1675 bytes)
I0307 18:43:07.625361 35647 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4059/.minikube/certs/home/jenkins/minikube-integration/15985-4059/.minikube/certs/ca.pem (1078 bytes)
I0307 18:43:07.625380 35647 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4059/.minikube/certs/home/jenkins/minikube-integration/15985-4059/.minikube/certs/cert.pem (1123 bytes)
I0307 18:43:07.625399 35647 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4059/.minikube/certs/home/jenkins/minikube-integration/15985-4059/.minikube/certs/key.pem (1675 bytes)
I0307 18:43:07.625437 35647 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4059/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15985-4059/.minikube/files/etc/ssl/certs/111142.pem (1708 bytes)
I0307 18:43:07.625991 35647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0307 18:43:07.654818 35647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0307 18:43:07.680164 35647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0307 18:43:07.710330 35647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0307 18:43:07.739095 35647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0307 18:43:07.764762 35647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0307 18:43:07.795399 35647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0307 18:43:07.820196 35647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0307 18:43:07.847252 35647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/certs/11114.pem --> /usr/share/ca-certificates/11114.pem (1338 bytes)
I0307 18:43:07.873684 35647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/files/etc/ssl/certs/111142.pem --> /usr/share/ca-certificates/111142.pem (1708 bytes)
I0307 18:43:07.900738 35647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0307 18:43:07.935826 35647 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0307 18:43:07.953838 35647 ssh_runner.go:195] Run: openssl version
I0307 18:43:07.960262 35647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0307 18:43:07.972972 35647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0307 18:43:07.978184 35647 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 7 18:02 /usr/share/ca-certificates/minikubeCA.pem
I0307 18:43:07.978257 35647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0307 18:43:07.984561 35647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0307 18:43:07.994205 35647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11114.pem && ln -fs /usr/share/ca-certificates/11114.pem /etc/ssl/certs/11114.pem"
I0307 18:43:08.004804 35647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11114.pem
I0307 18:43:08.010140 35647 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 7 18:06 /usr/share/ca-certificates/11114.pem
I0307 18:43:08.010190 35647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11114.pem
I0307 18:43:08.016237 35647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11114.pem /etc/ssl/certs/51391683.0"
I0307 18:43:08.025251 35647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111142.pem && ln -fs /usr/share/ca-certificates/111142.pem /etc/ssl/certs/111142.pem"
I0307 18:43:08.035839 35647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111142.pem
I0307 18:43:08.041071 35647 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 7 18:06 /usr/share/ca-certificates/111142.pem
I0307 18:43:08.041115 35647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111142.pem
I0307 18:43:08.047366 35647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111142.pem /etc/ssl/certs/3ec20f2e.0"
I0307 18:43:08.066653 35647 kubeadm.go:401] StartCluster: {Name:cert-expiration-147721 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:cert-expiration-147721 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.251 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0307 18:43:08.066797 35647 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0307 18:43:08.181439 35647 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0307 18:43:08.213439 35647 kubeadm.go:416] found existing configuration files, will attempt cluster restart
I0307 18:43:08.213450 35647 kubeadm.go:633] restartCluster start
I0307 18:43:08.213506 35647 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0307 18:43:08.251497 35647 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0307 18:43:08.252487 35647 kubeconfig.go:92] found "cert-expiration-147721" server: "https://192.168.72.251:8443"
I0307 18:43:08.255052 35647 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0307 18:43:08.269434 35647 api_server.go:165] Checking apiserver status ...
I0307 18:43:08.269493 35647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:43:08.319926 35647 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:43:08.820581 35647 api_server.go:165] Checking apiserver status ...
I0307 18:43:08.820650 35647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:43:08.841340 35647 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:43:08.331961 35395 api_server.go:278] https://192.168.61.47:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0307 18:43:08.331991 35395 api_server.go:102] status: https://192.168.61.47:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0307 18:43:08.345106 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:43:08.419012 35395 api_server.go:278] https://192.168.61.47:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0307 18:43:08.419042 35395 api_server.go:102] status: https://192.168.61.47:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0307 18:43:08.845575 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:43:08.851385 35395 api_server.go:278] https://192.168.61.47:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0307 18:43:08.851415 35395 api_server.go:102] status: https://192.168.61.47:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0307 18:43:09.345055 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:43:09.357002 35395 api_server.go:278] https://192.168.61.47:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0307 18:43:09.357043 35395 api_server.go:102] status: https://192.168.61.47:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0307 18:43:09.845368 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:43:09.852655 35395 api_server.go:278] https://192.168.61.47:8443/healthz returned 200:
ok
I0307 18:43:09.864843 35395 api_server.go:140] control plane version: v1.26.2
I0307 18:43:09.864865 35395 api_server.go:130] duration metric: took 9.020231544s to wait for apiserver health ...
I0307 18:43:09.864873 35395 cni.go:84] Creating CNI manager for ""
I0307 18:43:09.864883 35395 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0307 18:43:09.866920 35395 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0307 18:43:05.542221 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .Start
I0307 18:43:05.542407 35824 main.go:141] libmachine: (NoKubernetes-015933) Ensuring networks are active...
I0307 18:43:05.543087 35824 main.go:141] libmachine: (NoKubernetes-015933) Ensuring network default is active
I0307 18:43:05.543533 35824 main.go:141] libmachine: (NoKubernetes-015933) Ensuring network mk-NoKubernetes-015933 is active
I0307 18:43:05.543944 35824 main.go:141] libmachine: (NoKubernetes-015933) Getting domain xml...
I0307 18:43:05.544794 35824 main.go:141] libmachine: (NoKubernetes-015933) Creating domain...
I0307 18:43:06.993870 35824 main.go:141] libmachine: (NoKubernetes-015933) Waiting to get IP...
I0307 18:43:06.994827 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:06.995260 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | unable to find current IP address of domain NoKubernetes-015933 in network mk-NoKubernetes-015933
I0307 18:43:06.995381 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | I0307 18:43:06.995259 35858 retry.go:31] will retry after 236.851886ms: waiting for machine to come up
I0307 18:43:07.233789 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:07.234431 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | unable to find current IP address of domain NoKubernetes-015933 in network mk-NoKubernetes-015933
I0307 18:43:07.234452 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | I0307 18:43:07.234380 35858 retry.go:31] will retry after 278.375019ms: waiting for machine to come up
I0307 18:43:07.515011 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:07.515526 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | unable to find current IP address of domain NoKubernetes-015933 in network mk-NoKubernetes-015933
I0307 18:43:07.515549 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | I0307 18:43:07.515494 35858 retry.go:31] will retry after 400.10884ms: waiting for machine to come up
I0307 18:43:07.919862 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:07.920356 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | unable to find current IP address of domain NoKubernetes-015933 in network mk-NoKubernetes-015933
I0307 18:43:07.920379 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | I0307 18:43:07.920282 35858 retry.go:31] will retry after 473.496382ms: waiting for machine to come up
I0307 18:43:08.394991 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:08.395902 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | unable to find current IP address of domain NoKubernetes-015933 in network mk-NoKubernetes-015933
I0307 18:43:08.395925 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | I0307 18:43:08.395747 35858 retry.go:31] will retry after 718.678081ms: waiting for machine to come up
I0307 18:43:09.116025 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:09.116516 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | unable to find current IP address of domain NoKubernetes-015933 in network mk-NoKubernetes-015933
I0307 18:43:09.116534 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | I0307 18:43:09.116467 35858 retry.go:31] will retry after 712.04316ms: waiting for machine to come up
I0307 18:43:09.830438 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:09.831101 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | unable to find current IP address of domain NoKubernetes-015933 in network mk-NoKubernetes-015933
I0307 18:43:09.831116 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | I0307 18:43:09.831034 35858 retry.go:31] will retry after 815.034437ms: waiting for machine to come up
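
The lines tagged (NoKubernetes-015933) belong to a different test binary (pid 35824) provisioning its own KVM machine in parallel; its retry.go:31 messages show a grow-and-jitter polling pattern while libvirt waits for the VM to pick up a DHCP lease. A minimal Go sketch of that pattern follows; the helper name and backoff constants are illustrative, not minikube's actual retry implementation.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitFor retries fn with randomized, growing delays until it succeeds
    // or the overall deadline passes. Hypothetical helper; the 236ms, 278ms,
    // 400ms, 473ms... progression in the log above follows the same shape.
    func waitFor(deadline time.Duration, fn func() error) error {
        start := time.Now()
        base := 200 * time.Millisecond
        for {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Since(start) > deadline {
                return fmt.Errorf("timed out after %s: %w", deadline, err)
            }
            // Jitter keeps concurrent waiters (several profiles share this
            // host) from polling libvirt in lockstep; the base then grows.
            sleep := base + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            base = base * 3 / 2
        }
    }

    func main() {
        attempts := 0
        _ = waitFor(10*time.Second, func() error {
            attempts++
            if attempts < 4 {
                return errors.New("unable to find current IP address")
            }
            return nil
        })
        fmt.Println("machine is up after", attempts, "attempts")
    }
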
I0307 18:43:09.868328 35395 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0307 18:43:09.880058 35395 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
I0307 18:43:09.905795 35395 system_pods.go:43] waiting for kube-system pods to appear ...
I0307 18:43:09.917255 35395 system_pods.go:59] 6 kube-system pods found
I0307 18:43:09.917295 35395 system_pods.go:61] "coredns-787d4945fb-n77tj" [e63f9141-89ed-4e4d-b1aa-86ad76074f81] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0307 18:43:09.917306 35395 system_pods.go:61] "etcd-pause-763583" [1443cb3f-e768-40cb-8959-b07a77a9b089] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0307 18:43:09.917314 35395 system_pods.go:61] "kube-apiserver-pause-763583" [21663669-cac7-48c2-9107-e69979cee194] Running
I0307 18:43:09.917324 35395 system_pods.go:61] "kube-controller-manager-pause-763583" [e00cf98f-3435-4f3c-b91c-c00a0b794b06] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0307 18:43:09.917331 35395 system_pods.go:61] "kube-proxy-89rb5" [1976b181-14ab-48a2-bb64-2eb3b1ecf436] Running
I0307 18:43:09.917340 35395 system_pods.go:61] "kube-scheduler-pause-763583" [b495d084-0581-4e14-917f-e44a0bf077df] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0307 18:43:09.917348 35395 system_pods.go:74] duration metric: took 11.53195ms to wait for pod list to return data ...
I0307 18:43:09.917360 35395 node_conditions.go:102] verifying NodePressure condition ...
I0307 18:43:09.921137 35395 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0307 18:43:09.921167 35395 node_conditions.go:123] node cpu capacity is 2
I0307 18:43:09.921179 35395 node_conditions.go:105] duration metric: took 3.813699ms to run NodePressure ...
I0307 18:43:09.921196 35395 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0307 18:43:10.361537 35395 kubeadm.go:769] waiting for restarted kubelet to initialise ...
I0307 18:43:10.372040 35395 kubeadm.go:784] kubelet initialised
I0307 18:43:10.372068 35395 kubeadm.go:785] duration metric: took 10.499059ms waiting for restarted kubelet to initialise ...
I0307 18:43:10.372079 35395 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0307 18:43:10.378716 35395 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-n77tj" in "kube-system" namespace to be "Ready" ...
I0307 18:43:09.320740 35647 api_server.go:165] Checking apiserver status ...
I0307 18:43:09.320817 35647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:43:09.351484 35647 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:43:09.820591 35647 api_server.go:165] Checking apiserver status ...
I0307 18:43:09.820663 35647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:43:09.839689 35647 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:43:10.320286 35647 api_server.go:165] Checking apiserver status ...
I0307 18:43:10.320357 35647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:43:10.345958 35647 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6839/cgroup
I0307 18:43:10.368813 35647 api_server.go:181] apiserver freezer: "6:freezer:/kubepods/burstable/pod27f9e58fc6ec6edf1ea39105aa6696fa/6c03d90a8397a8fa5aa39be0711b590bb5b798d9382f75eed513cd2c1fa9ce4c"
I0307 18:43:10.368868 35647 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod27f9e58fc6ec6edf1ea39105aa6696fa/6c03d90a8397a8fa5aa39be0711b590bb5b798d9382f75eed513cd2c1fa9ce4c/freezer.state
I0307 18:43:10.392712 35647 api_server.go:203] freezer state: "THAWED"
I0307 18:43:10.392729 35647 api_server.go:252] Checking apiserver healthz at https://192.168.72.251:8443/healthz ...
I0307 18:43:10.393233 35647 api_server.go:268] stopped: https://192.168.72.251:8443/healthz: Get "https://192.168.72.251:8443/healthz": dial tcp 192.168.72.251:8443: connect: connection refused
I0307 18:43:10.393292 35647 retry.go:31] will retry after 269.021468ms: state is "Stopped"
I0307 18:43:10.662692 35647 api_server.go:252] Checking apiserver healthz at https://192.168.72.251:8443/healthz ...
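
A second concurrent job (pid 35647) checks apiserver status in three shell steps: pgrep for the kube-apiserver pid, resolving that pid's freezer cgroup from /proc/<pid>/cgroup, and reading freezer.state so a paused (FROZEN) apiserver is not health-checked; only a THAWED process gets the /healthz probe. Below is a local sketch of the same sequence; the endpoint is taken from the log, while the control flow is ours rather than minikube's api_server.go.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "os"
        "os/exec"
        "regexp"
        "strings"
        "time"
    )

    func main() {
        // Step 1: find the apiserver pid (pgrep exiting 1 produces the
        // "stopped: unable to get apiserver pid" warnings in the log).
        out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("stopped: unable to get apiserver pid:", err)
            return
        }
        pid := strings.TrimSpace(string(out))

        // Step 2: /proc/<pid>/cgroup lines look like "6:freezer:/kubepods/...".
        data, _ := os.ReadFile("/proc/" + pid + "/cgroup")
        m := regexp.MustCompile(`(?m)^\d+:freezer:(.*)$`).FindStringSubmatch(string(data))
        if m == nil {
            fmt.Println("no freezer cgroup found for pid", pid)
            return
        }

        // Step 3: a FROZEN state means the cluster is paused on purpose.
        state, _ := os.ReadFile("/sys/fs/cgroup/freezer" + m[1] + "/freezer.state")
        if strings.TrimSpace(string(state)) != "THAWED" {
            fmt.Println("apiserver is frozen (paused); skipping healthz")
            return
        }

        // Step 4: probe /healthz; the serving cert is self-signed, so the
        // sketch skips verification instead of loading the cluster CA.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.72.251:8443/healthz")
        if err != nil {
            fmt.Println("stopped:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz returned", resp.StatusCode)
    }
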
I0307 18:43:10.648260 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:10.648812 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | unable to find current IP address of domain NoKubernetes-015933 in network mk-NoKubernetes-015933
I0307 18:43:10.648836 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | I0307 18:43:10.648763 35858 retry.go:31] will retry after 902.381464ms: waiting for machine to come up
I0307 18:43:11.552569 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:11.553001 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | unable to find current IP address of domain NoKubernetes-015933 in network mk-NoKubernetes-015933
I0307 18:43:11.553050 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | I0307 18:43:11.552975 35858 retry.go:31] will retry after 1.729563855s: waiting for machine to come up
I0307 18:43:13.284547 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:13.285003 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | unable to find current IP address of domain NoKubernetes-015933 in network mk-NoKubernetes-015933
I0307 18:43:13.285020 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | I0307 18:43:13.284952 35858 retry.go:31] will retry after 1.828287492s: waiting for machine to come up
I0307 18:43:15.115893 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:15.116428 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | unable to find current IP address of domain NoKubernetes-015933 in network mk-NoKubernetes-015933
I0307 18:43:15.116451 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | I0307 18:43:15.116366 35858 retry.go:31] will retry after 2.036951585s: waiting for machine to come up
I0307 18:43:12.397973 35395 pod_ready.go:102] pod "coredns-787d4945fb-n77tj" in "kube-system" namespace has status "Ready":"False"
I0307 18:43:13.893974 35395 pod_ready.go:92] pod "coredns-787d4945fb-n77tj" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:13.894011 35395 pod_ready.go:81] duration metric: took 3.515265891s waiting for pod "coredns-787d4945fb-n77tj" in "kube-system" namespace to be "Ready" ...
I0307 18:43:13.894023 35395 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:15.906677 35395 pod_ready.go:102] pod "etcd-pause-763583" in "kube-system" namespace has status "Ready":"False"
I0307 18:43:15.663672 35647 api_server.go:268] stopped: https://192.168.72.251:8443/healthz: Get "https://192.168.72.251:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0307 18:43:15.663706 35647 retry.go:31] will retry after 291.952467ms: state is "Stopped"
I0307 18:43:15.956265 35647 api_server.go:252] Checking apiserver healthz at https://192.168.72.251:8443/healthz ...
I0307 18:43:17.154512 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:17.155039 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | unable to find current IP address of domain NoKubernetes-015933 in network mk-NoKubernetes-015933
I0307 18:43:17.155060 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | I0307 18:43:17.154983 35858 retry.go:31] will retry after 3.605137674s: waiting for machine to come up
I0307 18:43:17.907680 35395 pod_ready.go:102] pod "etcd-pause-763583" in "kube-system" namespace has status "Ready":"False"
I0307 18:43:20.408465 35395 pod_ready.go:102] pod "etcd-pause-763583" in "kube-system" namespace has status "Ready":"False"
I0307 18:43:20.928474 35395 pod_ready.go:92] pod "etcd-pause-763583" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:20.928514 35395 pod_ready.go:81] duration metric: took 7.034481751s waiting for pod "etcd-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.928528 35395 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.941150 35395 pod_ready.go:92] pod "kube-apiserver-pause-763583" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:20.941180 35395 pod_ready.go:81] duration metric: took 12.642904ms waiting for pod "kube-apiserver-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.941195 35395 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.957243 35395 pod_ready.go:92] pod "kube-controller-manager-pause-763583" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:20.957270 35395 pod_ready.go:81] duration metric: took 16.065823ms waiting for pod "kube-controller-manager-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.957283 35395 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-89rb5" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.965242 35395 pod_ready.go:92] pod "kube-proxy-89rb5" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:20.965267 35395 pod_ready.go:81] duration metric: took 7.976082ms waiting for pod "kube-proxy-89rb5" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.965306 35395 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.975928 35395 pod_ready.go:92] pod "kube-scheduler-pause-763583" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:20.975957 35395 pod_ready.go:81] duration metric: took 10.639966ms waiting for pod "kube-scheduler-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.975967 35395 pod_ready.go:38] duration metric: took 10.603878883s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
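
The pod_ready lines above poll each system-critical pod until its PodReady condition reports True, capped at 4m0s per pod. A minimal client-go sketch of that readiness check: the pod name and namespace come from the log, while the helper and its polling cadence are assumptions.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        for {
            ok, err := podReady(ctx, cs, "kube-system", "coredns-787d4945fb-n77tj")
            if err == nil && ok {
                fmt.Println(`pod has status "Ready":"True"`)
                return
            }
            select {
            case <-ctx.Done():
                fmt.Println("timed out waiting for pod")
                return
            case <-time.After(2 * time.Second): // polling interval is an assumption
            }
        }
    }
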
I0307 18:43:20.975987 35395 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0307 18:43:20.998476 35395 ops.go:34] apiserver oom_adj: -16
I0307 18:43:20.998505 35395 kubeadm.go:637] restartCluster took 57.365787501s
I0307 18:43:20.998514 35395 kubeadm.go:403] StartCluster complete in 57.398734635s
I0307 18:43:20.998566 35395 settings.go:142] acquiring lock: {Name:mk59ca7946d8ca96e1c1529d6dc9eeaf833467d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 18:43:20.998642 35395 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/15985-4059/kubeconfig
I0307 18:43:21.000304 35395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-4059/kubeconfig: {Name:mkdbb63ccb2062c9fe0a4f6a1ffae1d7c12177ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 18:43:21.001531 35395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0307 18:43:21.001623 35395 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
I0307 18:43:21.004334 35395 out.go:177] * Enabled addons:
I0307 18:43:21.002125 35395 kapi.go:59] client config for pause-763583: &rest.Config{Host:"https://192.168.61.47:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15985-4059/.minikube/profiles/pause-763583/client.crt", KeyFile:"/home/jenkins/minikube-integration/15985-4059/.minikube/profiles/pause-763583/client.key", CAFile:"/home/jenkins/minikube-integration/15985-4059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29a5480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0307 18:43:21.002293 35395 config.go:182] Loaded profile config "pause-763583": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0307 18:43:21.006591 35395 cache.go:107] acquiring lock: {Name:mk4b4b9e8ae74bfe37a64a243ec4cf9219f62ba4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0307 18:43:21.006679 35395 cache.go:115] /home/jenkins/minikube-integration/15985-4059/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
I0307 18:43:21.006696 35395 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/15985-4059/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 119.684µs
I0307 18:43:21.006712 35395 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/15985-4059/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
I0307 18:43:21.006718 35395 cache.go:87] Successfully saved all images to host disk.
I0307 18:43:21.006956 35395 config.go:182] Loaded profile config "pause-763583": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0307 18:43:21.006990 35395 addons.go:499] enable addons completed in 5.361604ms: enabled=[]
I0307 18:43:21.007421 35395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0307 18:43:21.007482 35395 main.go:141] libmachine: Launching plugin server for driver kvm2
I0307 18:43:21.014221 35395 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-763583" context rescaled to 1 replicas
I0307 18:43:21.014275 35395 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.47 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}
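
The kapi.go:248 line above rescales the coredns deployment to a single replica through the deployment's scale subresource. A hedged client-go sketch of such a rescale; kubeconfig loading and error handling are simplified, and this is an illustration, not minikube's kapi.go.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.Background()

        // Read the current scale subresource of the coredns deployment.
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if scale.Spec.Replicas == 1 {
            fmt.Println("coredns already at 1 replica")
            return
        }

        // Write it back with the desired replica count.
        scale.Spec.Replicas = 1
        if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println(`"coredns" deployment rescaled to 1 replicas`)
    }

Using the scale subresource rather than patching the deployment spec keeps the change scoped to replica count, which is why a single-node cluster can trim the default two-replica coredns without touching the rest of the object.
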
I0307 18:43:21.016181 35395 out.go:177] * Verifying Kubernetes components...
I0307 18:43:21.018159 35395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0307 18:43:21.027697 35395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39769
I0307 18:43:21.028242 35395 main.go:141] libmachine: () Calling .GetVersion
I0307 18:43:21.029001 35395 main.go:141] libmachine: Using API Version 1
I0307 18:43:21.029022 35395 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 18:43:21.029419 35395 main.go:141] libmachine: () Calling .GetMachineName
I0307 18:43:21.029621 35395 main.go:141] libmachine: (pause-763583) Calling .GetState
I0307 18:43:21.032078 35395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0307 18:43:21.032110 35395 main.go:141] libmachine: Launching plugin server for driver kvm2
I0307 18:43:21.055351 35395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38507
I0307 18:43:21.056115 35395 main.go:141] libmachine: () Calling .GetVersion
I0307 18:43:21.056974 35395 main.go:141] libmachine: Using API Version 1
I0307 18:43:21.057002 35395 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 18:43:21.057408 35395 main.go:141] libmachine: () Calling .GetMachineName
I0307 18:43:21.057644 35395 main.go:141] libmachine: (pause-763583) Calling .DriverName
I0307 18:43:21.057948 35395 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0307 18:43:21.057985 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHHostname
I0307 18:43:21.061960 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:43:21.062537 35395 main.go:141] libmachine: (pause-763583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7e:f8", ip: ""} in network mk-pause-763583: {Iface:virbr3 ExpiryTime:2023-03-07 19:40:49 +0000 UTC Type:0 Mac:52:54:00:7d:7e:f8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:pause-763583 Clientid:01:52:54:00:7d:7e:f8}
I0307 18:43:21.062562 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined IP address 192.168.61.47 and MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:43:21.062887 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHPort
I0307 18:43:21.064168 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHKeyPath
I0307 18:43:21.064368 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHUsername
I0307 18:43:21.064473 35395 sshutil.go:53] new ssh client: &{IP:192.168.61.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15985-4059/.minikube/machines/pause-763583/id_rsa Username:docker}
I0307 18:43:21.258566 35395 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0307 18:43:21.258560 35395 node_ready.go:35] waiting up to 6m0s for node "pause-763583" to be "Ready" ...
I0307 18:43:21.258634 35395 docker.go:630] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0307 18:43:21.258659 35395 cache_images.go:84] Images are preloaded, skipping loading
I0307 18:43:21.258669 35395 cache_images.go:262] succeeded pushing to: pause-763583
I0307 18:43:21.258676 35395 cache_images.go:263] failed pushing to:
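
cache_images.go:84 can report "Images are preloaded, skipping loading" because every image the cluster needs already appears in the "docker images" listing above. A sketch of that preload check, with the required list abridged from the log output; the comparison logic is ours.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        required := []string{
            "registry.k8s.io/kube-apiserver:v1.26.2",
            "registry.k8s.io/kube-controller-manager:v1.26.2",
            "registry.k8s.io/kube-scheduler:v1.26.2",
            "registry.k8s.io/kube-proxy:v1.26.2",
            "registry.k8s.io/etcd:3.5.6-0",
            "registry.k8s.io/pause:3.9",
            "registry.k8s.io/coredns/coredns:v1.9.3",
            "gcr.io/k8s-minikube/storage-provisioner:v5",
        }

        // Same listing command the log shows being run over SSH.
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, img := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            have[img] = true
        }

        var missing []string
        for _, img := range required {
            if !have[img] {
                missing = append(missing, img)
            }
        }
        if len(missing) == 0 {
            fmt.Println("Images are preloaded, skipping loading")
        } else {
            fmt.Println("need to load:", missing)
        }
    }
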
I0307 18:43:21.258695 35395 main.go:141] libmachine: Making call to close driver server
I0307 18:43:21.258709 35395 main.go:141] libmachine: (pause-763583) Calling .Close
I0307 18:43:21.259058 35395 main.go:141] libmachine: Successfully made call to close driver server
I0307 18:43:21.259078 35395 main.go:141] libmachine: Making call to close connection to plugin binary
I0307 18:43:21.259092 35395 main.go:141] libmachine: Making call to close driver server
I0307 18:43:21.259099 35395 main.go:141] libmachine: (pause-763583) Calling .Close
I0307 18:43:21.259506 35395 main.go:141] libmachine: Successfully made call to close driver server
I0307 18:43:21.259526 35395 main.go:141] libmachine: Making call to close connection to plugin binary
I0307 18:43:21.262793 35395 node_ready.go:49] node "pause-763583" has status "Ready":"True"
I0307 18:43:21.262815 35395 node_ready.go:38] duration metric: took 4.223701ms waiting for node "pause-763583" to be "Ready" ...
I0307 18:43:21.262826 35395 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0307 18:43:21.312065 35395 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-n77tj" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.957534 35647 api_server.go:268] stopped: https://192.168.72.251:8443/healthz: Get "https://192.168.72.251:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0307 18:43:20.957562 35647 retry.go:31] will retry after 326.602628ms: state is "Stopped"
I0307 18:43:21.285042 35647 api_server.go:252] Checking apiserver healthz at https://192.168.72.251:8443/healthz ...
I0307 18:43:21.703398 35395 pod_ready.go:92] pod "coredns-787d4945fb-n77tj" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:21.703420 35395 pod_ready.go:81] duration metric: took 391.325074ms waiting for pod "coredns-787d4945fb-n77tj" in "kube-system" namespace to be "Ready" ...
I0307 18:43:21.703430 35395 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:22.102894 35395 pod_ready.go:92] pod "etcd-pause-763583" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:22.102915 35395 pod_ready.go:81] duration metric: took 399.479275ms waiting for pod "etcd-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:22.102924 35395 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:22.503718 35395 pod_ready.go:92] pod "kube-apiserver-pause-763583" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:22.503740 35395 pod_ready.go:81] duration metric: took 400.810203ms waiting for pod "kube-apiserver-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:22.503753 35395 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:22.904142 35395 pod_ready.go:92] pod "kube-controller-manager-pause-763583" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:22.904164 35395 pod_ready.go:81] duration metric: took 400.403865ms waiting for pod "kube-controller-manager-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:22.904174 35395 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-89rb5" in "kube-system" namespace to be "Ready" ...
I0307 18:43:23.303244 35395 pod_ready.go:92] pod "kube-proxy-89rb5" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:23.303263 35395 pod_ready.go:81] duration metric: took 399.083446ms waiting for pod "kube-proxy-89rb5" in "kube-system" namespace to be "Ready" ...
I0307 18:43:23.303278 35395 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:23.704268 35395 pod_ready.go:92] pod "kube-scheduler-pause-763583" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:23.704288 35395 pod_ready.go:81] duration metric: took 401.005104ms waiting for pod "kube-scheduler-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:23.704295 35395 pod_ready.go:38] duration metric: took 2.441458878s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0307 18:43:23.704311 35395 api_server.go:51] waiting for apiserver process to appear ...
I0307 18:43:23.704349 35395 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:43:23.716827 35395 api_server.go:71] duration metric: took 2.702510753s to wait for apiserver process to appear ...
I0307 18:43:23.716856 35395 api_server.go:87] waiting for apiserver healthz status ...
I0307 18:43:23.716868 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:43:23.723857 35395 api_server.go:278] https://192.168.61.47:8443/healthz returned 200:
ok
I0307 18:43:23.724750 35395 api_server.go:140] control plane version: v1.26.2
I0307 18:43:23.724768 35395 api_server.go:130] duration metric: took 7.905622ms to wait for apiserver health ...
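
Once /healthz returns 200, the control-plane version is read back: the apiserver's /version endpoint serves JSON whose gitVersion field carries the v1.26.2 string seen above. A sketch of that read; the host is the cluster IP from the log, and skipping TLS verification stands in for loading the cluster CA.

    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // /version is readable without credentials on a default cluster
        // (the system:public-info-viewer role covers it).
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.61.47:8443/version")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        var info struct {
            GitVersion string `json:"gitVersion"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", info.GitVersion) // e.g. v1.26.2
    }
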
I0307 18:43:23.724778 35395 system_pods.go:43] waiting for kube-system pods to appear ...
I0307 18:43:23.906704 35395 system_pods.go:59] 6 kube-system pods found
I0307 18:43:23.906737 35395 system_pods.go:61] "coredns-787d4945fb-n77tj" [e63f9141-89ed-4e4d-b1aa-86ad76074f81] Running
I0307 18:43:23.906745 35395 system_pods.go:61] "etcd-pause-763583" [1443cb3f-e768-40cb-8959-b07a77a9b089] Running
I0307 18:43:23.906752 35395 system_pods.go:61] "kube-apiserver-pause-763583" [21663669-cac7-48c2-9107-e69979cee194] Running
I0307 18:43:23.906759 35395 system_pods.go:61] "kube-controller-manager-pause-763583" [e00cf98f-3435-4f3c-b91c-c00a0b794b06] Running
I0307 18:43:23.906766 35395 system_pods.go:61] "kube-proxy-89rb5" [1976b181-14ab-48a2-bb64-2eb3b1ecf436] Running
I0307 18:43:23.906773 35395 system_pods.go:61] "kube-scheduler-pause-763583" [b495d084-0581-4e14-917f-e44a0bf077df] Running
I0307 18:43:23.906785 35395 system_pods.go:74] duration metric: took 182.000313ms to wait for pod list to return data ...
I0307 18:43:23.906799 35395 default_sa.go:34] waiting for default service account to be created ...
I0307 18:43:24.103036 35395 default_sa.go:45] found service account: "default"
I0307 18:43:24.103058 35395 default_sa.go:55] duration metric: took 196.253509ms for default service account to be created ...
I0307 18:43:24.103066 35395 system_pods.go:116] waiting for k8s-apps to be running ...
I0307 18:43:24.305992 35395 system_pods.go:86] 6 kube-system pods found
I0307 18:43:24.306020 35395 system_pods.go:89] "coredns-787d4945fb-n77tj" [e63f9141-89ed-4e4d-b1aa-86ad76074f81] Running
I0307 18:43:24.306025 35395 system_pods.go:89] "etcd-pause-763583" [1443cb3f-e768-40cb-8959-b07a77a9b089] Running
I0307 18:43:24.306029 35395 system_pods.go:89] "kube-apiserver-pause-763583" [21663669-cac7-48c2-9107-e69979cee194] Running
I0307 18:43:24.306033 35395 system_pods.go:89] "kube-controller-manager-pause-763583" [e00cf98f-3435-4f3c-b91c-c00a0b794b06] Running
I0307 18:43:24.306038 35395 system_pods.go:89] "kube-proxy-89rb5" [1976b181-14ab-48a2-bb64-2eb3b1ecf436] Running
I0307 18:43:24.306042 35395 system_pods.go:89] "kube-scheduler-pause-763583" [b495d084-0581-4e14-917f-e44a0bf077df] Running
I0307 18:43:24.306050 35395 system_pods.go:126] duration metric: took 202.978873ms to wait for k8s-apps to be running ...
I0307 18:43:24.306059 35395 system_svc.go:44] waiting for kubelet service to be running ....
I0307 18:43:24.306109 35395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0307 18:43:24.321158 35395 system_svc.go:56] duration metric: took 15.087082ms WaitForService to wait for kubelet.
I0307 18:43:24.321187 35395 kubeadm.go:578] duration metric: took 3.306875345s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0307 18:43:24.321209 35395 node_conditions.go:102] verifying NodePressure condition ...
I0307 18:43:24.505220 35395 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0307 18:43:24.505243 35395 node_conditions.go:123] node cpu capacity is 2
I0307 18:43:24.505252 35395 node_conditions.go:105] duration metric: took 184.038448ms to run NodePressure ...
I0307 18:43:24.505262 35395 start.go:228] waiting for startup goroutines ...
I0307 18:43:24.505268 35395 start.go:233] waiting for cluster config update ...
I0307 18:43:24.505274 35395 start.go:242] writing updated cluster config ...
I0307 18:43:24.505561 35395 ssh_runner.go:195] Run: rm -f paused
I0307 18:43:24.559116 35395 start.go:555] kubectl: 1.26.2, cluster: 1.26.2 (minor skew: 0)
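
The closing skew line compares the minor version of the local kubectl with the cluster's control plane. A small sketch of how such a "minor skew" number can be computed; this is an illustration, not necessarily start.go's exact logic.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference between the minor version
    // numbers of two "major.minor.patch" strings, as in the
    // "kubectl: 1.26.2, cluster: 1.26.2 (minor skew: 0)" line above.
    func minorSkew(client, cluster string) int {
        minor := func(v string) int {
            parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
            if len(parts) < 2 {
                return 0
            }
            n, _ := strconv.Atoi(parts[1])
            return n
        }
        d := minor(client) - minor(cluster)
        if d < 0 {
            d = -d
        }
        return d
    }

    func main() {
        fmt.Printf("kubectl: 1.26.2, cluster: 1.26.2 (minor skew: %d)\n",
            minorSkew("1.26.2", "1.26.2"))
    }
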
I0307 18:43:24.561247 35395 out.go:177] * Done! kubectl is now configured to use "pause-763583" cluster and "default" namespace by default
*
* ==> Docker <==
* -- Journal begins at Tue 2023-03-07 18:40:45 UTC, ends at Tue 2023-03-07 18:43:25 UTC. --
Mar 07 18:42:59 pause-763583 dockerd[4819]: time="2023-03-07T18:42:59.192757297Z" level=warning msg="cleanup warnings time=\"2023-03-07T18:42:59Z\" level=info msg=\"starting signal loop\" namespace=moby pid=7172 runtime=io.containerd.runc.v2\n"
Mar 07 18:43:01 pause-763583 dockerd[4819]: time="2023-03-07T18:43:01.718500300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 07 18:43:01 pause-763583 dockerd[4819]: time="2023-03-07T18:43:01.718580136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 07 18:43:01 pause-763583 dockerd[4819]: time="2023-03-07T18:43:01.718594747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 07 18:43:01 pause-763583 dockerd[4819]: time="2023-03-07T18:43:01.718982694Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/15dec3a5065b9255570a391fe7f4609698cee47f222cdd9bff9cecd408da96c8 pid=7424 runtime=io.containerd.runc.v2
Mar 07 18:43:01 pause-763583 dockerd[4819]: time="2023-03-07T18:43:01.724329662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 07 18:43:01 pause-763583 dockerd[4819]: time="2023-03-07T18:43:01.724400358Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 07 18:43:01 pause-763583 dockerd[4819]: time="2023-03-07T18:43:01.724413596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 07 18:43:01 pause-763583 dockerd[4819]: time="2023-03-07T18:43:01.724910735Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/eef128330f346ca57cefd50e56f7cbce02bd3f4611a308589687632ba40a8600 pid=7433 runtime=io.containerd.runc.v2
Mar 07 18:43:01 pause-763583 dockerd[4819]: time="2023-03-07T18:43:01.725542032Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 07 18:43:01 pause-763583 dockerd[4819]: time="2023-03-07T18:43:01.725630222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 07 18:43:01 pause-763583 dockerd[4819]: time="2023-03-07T18:43:01.725645855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 07 18:43:01 pause-763583 dockerd[4819]: time="2023-03-07T18:43:01.726324109Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/c27e397326f6378658183d75811890287d946bda3ad9346b87533febea041cd0 pid=7431 runtime=io.containerd.runc.v2
Mar 07 18:43:10 pause-763583 dockerd[4819]: time="2023-03-07T18:43:10.245231597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 07 18:43:10 pause-763583 dockerd[4819]: time="2023-03-07T18:43:10.245387123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 07 18:43:10 pause-763583 dockerd[4819]: time="2023-03-07T18:43:10.245405130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 07 18:43:10 pause-763583 dockerd[4819]: time="2023-03-07T18:43:10.246139895Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8ef158124e5c02451279de07c2a084c4f41fae664112881c1c1c8a56f19a9872 pid=7658 runtime=io.containerd.runc.v2
Mar 07 18:43:10 pause-763583 dockerd[4819]: time="2023-03-07T18:43:10.590977589Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 07 18:43:10 pause-763583 dockerd[4819]: time="2023-03-07T18:43:10.591026808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 07 18:43:10 pause-763583 dockerd[4819]: time="2023-03-07T18:43:10.591036828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 07 18:43:10 pause-763583 dockerd[4819]: time="2023-03-07T18:43:10.591478968Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ef4bf961fbfdd619f6bcecbaa87c34e145149737820f1c46458cc1bb3422732e pid=7705 runtime=io.containerd.runc.v2
Mar 07 18:43:11 pause-763583 dockerd[4819]: time="2023-03-07T18:43:11.471425959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 07 18:43:11 pause-763583 dockerd[4819]: time="2023-03-07T18:43:11.471497895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 07 18:43:11 pause-763583 dockerd[4819]: time="2023-03-07T18:43:11.471512951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 07 18:43:11 pause-763583 dockerd[4819]: time="2023-03-07T18:43:11.472425862Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f1b9cfc7922e6981f0557d4d54467dc5c8ce1c88fbb2f9d4046cadc922f9e726 pid=7938 runtime=io.containerd.runc.v2
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
f1b9cfc7922e6 5185b96f0becf 14 seconds ago Running coredns 2 8ef158124e5c0
ef4bf961fbfdd 6f64e7135a6ec 15 seconds ago Running kube-proxy 3 fe3ddcbb103c2
c27e397326f63 240e201d5b0d8 24 seconds ago Running kube-controller-manager 3 50f7a2d79d848
eef128330f346 fce326961ae2d 24 seconds ago Running etcd 3 345089ed33275
15dec3a5065b9 db8f409d9a5d7 24 seconds ago Running kube-scheduler 3 309302fffd507
5165906d51912 63d3239c3c159 29 seconds ago Running kube-apiserver 2 59dd8031423e4
94878c02897cd 240e201d5b0d8 40 seconds ago Exited kube-controller-manager 2 2bd1468d967e5
6e5a6ab1db374 fce326961ae2d 43 seconds ago Exited etcd 2 764a0fa4725b8
ada79eb25afea 6f64e7135a6ec 43 seconds ago Exited kube-proxy 2 4b088e44e1281
807b657d81c5a db8f409d9a5d7 54 seconds ago Exited kube-scheduler 2 373afa3584ae4
c6e309b2a1410 5185b96f0becf About a minute ago Exited coredns 1 1aa5eca48ed3d
323901da5efdc 63d3239c3c159 About a minute ago Exited kube-apiserver 1 0a00ef3151aa2
*
* ==> coredns [c6e309b2a141] <==
* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] 127.0.0.1:38272 - 10409 "HINFO IN 953780146248982216.8378533669120952003. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.020585193s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
*
* ==> coredns [f1b9cfc7922e] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] 127.0.0.1:45070 - 47542 "HINFO IN 5933985263124533339.8025083633949818112. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032920062s
*
* ==> describe nodes <==
* Name: pause-763583
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-763583
kubernetes.io/os=linux
minikube.k8s.io/commit=592b1e9939a898d806f69aad174a19c45f317df1
minikube.k8s.io/name=pause-763583
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_03_07T18_41_29_0700
minikube.k8s.io/version=v1.29.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 07 Mar 2023 18:41:25 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-763583
AcquireTime: <unset>
RenewTime: Tue, 07 Mar 2023 18:43:18 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 07 Mar 2023 18:43:08 +0000 Tue, 07 Mar 2023 18:41:21 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 07 Mar 2023 18:43:08 +0000 Tue, 07 Mar 2023 18:41:21 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 07 Mar 2023 18:43:08 +0000 Tue, 07 Mar 2023 18:41:21 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 07 Mar 2023 18:43:08 +0000 Tue, 07 Mar 2023 18:41:30 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.61.47
Hostname: pause-763583
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017420Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017420Ki
pods: 110
System Info:
Machine ID: 96f2b5e4734a42f69e84fd4020108855
System UUID: 96f2b5e4-734a-42f6-9e84-fd4020108855
Boot ID: d94b0328-5b2d-4150-b356-9094c7a09c6e
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.23
Kubelet Version: v1.26.2
Kube-Proxy Version: v1.26.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-787d4945fb-n77tj 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 103s
kube-system etcd-pause-763583 100m (5%) 0 (0%) 100Mi (5%) 0 (0%) 116s
kube-system kube-apiserver-pause-763583 250m (12%) 0 (0%) 0 (0%) 0 (0%) 116s
kube-system kube-controller-manager-pause-763583 200m (10%) 0 (0%) 0 (0%) 0 (0%) 119s
kube-system kube-proxy-89rb5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 104s
kube-system kube-scheduler-pause-763583 100m (5%) 0 (0%) 0 (0%) 0 (0%) 119s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 170Mi (8%) 170Mi (8%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 101s kube-proxy
Normal Starting 14s kube-proxy
Normal Starting 67s kube-proxy
Normal NodeHasSufficientPID 2m10s (x5 over 2m10s) kubelet Node pause-763583 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 2m10s (x5 over 2m10s) kubelet Node pause-763583 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 2m10s (x6 over 2m10s) kubelet Node pause-763583 status is now: NodeHasSufficientMemory
Normal Starting 116s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 116s kubelet Node pause-763583 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 116s kubelet Node pause-763583 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 116s kubelet Node pause-763583 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 116s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 115s kubelet Node pause-763583 status is now: NodeReady
Normal RegisteredNode 104s node-controller Node pause-763583 event: Registered Node pause-763583 in Controller
Normal Starting 25s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 24s (x8 over 24s) kubelet Node pause-763583 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 24s (x8 over 24s) kubelet Node pause-763583 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 24s (x7 over 24s) kubelet Node pause-763583 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 24s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 5s node-controller Node pause-763583 event: Registered Node pause-763583 in Controller
*
* ==> dmesg <==
* [ +0.357721] systemd-fstab-generator[894]: Ignoring "noauto" for root device
[ +0.250125] systemd-fstab-generator[931]: Ignoring "noauto" for root device
[ +0.127377] systemd-fstab-generator[942]: Ignoring "noauto" for root device
[ +0.130005] systemd-fstab-generator[955]: Ignoring "noauto" for root device
[ +1.487189] systemd-fstab-generator[1103]: Ignoring "noauto" for root device
[ +0.116407] systemd-fstab-generator[1114]: Ignoring "noauto" for root device
[ +0.106234] systemd-fstab-generator[1125]: Ignoring "noauto" for root device
[ +0.119898] systemd-fstab-generator[1136]: Ignoring "noauto" for root device
[ +4.465694] systemd-fstab-generator[1385]: Ignoring "noauto" for root device
[ +0.661318] kauditd_printk_skb: 68 callbacks suppressed
[ +13.201981] systemd-fstab-generator[2399]: Ignoring "noauto" for root device
[ +15.296940] kauditd_printk_skb: 8 callbacks suppressed
[ +6.493699] kauditd_printk_skb: 26 callbacks suppressed
[Mar 7 18:42] systemd-fstab-generator[3884]: Ignoring "noauto" for root device
[ +0.261566] systemd-fstab-generator[3915]: Ignoring "noauto" for root device
[ +0.137940] systemd-fstab-generator[3926]: Ignoring "noauto" for root device
[ +0.162289] systemd-fstab-generator[3939]: Ignoring "noauto" for root device
[ +1.297927] kauditd_printk_skb: 2 callbacks suppressed
[ +11.568969] systemd-fstab-generator[5238]: Ignoring "noauto" for root device
[ +0.133230] systemd-fstab-generator[5254]: Ignoring "noauto" for root device
[ +0.170384] systemd-fstab-generator[5309]: Ignoring "noauto" for root device
[ +0.203344] systemd-fstab-generator[5363]: Ignoring "noauto" for root device
[ +1.382359] kauditd_printk_skb: 32 callbacks suppressed
[ +5.276600] kauditd_printk_skb: 3 callbacks suppressed
[Mar 7 18:43] systemd-fstab-generator[7254]: Ignoring "noauto" for root device
*
* ==> etcd [6e5a6ab1db37] <==
* {"level":"info","ts":"2023-03-07T18:42:42.952Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-03-07T18:42:42.952Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.61.47:2380"}
{"level":"info","ts":"2023-03-07T18:42:42.953Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.61.47:2380"}
{"level":"info","ts":"2023-03-07T18:42:42.953Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"bbb11fcb00d21a09","initial-advertise-peer-urls":["https://192.168.61.47:2380"],"listen-peer-urls":["https://192.168.61.47:2380"],"advertise-client-urls":["https://192.168.61.47:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.47:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-03-07T18:42:42.953Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-03-07T18:42:44.735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbb11fcb00d21a09 is starting a new election at term 3"}
{"level":"info","ts":"2023-03-07T18:42:44.735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbb11fcb00d21a09 became pre-candidate at term 3"}
{"level":"info","ts":"2023-03-07T18:42:44.735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbb11fcb00d21a09 received MsgPreVoteResp from bbb11fcb00d21a09 at term 3"}
{"level":"info","ts":"2023-03-07T18:42:44.735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbb11fcb00d21a09 became candidate at term 4"}
{"level":"info","ts":"2023-03-07T18:42:44.735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbb11fcb00d21a09 received MsgVoteResp from bbb11fcb00d21a09 at term 4"}
{"level":"info","ts":"2023-03-07T18:42:44.735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbb11fcb00d21a09 became leader at term 4"}
{"level":"info","ts":"2023-03-07T18:42:44.735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: bbb11fcb00d21a09 elected leader bbb11fcb00d21a09 at term 4"}
{"level":"info","ts":"2023-03-07T18:42:44.741Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"bbb11fcb00d21a09","local-member-attributes":"{Name:pause-763583 ClientURLs:[https://192.168.61.47:2379]}","request-path":"/0/members/bbb11fcb00d21a09/attributes","cluster-id":"d13a567fb8903787","publish-timeout":"7s"}
{"level":"info","ts":"2023-03-07T18:42:44.741Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-07T18:42:44.741Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-03-07T18:42:44.741Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-03-07T18:42:44.741Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-07T18:42:44.742Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-03-07T18:42:44.742Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.61.47:2379"}
{"level":"info","ts":"2023-03-07T18:42:54.173Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2023-03-07T18:42:54.173Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-763583","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.47:2380"],"advertise-client-urls":["https://192.168.61.47:2379"]}
{"level":"info","ts":"2023-03-07T18:42:54.179Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"bbb11fcb00d21a09","current-leader-member-id":"bbb11fcb00d21a09"}
{"level":"info","ts":"2023-03-07T18:42:54.183Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.61.47:2380"}
{"level":"info","ts":"2023-03-07T18:42:54.185Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.61.47:2380"}
{"level":"info","ts":"2023-03-07T18:42:54.185Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-763583","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.47:2380"],"advertise-client-urls":["https://192.168.61.47:2379"]}
*
* ==> etcd [eef128330f34] <==
* {"level":"info","ts":"2023-03-07T18:43:02.868Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2023-03-07T18:43:02.868Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2023-03-07T18:43:02.869Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbb11fcb00d21a09 switched to configuration voters=(13524626112722901513)"}
{"level":"info","ts":"2023-03-07T18:43:02.869Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d13a567fb8903787","local-member-id":"bbb11fcb00d21a09","added-peer-id":"bbb11fcb00d21a09","added-peer-peer-urls":["https://192.168.61.47:2380"]}
{"level":"info","ts":"2023-03-07T18:43:02.870Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d13a567fb8903787","local-member-id":"bbb11fcb00d21a09","cluster-version":"3.5"}
{"level":"info","ts":"2023-03-07T18:43:02.870Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-03-07T18:43:02.877Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-03-07T18:43:02.878Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"bbb11fcb00d21a09","initial-advertise-peer-urls":["https://192.168.61.47:2380"],"listen-peer-urls":["https://192.168.61.47:2380"],"advertise-client-urls":["https://192.168.61.47:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.47:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-03-07T18:43:02.878Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-03-07T18:43:02.878Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.61.47:2380"}
{"level":"info","ts":"2023-03-07T18:43:02.878Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.61.47:2380"}
{"level":"info","ts":"2023-03-07T18:43:03.811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbb11fcb00d21a09 is starting a new election at term 4"}
{"level":"info","ts":"2023-03-07T18:43:03.811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbb11fcb00d21a09 became pre-candidate at term 4"}
{"level":"info","ts":"2023-03-07T18:43:03.811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbb11fcb00d21a09 received MsgPreVoteResp from bbb11fcb00d21a09 at term 4"}
{"level":"info","ts":"2023-03-07T18:43:03.811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbb11fcb00d21a09 became candidate at term 5"}
{"level":"info","ts":"2023-03-07T18:43:03.811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbb11fcb00d21a09 received MsgVoteResp from bbb11fcb00d21a09 at term 5"}
{"level":"info","ts":"2023-03-07T18:43:03.811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbb11fcb00d21a09 became leader at term 5"}
{"level":"info","ts":"2023-03-07T18:43:03.811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: bbb11fcb00d21a09 elected leader bbb11fcb00d21a09 at term 5"}
{"level":"info","ts":"2023-03-07T18:43:03.820Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"bbb11fcb00d21a09","local-member-attributes":"{Name:pause-763583 ClientURLs:[https://192.168.61.47:2379]}","request-path":"/0/members/bbb11fcb00d21a09/attributes","cluster-id":"d13a567fb8903787","publish-timeout":"7s"}
{"level":"info","ts":"2023-03-07T18:43:03.820Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-07T18:43:03.822Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-03-07T18:43:03.823Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-07T18:43:03.824Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.61.47:2379"}
{"level":"info","ts":"2023-03-07T18:43:03.834Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-03-07T18:43:03.834Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
*
* ==> kernel <==
* 18:43:25 up 2 min, 0 users, load average: 1.29, 0.64, 0.25
Linux pause-763583 5.10.57 #1 SMP Fri Feb 24 23:00:41 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [323901da5efd] <==
* W0307 18:42:36.521954 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0307 18:42:40.161722 1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0307 18:42:42.409323 1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
E0307 18:42:46.204942 1 run.go:74] "command failed" err="context deadline exceeded"
*
* ==> kube-apiserver [5165906d5191] <==
* I0307 18:43:08.310421 1 establishing_controller.go:76] Starting EstablishingController
I0307 18:43:08.310769 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0307 18:43:08.311116 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0307 18:43:08.311363 1 crd_finalizer.go:266] Starting CRDFinalizer
I0307 18:43:08.388060 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0307 18:43:08.388105 1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
I0307 18:43:08.538926 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0307 18:43:08.545559 1 shared_informer.go:280] Caches are synced for node_authorizer
I0307 18:43:08.588540 1 shared_informer.go:280] Caches are synced for crd-autoregister
I0307 18:43:08.588756 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0307 18:43:08.589259 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0307 18:43:08.590662 1 apf_controller.go:366] Running API Priority and Fairness config worker
I0307 18:43:08.590761 1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
I0307 18:43:08.592785 1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
I0307 18:43:08.595945 1 shared_informer.go:280] Caches are synced for configmaps
I0307 18:43:08.600179 1 cache.go:39] Caches are synced for autoregister controller
I0307 18:43:08.916211 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0307 18:43:09.302398 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0307 18:43:10.111176 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0307 18:43:10.153214 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0307 18:43:10.237420 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0307 18:43:10.310949 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0307 18:43:10.338435 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0307 18:43:20.873484 1 controller.go:615] quota admission added evaluator for: endpoints
I0307 18:43:20.904943 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
*
* ==> kube-controller-manager [94878c02897c] <==
* I0307 18:42:46.458769 1 serving.go:348] Generated self-signed cert in-memory
I0307 18:42:46.783654 1 controllermanager.go:182] Version: v1.26.2
I0307 18:42:46.783860 1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0307 18:42:46.785151 1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I0307 18:42:46.785243 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0307 18:42:46.785176 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0307 18:42:46.785484 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
*
* ==> kube-controller-manager [c27e397326f6] <==
* I0307 18:43:20.873857 1 shared_informer.go:280] Caches are synced for cronjob
I0307 18:43:20.878314 1 shared_informer.go:280] Caches are synced for bootstrap_signer
I0307 18:43:20.882146 1 shared_informer.go:280] Caches are synced for expand
I0307 18:43:20.885718 1 shared_informer.go:280] Caches are synced for ephemeral
I0307 18:43:20.886037 1 shared_informer.go:280] Caches are synced for attach detach
I0307 18:43:20.888299 1 shared_informer.go:280] Caches are synced for HPA
I0307 18:43:20.888582 1 shared_informer.go:280] Caches are synced for PV protection
I0307 18:43:20.892743 1 shared_informer.go:280] Caches are synced for TTL
I0307 18:43:20.897939 1 shared_informer.go:280] Caches are synced for taint
I0307 18:43:20.898413 1 node_lifecycle_controller.go:1438] Initializing eviction metric for zone:
W0307 18:43:20.898804 1 node_lifecycle_controller.go:1053] Missing timestamp for Node pause-763583. Assuming now as a timestamp.
I0307 18:43:20.899043 1 node_lifecycle_controller.go:1254] Controller detected that zone is now in state Normal.
I0307 18:43:20.899481 1 taint_manager.go:206] "Starting NoExecuteTaintManager"
I0307 18:43:20.899747 1 taint_manager.go:211] "Sending events to api server"
I0307 18:43:20.900134 1 event.go:294] "Event occurred" object="pause-763583" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-763583 event: Registered Node pause-763583 in Controller"
I0307 18:43:20.903032 1 shared_informer.go:280] Caches are synced for persistent volume
I0307 18:43:20.915987 1 shared_informer.go:280] Caches are synced for endpoint_slice_mirroring
I0307 18:43:20.959796 1 shared_informer.go:280] Caches are synced for disruption
I0307 18:43:20.974716 1 shared_informer.go:280] Caches are synced for deployment
I0307 18:43:20.975623 1 shared_informer.go:280] Caches are synced for ReplicaSet
I0307 18:43:21.027112 1 shared_informer.go:280] Caches are synced for resource quota
I0307 18:43:21.097864 1 shared_informer.go:280] Caches are synced for resource quota
I0307 18:43:21.435270 1 shared_informer.go:280] Caches are synced for garbage collector
I0307 18:43:21.443253 1 shared_informer.go:280] Caches are synced for garbage collector
I0307 18:43:21.443274 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-proxy [ada79eb25afe] <==
* E0307 18:42:47.219325 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-763583": dial tcp 192.168.61.47:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.61.47:35132->192.168.61.47:8443: read: connection reset by peer
E0307 18:42:48.393244 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-763583": dial tcp 192.168.61.47:8443: connect: connection refused
E0307 18:42:50.438279 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-763583": dial tcp 192.168.61.47:8443: connect: connection refused
*
* ==> kube-proxy [ef4bf961fbfd] <==
* I0307 18:43:10.797523 1 node.go:163] Successfully retrieved node IP: 192.168.61.47
I0307 18:43:10.797896 1 server_others.go:109] "Detected node IP" address="192.168.61.47"
I0307 18:43:10.798039 1 server_others.go:535] "Using iptables proxy"
I0307 18:43:10.849939 1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0307 18:43:10.849958 1 server_others.go:176] "Using iptables Proxier"
I0307 18:43:10.850013 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0307 18:43:10.850263 1 server.go:655] "Version info" version="v1.26.2"
I0307 18:43:10.850271 1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0307 18:43:10.852006 1 config.go:317] "Starting service config controller"
I0307 18:43:10.852017 1 shared_informer.go:273] Waiting for caches to sync for service config
I0307 18:43:10.852036 1 config.go:226] "Starting endpoint slice config controller"
I0307 18:43:10.852039 1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
I0307 18:43:10.852391 1 config.go:444] "Starting node config controller"
I0307 18:43:10.852397 1 shared_informer.go:273] Waiting for caches to sync for node config
I0307 18:43:10.952964 1 shared_informer.go:280] Caches are synced for node config
I0307 18:43:10.953166 1 shared_informer.go:280] Caches are synced for endpoint slice config
I0307 18:43:10.953191 1 shared_informer.go:280] Caches are synced for service config
*
* ==> kube-scheduler [15dec3a5065b] <==
* W0307 18:43:08.507261 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0307 18:43:08.507326 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0307 18:43:08.512853 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0307 18:43:08.513008 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0307 18:43:08.513306 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0307 18:43:08.513504 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0307 18:43:08.513908 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0307 18:43:08.515006 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0307 18:43:08.515568 1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0307 18:43:08.515621 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0307 18:43:08.516261 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0307 18:43:08.516317 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0307 18:43:08.516592 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0307 18:43:08.516640 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0307 18:43:08.517124 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0307 18:43:08.517191 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0307 18:43:08.517501 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0307 18:43:08.517560 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0307 18:43:08.517822 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0307 18:43:08.517862 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0307 18:43:08.518027 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0307 18:43:08.518205 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0307 18:43:08.521904 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0307 18:43:08.522043 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
I0307 18:43:09.589200 1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [807b657d81c5] <==
* W0307 18:42:51.071495 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.61.47:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0307 18:42:51.071539 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.61.47:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
W0307 18:42:51.104237 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.61.47:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0307 18:42:51.104278 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.61.47:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
W0307 18:42:51.192505 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.61.47:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0307 18:42:51.192580 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.61.47:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
W0307 18:42:51.250181 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.61.47:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0307 18:42:51.250225 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.61.47:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
W0307 18:42:51.381925 1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.61.47:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0307 18:42:51.381970 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.61.47:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
W0307 18:42:51.462136 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.61.47:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0307 18:42:51.462172 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.61.47:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
W0307 18:42:51.562118 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.61.47:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0307 18:42:51.562171 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.61.47:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
W0307 18:42:51.637946 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.61.47:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0307 18:42:51.638063 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.61.47:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
W0307 18:42:51.694375 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.61.47:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0307 18:42:51.694442 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.61.47:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
W0307 18:42:54.148269 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.61.47:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0307 18:42:54.148330 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.61.47:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
I0307 18:42:54.194237 1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
I0307 18:42:54.194515 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
E0307 18:42:54.194750 1 shared_informer.go:276] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0307 18:42:54.194764 1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
E0307 18:42:54.195288 1 run.go:74] "command failed" err="finished without leader elect"
*
* ==> kubelet <==
* -- Journal begins at Tue 2023-03-07 18:40:45 UTC, ends at Tue 2023-03-07 18:43:26 UTC. --
Mar 07 18:43:01 pause-763583 kubelet[7260]: I0307 18:43:01.355102 7260 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c98952be90afd76e358cf45d199ab1e-k8s-certs\") pod \"kube-controller-manager-pause-763583\" (UID: \"8c98952be90afd76e358cf45d199ab1e\") " pod="kube-system/kube-controller-manager-pause-763583"
Mar 07 18:43:01 pause-763583 kubelet[7260]: I0307 18:43:01.355150 7260 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c98952be90afd76e358cf45d199ab1e-kubeconfig\") pod \"kube-controller-manager-pause-763583\" (UID: \"8c98952be90afd76e358cf45d199ab1e\") " pod="kube-system/kube-controller-manager-pause-763583"
Mar 07 18:43:01 pause-763583 kubelet[7260]: I0307 18:43:01.355215 7260 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c98952be90afd76e358cf45d199ab1e-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-763583\" (UID: \"8c98952be90afd76e358cf45d199ab1e\") " pod="kube-system/kube-controller-manager-pause-763583"
Mar 07 18:43:01 pause-763583 kubelet[7260]: I0307 18:43:01.355269 7260 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c98952be90afd76e358cf45d199ab1e-ca-certs\") pod \"kube-controller-manager-pause-763583\" (UID: \"8c98952be90afd76e358cf45d199ab1e\") " pod="kube-system/kube-controller-manager-pause-763583"
Mar 07 18:43:01 pause-763583 kubelet[7260]: I0307 18:43:01.543827 7260 scope.go:115] "RemoveContainer" containerID="807b657d81c5ae3073c6f68f516057e0eae61acc433516c80f3bb9012955718d"
Mar 07 18:43:01 pause-763583 kubelet[7260]: I0307 18:43:01.557775 7260 scope.go:115] "RemoveContainer" containerID="6e5a6ab1db37433428780215d9fa2f4e75c85f05e012cda4b5b5aeb1eb7a2ec9"
Mar 07 18:43:01 pause-763583 kubelet[7260]: I0307 18:43:01.592111 7260 scope.go:115] "RemoveContainer" containerID="94878c02897cd0b600b698a111410b78a3316213e64110ed9311fa5516a61a2a"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.630105 7260 kubelet_node_status.go:108] "Node was previously registered" node="pause-763583"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.631113 7260 kubelet_node_status.go:73] "Successfully registered node" node="pause-763583"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.634873 7260 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.636591 7260 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.828530 7260 apiserver.go:52] "Watching apiserver"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.832912 7260 topology_manager.go:210] "Topology Admit Handler"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.833153 7260 topology_manager.go:210] "Topology Admit Handler"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.848920 7260 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.918995 7260 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdxkx\" (UniqueName: \"kubernetes.io/projected/1976b181-14ab-48a2-bb64-2eb3b1ecf436-kube-api-access-wdxkx\") pod \"kube-proxy-89rb5\" (UID: \"1976b181-14ab-48a2-bb64-2eb3b1ecf436\") " pod="kube-system/kube-proxy-89rb5"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.919768 7260 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1976b181-14ab-48a2-bb64-2eb3b1ecf436-kube-proxy\") pod \"kube-proxy-89rb5\" (UID: \"1976b181-14ab-48a2-bb64-2eb3b1ecf436\") " pod="kube-system/kube-proxy-89rb5"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.920110 7260 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1976b181-14ab-48a2-bb64-2eb3b1ecf436-lib-modules\") pod \"kube-proxy-89rb5\" (UID: \"1976b181-14ab-48a2-bb64-2eb3b1ecf436\") " pod="kube-system/kube-proxy-89rb5"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.920464 7260 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e63f9141-89ed-4e4d-b1aa-86ad76074f81-config-volume\") pod \"coredns-787d4945fb-n77tj\" (UID: \"e63f9141-89ed-4e4d-b1aa-86ad76074f81\") " pod="kube-system/coredns-787d4945fb-n77tj"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.920777 7260 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1976b181-14ab-48a2-bb64-2eb3b1ecf436-xtables-lock\") pod \"kube-proxy-89rb5\" (UID: \"1976b181-14ab-48a2-bb64-2eb3b1ecf436\") " pod="kube-system/kube-proxy-89rb5"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.921139 7260 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nw72\" (UniqueName: \"kubernetes.io/projected/e63f9141-89ed-4e4d-b1aa-86ad76074f81-kube-api-access-2nw72\") pod \"coredns-787d4945fb-n77tj\" (UID: \"e63f9141-89ed-4e4d-b1aa-86ad76074f81\") " pod="kube-system/coredns-787d4945fb-n77tj"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.921275 7260 reconciler.go:41] "Reconciler: start to sync state"
Mar 07 18:43:10 pause-763583 kubelet[7260]: I0307 18:43:10.049082 7260 request.go:690] Waited for 1.021929352s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token
Mar 07 18:43:10 pause-763583 kubelet[7260]: I0307 18:43:10.339903 7260 scope.go:115] "RemoveContainer" containerID="ada79eb25afeafa814e89c049a7d167866ebe9d2b5feba46d73d8463af7416fb"
Mar 07 18:43:11 pause-763583 kubelet[7260]: I0307 18:43:11.289083 7260 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ef158124e5c02451279de07c2a084c4f41fae664112881c1c1c8a56f19a9872"
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-763583 -n pause-763583
helpers_test.go:261: (dbg) Run: kubectl --context pause-763583 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p pause-763583 -n pause-763583
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p pause-763583 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-763583 logs -n 25: (1.301317193s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs:
-- stdout --
*
* ==> Audit <==
* |---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
| ssh | -p cilium-114236 sudo iptables | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | -t nat -L -n -v | | | | | |
| ssh | -p cilium-114236 sudo | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | systemctl status kubelet --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p cilium-114236 sudo | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | systemctl cat kubelet | | | | | |
| | --no-pager | | | | | |
| ssh | -p cilium-114236 sudo | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | journalctl -xeu kubelet --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p cilium-114236 sudo cat | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | /etc/kubernetes/kubelet.conf | | | | | |
| ssh | -p cilium-114236 sudo cat | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | /var/lib/kubelet/config.yaml | | | | | |
| ssh | -p cilium-114236 sudo | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | systemctl status docker --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p cilium-114236 sudo | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | systemctl cat docker | | | | | |
| | --no-pager | | | | | |
| ssh | -p cilium-114236 sudo cat | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | /etc/docker/daemon.json | | | | | |
| ssh | -p cilium-114236 sudo docker | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | system info | | | | | |
| ssh | -p cilium-114236 sudo | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | systemctl status cri-docker | | | | | |
| | --all --full --no-pager | | | | | |
| ssh | -p cilium-114236 sudo | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | systemctl cat cri-docker | | | | | |
| | --no-pager | | | | | |
| ssh | -p cilium-114236 sudo cat | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | /etc/systemd/system/cri-docker.service.d/10-cni.conf | | | | | |
| ssh | -p cilium-114236 sudo cat | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | /usr/lib/systemd/system/cri-docker.service | | | | | |
| ssh | -p cilium-114236 sudo | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | cri-dockerd --version | | | | | |
| ssh | -p cilium-114236 sudo | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | systemctl status containerd | | | | | |
| | --all --full --no-pager | | | | | |
| ssh | -p cilium-114236 sudo | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | systemctl cat containerd | | | | | |
| | --no-pager | | | | | |
| ssh | -p cilium-114236 sudo cat | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | /lib/systemd/system/containerd.service | | | | | |
| ssh | -p cilium-114236 sudo cat | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | /etc/containerd/config.toml | | | | | |
| ssh | -p cilium-114236 sudo | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | containerd config dump | | | | | |
| ssh | -p cilium-114236 sudo | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | systemctl status crio --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p cilium-114236 sudo | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | systemctl cat crio --no-pager | | | | | |
| ssh | -p cilium-114236 sudo find | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | /etc/crio -type f -exec sh -c | | | | | |
| | 'echo {}; cat {}' \; | | | | | |
| ssh | -p cilium-114236 sudo crio | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| | config | | | | | |
| delete | -p cilium-114236 | cilium-114236 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | 07 Mar 23 18:43 UTC |
|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/03/07 18:43:05
Running on machine: ubuntu-20-agent-7
Binary: Built with gc go1.20.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0307 18:43:05.371585 35824 out.go:296] Setting OutFile to fd 1 ...
I0307 18:43:05.371756 35824 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0307 18:43:05.371759 35824 out.go:309] Setting ErrFile to fd 2...
I0307 18:43:05.371763 35824 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0307 18:43:05.371866 35824 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15985-4059/.minikube/bin
I0307 18:43:05.372454 35824 out.go:303] Setting JSON to false
I0307 18:43:05.373403 35824 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5138,"bootTime":1678209448,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0307 18:43:05.373476 35824 start.go:135] virtualization: kvm guest
I0307 18:43:05.376845 35824 out.go:177] * [NoKubernetes-015933] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
I0307 18:43:05.378361 35824 out.go:177] - MINIKUBE_LOCATION=15985
I0307 18:43:05.378425 35824 notify.go:220] Checking for updates...
I0307 18:43:05.379782 35824 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0307 18:43:05.381143 35824 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15985-4059/kubeconfig
I0307 18:43:05.382526 35824 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-4059/.minikube
I0307 18:43:05.383873 35824 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0307 18:43:05.385287 35824 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0307 18:43:05.387069 35824 config.go:182] Loaded profile config "NoKubernetes-015933": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
I0307 18:43:05.387659 35824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0307 18:43:05.387714 35824 main.go:141] libmachine: Launching plugin server for driver kvm2
I0307 18:43:05.407372 35824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34093
I0307 18:43:05.407784 35824 main.go:141] libmachine: () Calling .GetVersion
I0307 18:43:05.408457 35824 main.go:141] libmachine: Using API Version 1
I0307 18:43:05.408478 35824 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 18:43:05.408795 35824 main.go:141] libmachine: () Calling .GetMachineName
I0307 18:43:05.408996 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .DriverName
I0307 18:43:05.409176 35824 start.go:1652] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
I0307 18:43:05.409213 35824 driver.go:365] Setting default libvirt URI to qemu:///system
I0307 18:43:05.409657 35824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0307 18:43:05.409696 35824 main.go:141] libmachine: Launching plugin server for driver kvm2
I0307 18:43:05.424475 35824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35675
I0307 18:43:05.424900 35824 main.go:141] libmachine: () Calling .GetVersion
I0307 18:43:05.425494 35824 main.go:141] libmachine: Using API Version 1
I0307 18:43:05.425520 35824 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 18:43:05.425845 35824 main.go:141] libmachine: () Calling .GetMachineName
I0307 18:43:05.426029 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .DriverName
I0307 18:43:05.464875 35824 out.go:177] * Using the kvm2 driver based on existing profile
I0307 18:43:05.466733 35824 start.go:296] selected driver: kvm2
I0307 18:43:05.466741 35824 start.go:857] validating driver "kvm2" against &{Name:NoKubernetes-015933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-015933 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.31 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0307 18:43:05.466877 35824 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0307 18:43:05.467160 35824 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0307 18:43:05.467237 35824 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15985-4059/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0307 18:43:05.483029 35824 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.29.0
I0307 18:43:05.484097 35824 cni.go:84] Creating CNI manager for ""
I0307 18:43:05.484123 35824 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0307 18:43:05.484134 35824 start_flags.go:319] config:
{Name:NoKubernetes-015933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-015933 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.31 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0307 18:43:05.484298 35824 iso.go:125] acquiring lock: {Name:mkf75c329a61b8189e3f3e4bd561d5125dafa20c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0307 18:43:05.486734 35824 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-015933
I0307 18:43:05.488359 35824 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime docker
W0307 18:43:05.518932 35824 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-docker-overlay2-amd64.tar.lz4 status code: 404
I0307 18:43:05.519140 35824 profile.go:148] Saving config to /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/NoKubernetes-015933/config.json ...
I0307 18:43:05.519434 35824 cache.go:193] Successfully downloaded all kic artifacts
I0307 18:43:05.519476 35824 start.go:364] acquiring machines lock for NoKubernetes-015933: {Name:mkdc620a3744ce597744f8ea42dba23b3f56e106 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0307 18:43:05.519550 35824 start.go:368] acquired machines lock for "NoKubernetes-015933" in 47.24µs
I0307 18:43:05.519567 35824 start.go:96] Skipping create...Using existing machine configuration
I0307 18:43:05.519572 35824 fix.go:55] fixHost starting:
I0307 18:43:05.519930 35824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0307 18:43:05.519977 35824 main.go:141] libmachine: Launching plugin server for driver kvm2
I0307 18:43:05.534522 35824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42689
I0307 18:43:05.535027 35824 main.go:141] libmachine: () Calling .GetVersion
I0307 18:43:05.535599 35824 main.go:141] libmachine: Using API Version 1
I0307 18:43:05.535613 35824 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 18:43:05.535938 35824 main.go:141] libmachine: () Calling .GetMachineName
I0307 18:43:05.536115 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .DriverName
I0307 18:43:05.536284 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .GetState
I0307 18:43:05.538277 35824 fix.go:103] recreateIfNeeded on NoKubernetes-015933: state=Stopped err=<nil>
I0307 18:43:05.538297 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .DriverName
W0307 18:43:05.538459 35824 fix.go:129] unexpected machine state, will restart: <nil>
I0307 18:43:05.540765 35824 out.go:177] * Restarting existing kvm2 VM for "NoKubernetes-015933" ...
I0307 18:43:06.001163 34732 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/gvisor-addon_2: (5.969995878s)
I0307 18:43:06.001181 34732 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15985-4059/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 from cache
I0307 18:43:06.001209 34732 cache_images.go:123] Successfully loaded all cached images
I0307 18:43:06.001218 34732 cache_images.go:92] LoadImages completed in 7.201299799s
I0307 18:43:06.001224 34732 cache_images.go:262] succeeded pushing to: gvisor-579626
I0307 18:43:06.001228 34732 cache_images.go:263] failed pushing to:
I0307 18:43:06.001250 34732 main.go:141] libmachine: Making call to close driver server
I0307 18:43:06.001260 34732 main.go:141] libmachine: (gvisor-579626) Calling .Close
I0307 18:43:06.001545 34732 main.go:141] libmachine: Successfully made call to close driver server
I0307 18:43:06.001566 34732 main.go:141] libmachine: Making call to close connection to plugin binary
I0307 18:43:06.001577 34732 main.go:141] libmachine: Making call to close driver server
I0307 18:43:06.001586 34732 main.go:141] libmachine: (gvisor-579626) Calling .Close
I0307 18:43:06.001783 34732 main.go:141] libmachine: Successfully made call to close driver server
I0307 18:43:06.001801 34732 main.go:141] libmachine: Making call to close connection to plugin binary
I0307 18:43:06.001818 34732 start.go:233] waiting for cluster config update ...
I0307 18:43:06.001830 34732 start.go:242] writing updated cluster config ...
I0307 18:43:06.002146 34732 ssh_runner.go:195] Run: rm -f paused
I0307 18:43:06.065245 34732 start.go:555] kubectl: 1.26.2, cluster: 1.26.2 (minor skew: 0)
I0307 18:43:06.067757 34732 out.go:177] * Done! kubectl is now configured to use "gvisor-579626" cluster and "default" namespace by default
I0307 18:43:05.844929 35395 api_server.go:268] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0307 18:43:06.345721 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:43:05.939340 35647 ssh_runner.go:235] Completed: sudo systemctl restart docker: (8.80125487s)
I0307 18:43:05.939404 35647 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0307 18:43:06.065297 35647 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0307 18:43:06.193363 35647 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0307 18:43:06.332683 35647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 18:43:06.469120 35647 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0307 18:43:06.495698 35647 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0307 18:43:06.495762 35647 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0307 18:43:06.505061 35647 start.go:553] Will wait 60s for crictl version
I0307 18:43:06.505123 35647 ssh_runner.go:195] Run: which crictl
I0307 18:43:06.510228 35647 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0307 18:43:06.627109 35647 start.go:569] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.23
RuntimeApiVersion: v1alpha2
I0307 18:43:06.627173 35647 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0307 18:43:06.670729 35647 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0307 18:43:06.716707 35647 out.go:204] * Preparing Kubernetes v1.26.2 on Docker 20.10.23 ...
I0307 18:43:06.716766 35647 main.go:141] libmachine: (cert-expiration-147721) Calling .GetIP
I0307 18:43:06.719717 35647 main.go:141] libmachine: (cert-expiration-147721) DBG | domain cert-expiration-147721 has defined MAC address 52:54:00:6a:e4:fa in network mk-cert-expiration-147721
I0307 18:43:06.720084 35647 main.go:141] libmachine: (cert-expiration-147721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:e4:fa", ip: ""} in network mk-cert-expiration-147721: {Iface:virbr4 ExpiryTime:2023-03-07 19:39:04 +0000 UTC Type:0 Mac:52:54:00:6a:e4:fa Iaid: IPaddr:192.168.72.251 Prefix:24 Hostname:cert-expiration-147721 Clientid:01:52:54:00:6a:e4:fa}
I0307 18:43:06.720107 35647 main.go:141] libmachine: (cert-expiration-147721) DBG | domain cert-expiration-147721 has defined IP address 192.168.72.251 and MAC address 52:54:00:6a:e4:fa in network mk-cert-expiration-147721
I0307 18:43:06.720307 35647 ssh_runner.go:195] Run: grep 192.168.72.1 host.minikube.internal$ /etc/hosts
I0307 18:43:06.726347 35647 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
I0307 18:43:06.726416 35647 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0307 18:43:06.755996 35647 docker.go:630] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0307 18:43:06.756010 35647 docker.go:560] Images already preloaded, skipping extraction
I0307 18:43:06.756073 35647 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0307 18:43:06.787617 35647 docker.go:630] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0307 18:43:06.787630 35647 cache_images.go:84] Images are preloaded, skipping loading
I0307 18:43:06.787697 35647 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0307 18:43:06.828544 35647 cni.go:84] Creating CNI manager for ""
I0307 18:43:06.828567 35647 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0307 18:43:06.828577 35647 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0307 18:43:06.828595 35647 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.251 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-147721 NodeName:cert-expiration-147721 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0307 18:43:06.828772 35647 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.72.251
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "cert-expiration-147721"
kubeletExtraArgs:
node-ip: 192.168.72.251
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.72.251"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0307 18:43:06.828862 35647 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=cert-expiration-147721 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.251
[Install]
config:
{KubernetesVersion:v1.26.2 ClusterName:cert-expiration-147721 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0307 18:43:06.828953 35647 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
I0307 18:43:06.841781 35647 binaries.go:44] Found k8s binaries, skipping transfer
I0307 18:43:06.841850 35647 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0307 18:43:06.851383 35647 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (456 bytes)
I0307 18:43:06.869504 35647 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0307 18:43:06.887699 35647 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
I0307 18:43:06.907514 35647 ssh_runner.go:195] Run: grep 192.168.72.251 control-plane.minikube.internal$ /etc/hosts
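
The grep above checks whether /etc/hosts already maps control-plane.minikube.internal to the node IP before rewriting the file. The same check in Go, as a sketch (hostsHas is a hypothetical helper):

// Sketch: verify an /etc/hosts entry maps the expected IP to a hostname.
package main

import (
	"fmt"
	"os"
	"strings"
)

func hostsHas(ip, name string) (bool, error) {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == ip {
			for _, h := range fields[1:] {
				if h == name {
					return true, nil
				}
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hostsHas("192.168.72.251", "control-plane.minikube.internal")
	fmt.Println(ok, err)
}
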
I0307 18:43:06.911814 35647 certs.go:56] Setting up /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721 for IP: 192.168.72.251
I0307 18:43:06.911839 35647 certs.go:186] acquiring lock for shared ca certs: {Name:mk09f52d1213ecfb949f8e2d1f9b4b7cd7194c22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 18:43:06.912023 35647 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15985-4059/.minikube/ca.key
I0307 18:43:06.912090 35647 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15985-4059/.minikube/proxy-client-ca.key
W0307 18:43:06.912259 35647 out.go:239] ! Certificate client.crt has expired. Generating a new one...
I0307 18:43:06.912285 35647 certs.go:540] cert expired /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/client.crt: expiration: 2023-03-07 18:42:27 +0000 UTC, now: 2023-03-07 18:43:06.912279559 +0000 UTC m=+12.649020806
I0307 18:43:06.912412 35647 certs.go:315] generating minikube-user signed cert: /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/client.key
I0307 18:43:06.912430 35647 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/client.crt with IP's: []
I0307 18:43:07.238692 35647 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/client.crt ...
I0307 18:43:07.238705 35647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/client.crt: {Name:mk7dd6d137a9fac9aa9dc5b8ed2cee5115af5368 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 18:43:07.238877 35647 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/client.key ...
I0307 18:43:07.238885 35647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/client.key: {Name:mk3e0e628efa5eede096b1091f5bbf3375f267b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
W0307 18:43:07.239165 35647 out.go:239] ! Certificate apiserver.crt.68790b64 has expired. Generating a new one...
I0307 18:43:07.239193 35647 certs.go:540] cert expired /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/apiserver.crt.68790b64: expiration: 2023-03-07 18:42:28 +0000 UTC, now: 2023-03-07 18:43:07.239186072 +0000 UTC m=+12.975927317
I0307 18:43:07.239304 35647 certs.go:315] generating minikube signed cert: /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/apiserver.key.68790b64
I0307 18:43:07.239316 35647 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/apiserver.crt.68790b64 with IP's: [192.168.72.251 10.96.0.1 127.0.0.1 10.0.0.1]
I0307 18:43:07.393175 35647 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/apiserver.crt.68790b64 ...
I0307 18:43:07.393188 35647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/apiserver.crt.68790b64: {Name:mk35e086ffb5fbed77fb9b8f548e75dd765b6d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 18:43:07.393315 35647 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/apiserver.key.68790b64 ...
I0307 18:43:07.393327 35647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/apiserver.key.68790b64: {Name:mkf94da4452cbc1aea028a2f24482af737c1be79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 18:43:07.393386 35647 certs.go:333] copying /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/apiserver.crt.68790b64 -> /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/apiserver.crt
I0307 18:43:07.393548 35647 certs.go:337] copying /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/apiserver.key.68790b64 -> /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/apiserver.key
W0307 18:43:07.393782 35647 out.go:239] ! Certificate proxy-client.crt has expired. Generating a new one...
I0307 18:43:07.393803 35647 certs.go:540] cert expired /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/proxy-client.crt: expiration: 2023-03-07 18:42:28 +0000 UTC, now: 2023-03-07 18:43:07.393798378 +0000 UTC m=+13.130539625
I0307 18:43:07.393879 35647 certs.go:315] generating aggregator signed cert: /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/proxy-client.key
I0307 18:43:07.393889 35647 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/proxy-client.crt with IP's: []
I0307 18:43:07.624908 35647 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/proxy-client.crt ...
I0307 18:43:07.624924 35647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/proxy-client.crt: {Name:mkb44c5bbc9dd5fd01183cdbb904d2b334c279c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 18:43:07.625080 35647 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/proxy-client.key ...
I0307 18:43:07.625089 35647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/proxy-client.key: {Name:mk9bc3c6f41698598c96e2c121ea1e42e977e6ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
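
Each of the three warnings above follows the same pattern: certs.go:540 parses the existing certificate, compares its expiration against the current time, and regenerates it if it has lapsed. A minimal sketch of that decision using only the standard library (the path is a placeholder, not minikube's layout):

// Sketch: detect an expired PEM certificate, as the "cert expired" lines above do.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func certExpired(path string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().After(cert.NotAfter), nil
}

func main() {
	expired, err := certExpired("client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expired:", expired)
}
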
I0307 18:43:07.625296 35647 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4059/.minikube/certs/11114.pem (1338 bytes)
W0307 18:43:07.625334 35647 certs.go:397] ignoring /home/jenkins/minikube-integration/15985-4059/.minikube/certs/11114_empty.pem, impossibly tiny 0 bytes
I0307 18:43:07.625340 35647 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4059/.minikube/certs/ca-key.pem (1675 bytes)
I0307 18:43:07.625361 35647 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4059/.minikube/certs/ca.pem (1078 bytes)
I0307 18:43:07.625380 35647 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4059/.minikube/certs/cert.pem (1123 bytes)
I0307 18:43:07.625399 35647 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4059/.minikube/certs/key.pem (1675 bytes)
I0307 18:43:07.625437 35647 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4059/.minikube/files/etc/ssl/certs/111142.pem (1708 bytes)
I0307 18:43:07.625991 35647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0307 18:43:07.654818 35647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0307 18:43:07.680164 35647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0307 18:43:07.710330 35647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/cert-expiration-147721/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0307 18:43:07.739095 35647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0307 18:43:07.764762 35647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0307 18:43:07.795399 35647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0307 18:43:07.820196 35647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0307 18:43:07.847252 35647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/certs/11114.pem --> /usr/share/ca-certificates/11114.pem (1338 bytes)
I0307 18:43:07.873684 35647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/files/etc/ssl/certs/111142.pem --> /usr/share/ca-certificates/111142.pem (1708 bytes)
I0307 18:43:07.900738 35647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0307 18:43:07.935826 35647 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0307 18:43:07.953838 35647 ssh_runner.go:195] Run: openssl version
I0307 18:43:07.960262 35647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0307 18:43:07.972972 35647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0307 18:43:07.978184 35647 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 7 18:02 /usr/share/ca-certificates/minikubeCA.pem
I0307 18:43:07.978257 35647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0307 18:43:07.984561 35647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0307 18:43:07.994205 35647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11114.pem && ln -fs /usr/share/ca-certificates/11114.pem /etc/ssl/certs/11114.pem"
I0307 18:43:08.004804 35647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11114.pem
I0307 18:43:08.010140 35647 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 7 18:06 /usr/share/ca-certificates/11114.pem
I0307 18:43:08.010190 35647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11114.pem
I0307 18:43:08.016237 35647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11114.pem /etc/ssl/certs/51391683.0"
I0307 18:43:08.025251 35647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111142.pem && ln -fs /usr/share/ca-certificates/111142.pem /etc/ssl/certs/111142.pem"
I0307 18:43:08.035839 35647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111142.pem
I0307 18:43:08.041071 35647 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 7 18:06 /usr/share/ca-certificates/111142.pem
I0307 18:43:08.041115 35647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111142.pem
I0307 18:43:08.047366 35647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111142.pem /etc/ssl/certs/3ec20f2e.0"
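
The openssl x509 -hash / ln -fs pairs above install each CA into the OpenSSL trust directory: the certificate's subject hash (e.g. b5213941) becomes the symlink name <hash>.0 that OpenSSL-based clients look up under /etc/ssl/certs. A sketch of the same dance driven from Go, assuming openssl is on PATH; the paths are illustrative:

// Sketch: compute a certificate's OpenSSL subject hash and install the <hash>.0 symlink.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mirror ln -fs: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
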
I0307 18:43:08.066653 35647 kubeadm.go:401] StartCluster: {Name:cert-expiration-147721 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:cert-expiration-147721 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.251 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0307 18:43:08.066797 35647 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0307 18:43:08.181439 35647 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0307 18:43:08.213439 35647 kubeadm.go:416] found existing configuration files, will attempt cluster restart
I0307 18:43:08.213450 35647 kubeadm.go:633] restartCluster start
I0307 18:43:08.213506 35647 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0307 18:43:08.251497 35647 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0307 18:43:08.252487 35647 kubeconfig.go:92] found "cert-expiration-147721" server: "https://192.168.72.251:8443"
I0307 18:43:08.255052 35647 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0307 18:43:08.269434 35647 api_server.go:165] Checking apiserver status ...
I0307 18:43:08.269493 35647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:43:08.319926 35647 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:43:08.820581 35647 api_server.go:165] Checking apiserver status ...
I0307 18:43:08.820650 35647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:43:08.841340 35647 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:43:08.331961 35395 api_server.go:278] https://192.168.61.47:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0307 18:43:08.331991 35395 api_server.go:102] status: https://192.168.61.47:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0307 18:43:08.345106 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:43:08.419012 35395 api_server.go:278] https://192.168.61.47:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0307 18:43:08.419042 35395 api_server.go:102] status: https://192.168.61.47:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0307 18:43:08.845575 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:43:08.851385 35395 api_server.go:278] https://192.168.61.47:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0307 18:43:08.851415 35395 api_server.go:102] status: https://192.168.61.47:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0307 18:43:09.345055 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:43:09.357002 35395 api_server.go:278] https://192.168.61.47:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0307 18:43:09.357043 35395 api_server.go:102] status: https://192.168.61.47:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0307 18:43:09.845368 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:43:09.852655 35395 api_server.go:278] https://192.168.61.47:8443/healthz returned 200:
ok
I0307 18:43:09.864843 35395 api_server.go:140] control plane version: v1.26.2
I0307 18:43:09.864865 35395 api_server.go:130] duration metric: took 9.020231544s to wait for apiserver health ...
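
The healthz probe above walks through three states: 403 while anonymous access is still forbidden, 500 while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are pending, and finally 200 "ok". A reduced sketch of that polling loop; the InsecureSkipVerify transport is an assumption for brevity, a real client would trust the cluster CA instead:

// Sketch: poll an apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "ok"
			}
			// 403 (anonymous user) and 500 (post-start hooks pending) mean keep waiting.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthy("https://192.168.61.47:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
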
I0307 18:43:09.864873 35395 cni.go:84] Creating CNI manager for ""
I0307 18:43:09.864883 35395 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0307 18:43:09.866920 35395 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0307 18:43:05.542221 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .Start
I0307 18:43:05.542407 35824 main.go:141] libmachine: (NoKubernetes-015933) Ensuring networks are active...
I0307 18:43:05.543087 35824 main.go:141] libmachine: (NoKubernetes-015933) Ensuring network default is active
I0307 18:43:05.543533 35824 main.go:141] libmachine: (NoKubernetes-015933) Ensuring network mk-NoKubernetes-015933 is active
I0307 18:43:05.543944 35824 main.go:141] libmachine: (NoKubernetes-015933) Getting domain xml...
I0307 18:43:05.544794 35824 main.go:141] libmachine: (NoKubernetes-015933) Creating domain...
I0307 18:43:06.993870 35824 main.go:141] libmachine: (NoKubernetes-015933) Waiting to get IP...
I0307 18:43:06.994827 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:06.995260 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | unable to find current IP address of domain NoKubernetes-015933 in network mk-NoKubernetes-015933
I0307 18:43:06.995381 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | I0307 18:43:06.995259 35858 retry.go:31] will retry after 236.851886ms: waiting for machine to come up
I0307 18:43:07.233789 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:07.234431 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | unable to find current IP address of domain NoKubernetes-015933 in network mk-NoKubernetes-015933
I0307 18:43:07.234452 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | I0307 18:43:07.234380 35858 retry.go:31] will retry after 278.375019ms: waiting for machine to come up
I0307 18:43:07.515011 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:07.515526 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | unable to find current IP address of domain NoKubernetes-015933 in network mk-NoKubernetes-015933
I0307 18:43:07.515549 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | I0307 18:43:07.515494 35858 retry.go:31] will retry after 400.10884ms: waiting for machine to come up
I0307 18:43:07.919862 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:07.920356 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | unable to find current IP address of domain NoKubernetes-015933 in network mk-NoKubernetes-015933
I0307 18:43:07.920379 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | I0307 18:43:07.920282 35858 retry.go:31] will retry after 473.496382ms: waiting for machine to come up
I0307 18:43:08.394991 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:08.395902 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | unable to find current IP address of domain NoKubernetes-015933 in network mk-NoKubernetes-015933
I0307 18:43:08.395925 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | I0307 18:43:08.395747 35858 retry.go:31] will retry after 718.678081ms: waiting for machine to come up
I0307 18:43:09.116025 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:09.116516 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | unable to find current IP address of domain NoKubernetes-015933 in network mk-NoKubernetes-015933
I0307 18:43:09.116534 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | I0307 18:43:09.116467 35858 retry.go:31] will retry after 712.04316ms: waiting for machine to come up
I0307 18:43:09.830438 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:09.831101 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | unable to find current IP address of domain NoKubernetes-015933 in network mk-NoKubernetes-015933
I0307 18:43:09.831116 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | I0307 18:43:09.831034 35858 retry.go:31] will retry after 815.034437ms: waiting for machine to come up
I0307 18:43:09.868328 35395 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0307 18:43:09.880058 35395 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
I0307 18:43:09.905795 35395 system_pods.go:43] waiting for kube-system pods to appear ...
I0307 18:43:09.917255 35395 system_pods.go:59] 6 kube-system pods found
I0307 18:43:09.917295 35395 system_pods.go:61] "coredns-787d4945fb-n77tj" [e63f9141-89ed-4e4d-b1aa-86ad76074f81] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0307 18:43:09.917306 35395 system_pods.go:61] "etcd-pause-763583" [1443cb3f-e768-40cb-8959-b07a77a9b089] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0307 18:43:09.917314 35395 system_pods.go:61] "kube-apiserver-pause-763583" [21663669-cac7-48c2-9107-e69979cee194] Running
I0307 18:43:09.917324 35395 system_pods.go:61] "kube-controller-manager-pause-763583" [e00cf98f-3435-4f3c-b91c-c00a0b794b06] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0307 18:43:09.917331 35395 system_pods.go:61] "kube-proxy-89rb5" [1976b181-14ab-48a2-bb64-2eb3b1ecf436] Running
I0307 18:43:09.917340 35395 system_pods.go:61] "kube-scheduler-pause-763583" [b495d084-0581-4e14-917f-e44a0bf077df] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0307 18:43:09.917348 35395 system_pods.go:74] duration metric: took 11.53195ms to wait for pod list to return data ...
I0307 18:43:09.917360 35395 node_conditions.go:102] verifying NodePressure condition ...
I0307 18:43:09.921137 35395 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0307 18:43:09.921167 35395 node_conditions.go:123] node cpu capacity is 2
I0307 18:43:09.921179 35395 node_conditions.go:105] duration metric: took 3.813699ms to run NodePressure ...
I0307 18:43:09.921196 35395 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0307 18:43:10.361537 35395 kubeadm.go:769] waiting for restarted kubelet to initialise ...
I0307 18:43:10.372040 35395 kubeadm.go:784] kubelet initialised
I0307 18:43:10.372068 35395 kubeadm.go:785] duration metric: took 10.499059ms waiting for restarted kubelet to initialise ...
I0307 18:43:10.372079 35395 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0307 18:43:10.378716 35395 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-n77tj" in "kube-system" namespace to be "Ready" ...
I0307 18:43:09.320740 35647 api_server.go:165] Checking apiserver status ...
I0307 18:43:09.320817 35647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:43:09.351484 35647 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:43:09.820591 35647 api_server.go:165] Checking apiserver status ...
I0307 18:43:09.820663 35647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:43:09.839689 35647 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:43:10.320286 35647 api_server.go:165] Checking apiserver status ...
I0307 18:43:10.320357 35647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:43:10.345958 35647 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6839/cgroup
I0307 18:43:10.368813 35647 api_server.go:181] apiserver freezer: "6:freezer:/kubepods/burstable/pod27f9e58fc6ec6edf1ea39105aa6696fa/6c03d90a8397a8fa5aa39be0711b590bb5b798d9382f75eed513cd2c1fa9ce4c"
I0307 18:43:10.368868 35647 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod27f9e58fc6ec6edf1ea39105aa6696fa/6c03d90a8397a8fa5aa39be0711b590bb5b798d9382f75eed513cd2c1fa9ce4c/freezer.state
I0307 18:43:10.392712 35647 api_server.go:203] freezer state: "THAWED"
I0307 18:43:10.392729 35647 api_server.go:252] Checking apiserver healthz at https://192.168.72.251:8443/healthz ...
I0307 18:43:10.393233 35647 api_server.go:268] stopped: https://192.168.72.251:8443/healthz: Get "https://192.168.72.251:8443/healthz": dial tcp 192.168.72.251:8443: connect: connection refused
I0307 18:43:10.393292 35647 retry.go:31] will retry after 269.021468ms: state is "Stopped"
I0307 18:43:10.662692 35647 api_server.go:252] Checking apiserver healthz at https://192.168.72.251:8443/healthz ...
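
Before probing healthz, process 35647 above resolves the apiserver's freezer cgroup from /proc/<pid>/cgroup and reads freezer.state, so a paused ("FROZEN") container is not mistaken for a live one. A sketch of that check; the PID is the one from this log and the cgroup v1 layout is assumed:

// Sketch: find a process's freezer cgroup and report its state (e.g. "THAWED").
package main

import (
	"fmt"
	"os"
	"strings"
)

func freezerState(pid int) (string, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(string(data), "\n") {
		// cgroup v1 lines look like "6:freezer:/kubepods/burstable/...".
		parts := strings.SplitN(line, ":", 3)
		if len(parts) == 3 && parts[1] == "freezer" {
			state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
			if err != nil {
				return "", err
			}
			return strings.TrimSpace(string(state)), nil
		}
	}
	return "", fmt.Errorf("no freezer cgroup for pid %d", pid)
}

func main() {
	s, err := freezerState(6839)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(s) // "THAWED" in the log above
}
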
I0307 18:43:10.648260 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:10.648812 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | unable to find current IP address of domain NoKubernetes-015933 in network mk-NoKubernetes-015933
I0307 18:43:10.648836 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | I0307 18:43:10.648763 35858 retry.go:31] will retry after 902.381464ms: waiting for machine to come up
I0307 18:43:11.552569 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:11.553001 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | unable to find current IP address of domain NoKubernetes-015933 in network mk-NoKubernetes-015933
I0307 18:43:11.553050 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | I0307 18:43:11.552975 35858 retry.go:31] will retry after 1.729563855s: waiting for machine to come up
I0307 18:43:13.284547 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:13.285003 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | unable to find current IP address of domain NoKubernetes-015933 in network mk-NoKubernetes-015933
I0307 18:43:13.285020 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | I0307 18:43:13.284952 35858 retry.go:31] will retry after 1.828287492s: waiting for machine to come up
I0307 18:43:15.115893 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:15.116428 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | unable to find current IP address of domain NoKubernetes-015933 in network mk-NoKubernetes-015933
I0307 18:43:15.116451 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | I0307 18:43:15.116366 35858 retry.go:31] will retry after 2.036951585s: waiting for machine to come up
I0307 18:43:12.397973 35395 pod_ready.go:102] pod "coredns-787d4945fb-n77tj" in "kube-system" namespace has status "Ready":"False"
I0307 18:43:13.893974 35395 pod_ready.go:92] pod "coredns-787d4945fb-n77tj" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:13.894011 35395 pod_ready.go:81] duration metric: took 3.515265891s waiting for pod "coredns-787d4945fb-n77tj" in "kube-system" namespace to be "Ready" ...
I0307 18:43:13.894023 35395 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:15.906677 35395 pod_ready.go:102] pod "etcd-pause-763583" in "kube-system" namespace has status "Ready":"False"
I0307 18:43:15.663672 35647 api_server.go:268] stopped: https://192.168.72.251:8443/healthz: Get "https://192.168.72.251:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0307 18:43:15.663706 35647 retry.go:31] will retry after 291.952467ms: state is "Stopped"
I0307 18:43:15.956265 35647 api_server.go:252] Checking apiserver healthz at https://192.168.72.251:8443/healthz ...
I0307 18:43:17.154512 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:17.155039 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | unable to find current IP address of domain NoKubernetes-015933 in network mk-NoKubernetes-015933
I0307 18:43:17.155060 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | I0307 18:43:17.154983 35858 retry.go:31] will retry after 3.605137674s: waiting for machine to come up
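
The retry.go:31 lines above, for both the apiserver probe and the NoKubernetes-015933 machine, show the same pattern: poll, and on failure sleep for a delay that grows with each attempt. A self-contained sketch of that loop; waitForIP and its 1.5x growth factor are illustrative, not libmachine's implementation:

// Sketch: retry with growing backoff until a machine reports an IP.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitForIP(getIP func() (string, error), maxWait time.Duration) (string, error) {
	backoff := 250 * time.Millisecond
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		if ip, err := getIP(); err == nil && ip != "" {
			return ip, nil
		}
		fmt.Printf("will retry after %s: waiting for machine to come up\n", backoff)
		time.Sleep(backoff)
		backoff = backoff * 3 / 2 // grow roughly 1.5x per attempt
	}
	return "", errors.New("machine never reported an IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.50.31", nil
	}, time.Minute)
	fmt.Println(ip, err)
}
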
I0307 18:43:17.907680 35395 pod_ready.go:102] pod "etcd-pause-763583" in "kube-system" namespace has status "Ready":"False"
I0307 18:43:20.408465 35395 pod_ready.go:102] pod "etcd-pause-763583" in "kube-system" namespace has status "Ready":"False"
I0307 18:43:20.928474 35395 pod_ready.go:92] pod "etcd-pause-763583" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:20.928514 35395 pod_ready.go:81] duration metric: took 7.034481751s waiting for pod "etcd-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.928528 35395 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.941150 35395 pod_ready.go:92] pod "kube-apiserver-pause-763583" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:20.941180 35395 pod_ready.go:81] duration metric: took 12.642904ms waiting for pod "kube-apiserver-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.941195 35395 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.957243 35395 pod_ready.go:92] pod "kube-controller-manager-pause-763583" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:20.957270 35395 pod_ready.go:81] duration metric: took 16.065823ms waiting for pod "kube-controller-manager-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.957283 35395 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-89rb5" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.965242 35395 pod_ready.go:92] pod "kube-proxy-89rb5" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:20.965267 35395 pod_ready.go:81] duration metric: took 7.976082ms waiting for pod "kube-proxy-89rb5" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.965306 35395 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.975928 35395 pod_ready.go:92] pod "kube-scheduler-pause-763583" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:20.975957 35395 pod_ready.go:81] duration metric: took 10.639966ms waiting for pod "kube-scheduler-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.975967 35395 pod_ready.go:38] duration metric: took 10.603878883s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
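
pod_ready.go above keys off the PodReady condition, which is why pods can be "Running" yet counted as not Ready (ContainersNotReady) until their containers pass readiness probes. A sketch of the same wait with client-go; the kubeconfig source and pod name are taken from this log for illustration:

// Sketch: wait for a pod's PodReady condition to become "True" via client-go.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady mirrors the check behind pod_ready.go: the PodReady condition
// must be "True", not merely phase Running.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-787d4945fb-n77tj", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for pod to be Ready")
	os.Exit(1)
}
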
I0307 18:43:20.975987 35395 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0307 18:43:20.998476 35395 ops.go:34] apiserver oom_adj: -16
I0307 18:43:20.998505 35395 kubeadm.go:637] restartCluster took 57.365787501s
I0307 18:43:20.998514 35395 kubeadm.go:403] StartCluster complete in 57.398734635s
I0307 18:43:20.998566 35395 settings.go:142] acquiring lock: {Name:mk59ca7946d8ca96e1c1529d6dc9eeaf833467d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 18:43:20.998642 35395 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/15985-4059/kubeconfig
I0307 18:43:21.000304 35395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-4059/kubeconfig: {Name:mkdbb63ccb2062c9fe0a4f6a1ffae1d7c12177ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 18:43:21.001531 35395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0307 18:43:21.001623 35395 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
I0307 18:43:21.004334 35395 out.go:177] * Enabled addons:
I0307 18:43:21.002125 35395 kapi.go:59] client config for pause-763583: &rest.Config{Host:"https://192.168.61.47:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15985-4059/.minikube/profiles/pause-763583/client.crt", KeyFile:"/home/jenkins/minikube-integration/15985-4059/.minikube/profiles/pause-763583/client.key", CAFile:"/home/jenkins/minikube-integration/15985-4059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29a5480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0307 18:43:21.002293 35395 config.go:182] Loaded profile config "pause-763583": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0307 18:43:21.006591 35395 cache.go:107] acquiring lock: {Name:mk4b4b9e8ae74bfe37a64a243ec4cf9219f62ba4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0307 18:43:21.006679 35395 cache.go:115] /home/jenkins/minikube-integration/15985-4059/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
I0307 18:43:21.006696 35395 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/15985-4059/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 119.684µs
I0307 18:43:21.006712 35395 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/15985-4059/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
I0307 18:43:21.006718 35395 cache.go:87] Successfully saved all images to host disk.
I0307 18:43:21.006956 35395 config.go:182] Loaded profile config "pause-763583": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0307 18:43:21.006990 35395 addons.go:499] enable addons completed in 5.361604ms: enabled=[]
I0307 18:43:21.007421 35395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0307 18:43:21.007482 35395 main.go:141] libmachine: Launching plugin server for driver kvm2
I0307 18:43:21.014221 35395 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-763583" context rescaled to 1 replicas
I0307 18:43:21.014275 35395 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.47 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0307 18:43:21.016181 35395 out.go:177] * Verifying Kubernetes components...
I0307 18:43:21.018159 35395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0307 18:43:21.027697 35395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39769
I0307 18:43:21.028242 35395 main.go:141] libmachine: () Calling .GetVersion
I0307 18:43:21.029001 35395 main.go:141] libmachine: Using API Version 1
I0307 18:43:21.029022 35395 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 18:43:21.029419 35395 main.go:141] libmachine: () Calling .GetMachineName
I0307 18:43:21.029621 35395 main.go:141] libmachine: (pause-763583) Calling .GetState
I0307 18:43:21.032078 35395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0307 18:43:21.032110 35395 main.go:141] libmachine: Launching plugin server for driver kvm2
I0307 18:43:21.055351 35395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38507
I0307 18:43:21.056115 35395 main.go:141] libmachine: () Calling .GetVersion
I0307 18:43:21.056974 35395 main.go:141] libmachine: Using API Version 1
I0307 18:43:21.057002 35395 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 18:43:21.057408 35395 main.go:141] libmachine: () Calling .GetMachineName
I0307 18:43:21.057644 35395 main.go:141] libmachine: (pause-763583) Calling .DriverName
I0307 18:43:21.057948 35395 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0307 18:43:21.057985 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHHostname
I0307 18:43:21.061960 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:43:21.062537 35395 main.go:141] libmachine: (pause-763583) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7e:f8", ip: ""} in network mk-pause-763583: {Iface:virbr3 ExpiryTime:2023-03-07 19:40:49 +0000 UTC Type:0 Mac:52:54:00:7d:7e:f8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:pause-763583 Clientid:01:52:54:00:7d:7e:f8}
I0307 18:43:21.062562 35395 main.go:141] libmachine: (pause-763583) DBG | domain pause-763583 has defined IP address 192.168.61.47 and MAC address 52:54:00:7d:7e:f8 in network mk-pause-763583
I0307 18:43:21.062887 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHPort
I0307 18:43:21.064168 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHKeyPath
I0307 18:43:21.064368 35395 main.go:141] libmachine: (pause-763583) Calling .GetSSHUsername
I0307 18:43:21.064473 35395 sshutil.go:53] new ssh client: &{IP:192.168.61.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15985-4059/.minikube/machines/pause-763583/id_rsa Username:docker}
I0307 18:43:21.258566 35395 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0307 18:43:21.258560 35395 node_ready.go:35] waiting up to 6m0s for node "pause-763583" to be "Ready" ...
I0307 18:43:21.258634 35395 docker.go:630] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0307 18:43:21.258659 35395 cache_images.go:84] Images are preloaded, skipping loading
I0307 18:43:21.258669 35395 cache_images.go:262] succeeded pushing to: pause-763583
I0307 18:43:21.258676 35395 cache_images.go:263] failed pushing to:
I0307 18:43:21.258695 35395 main.go:141] libmachine: Making call to close driver server
I0307 18:43:21.258709 35395 main.go:141] libmachine: (pause-763583) Calling .Close
I0307 18:43:21.259058 35395 main.go:141] libmachine: Successfully made call to close driver server
I0307 18:43:21.259078 35395 main.go:141] libmachine: Making call to close connection to plugin binary
I0307 18:43:21.259092 35395 main.go:141] libmachine: Making call to close driver server
I0307 18:43:21.259099 35395 main.go:141] libmachine: (pause-763583) Calling .Close
I0307 18:43:21.259506 35395 main.go:141] libmachine: Successfully made call to close driver server
I0307 18:43:21.259526 35395 main.go:141] libmachine: Making call to close connection to plugin binary
I0307 18:43:21.262793 35395 node_ready.go:49] node "pause-763583" has status "Ready":"True"
I0307 18:43:21.262815 35395 node_ready.go:38] duration metric: took 4.223701ms waiting for node "pause-763583" to be "Ready" ...
I0307 18:43:21.262826 35395 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0307 18:43:21.312065 35395 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-n77tj" in "kube-system" namespace to be "Ready" ...
I0307 18:43:20.957534 35647 api_server.go:268] stopped: https://192.168.72.251:8443/healthz: Get "https://192.168.72.251:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0307 18:43:20.957562 35647 retry.go:31] will retry after 326.602628ms: state is "Stopped"
I0307 18:43:21.285042 35647 api_server.go:252] Checking apiserver healthz at https://192.168.72.251:8443/healthz ...
I0307 18:43:21.703398 35395 pod_ready.go:92] pod "coredns-787d4945fb-n77tj" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:21.703420 35395 pod_ready.go:81] duration metric: took 391.325074ms waiting for pod "coredns-787d4945fb-n77tj" in "kube-system" namespace to be "Ready" ...
I0307 18:43:21.703430 35395 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:22.102894 35395 pod_ready.go:92] pod "etcd-pause-763583" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:22.102915 35395 pod_ready.go:81] duration metric: took 399.479275ms waiting for pod "etcd-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:22.102924 35395 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:22.503718 35395 pod_ready.go:92] pod "kube-apiserver-pause-763583" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:22.503740 35395 pod_ready.go:81] duration metric: took 400.810203ms waiting for pod "kube-apiserver-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:22.503753 35395 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:22.904142 35395 pod_ready.go:92] pod "kube-controller-manager-pause-763583" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:22.904164 35395 pod_ready.go:81] duration metric: took 400.403865ms waiting for pod "kube-controller-manager-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:22.904174 35395 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-89rb5" in "kube-system" namespace to be "Ready" ...
I0307 18:43:23.303244 35395 pod_ready.go:92] pod "kube-proxy-89rb5" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:23.303263 35395 pod_ready.go:81] duration metric: took 399.083446ms waiting for pod "kube-proxy-89rb5" in "kube-system" namespace to be "Ready" ...
I0307 18:43:23.303278 35395 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:23.704268 35395 pod_ready.go:92] pod "kube-scheduler-pause-763583" in "kube-system" namespace has status "Ready":"True"
I0307 18:43:23.704288 35395 pod_ready.go:81] duration metric: took 401.005104ms waiting for pod "kube-scheduler-pause-763583" in "kube-system" namespace to be "Ready" ...
I0307 18:43:23.704295 35395 pod_ready.go:38] duration metric: took 2.441458878s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0307 18:43:23.704311 35395 api_server.go:51] waiting for apiserver process to appear ...
I0307 18:43:23.704349 35395 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:43:23.716827 35395 api_server.go:71] duration metric: took 2.702510753s to wait for apiserver process to appear ...
I0307 18:43:23.716856 35395 api_server.go:87] waiting for apiserver healthz status ...
I0307 18:43:23.716868 35395 api_server.go:252] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0307 18:43:23.723857 35395 api_server.go:278] https://192.168.61.47:8443/healthz returned 200:
ok
I0307 18:43:23.724750 35395 api_server.go:140] control plane version: v1.26.2
I0307 18:43:23.724768 35395 api_server.go:130] duration metric: took 7.905622ms to wait for apiserver health ...
I0307 18:43:23.724778 35395 system_pods.go:43] waiting for kube-system pods to appear ...
I0307 18:43:23.906704 35395 system_pods.go:59] 6 kube-system pods found
I0307 18:43:23.906737 35395 system_pods.go:61] "coredns-787d4945fb-n77tj" [e63f9141-89ed-4e4d-b1aa-86ad76074f81] Running
I0307 18:43:23.906745 35395 system_pods.go:61] "etcd-pause-763583" [1443cb3f-e768-40cb-8959-b07a77a9b089] Running
I0307 18:43:23.906752 35395 system_pods.go:61] "kube-apiserver-pause-763583" [21663669-cac7-48c2-9107-e69979cee194] Running
I0307 18:43:23.906759 35395 system_pods.go:61] "kube-controller-manager-pause-763583" [e00cf98f-3435-4f3c-b91c-c00a0b794b06] Running
I0307 18:43:23.906766 35395 system_pods.go:61] "kube-proxy-89rb5" [1976b181-14ab-48a2-bb64-2eb3b1ecf436] Running
I0307 18:43:23.906773 35395 system_pods.go:61] "kube-scheduler-pause-763583" [b495d084-0581-4e14-917f-e44a0bf077df] Running
I0307 18:43:23.906785 35395 system_pods.go:74] duration metric: took 182.000313ms to wait for pod list to return data ...
I0307 18:43:23.906799 35395 default_sa.go:34] waiting for default service account to be created ...
I0307 18:43:24.103036 35395 default_sa.go:45] found service account: "default"
I0307 18:43:24.103058 35395 default_sa.go:55] duration metric: took 196.253509ms for default service account to be created ...
I0307 18:43:24.103066 35395 system_pods.go:116] waiting for k8s-apps to be running ...
I0307 18:43:24.305992 35395 system_pods.go:86] 6 kube-system pods found
I0307 18:43:24.306020 35395 system_pods.go:89] "coredns-787d4945fb-n77tj" [e63f9141-89ed-4e4d-b1aa-86ad76074f81] Running
I0307 18:43:24.306025 35395 system_pods.go:89] "etcd-pause-763583" [1443cb3f-e768-40cb-8959-b07a77a9b089] Running
I0307 18:43:24.306029 35395 system_pods.go:89] "kube-apiserver-pause-763583" [21663669-cac7-48c2-9107-e69979cee194] Running
I0307 18:43:24.306033 35395 system_pods.go:89] "kube-controller-manager-pause-763583" [e00cf98f-3435-4f3c-b91c-c00a0b794b06] Running
I0307 18:43:24.306038 35395 system_pods.go:89] "kube-proxy-89rb5" [1976b181-14ab-48a2-bb64-2eb3b1ecf436] Running
I0307 18:43:24.306042 35395 system_pods.go:89] "kube-scheduler-pause-763583" [b495d084-0581-4e14-917f-e44a0bf077df] Running
I0307 18:43:24.306050 35395 system_pods.go:126] duration metric: took 202.978873ms to wait for k8s-apps to be running ...
I0307 18:43:24.306059 35395 system_svc.go:44] waiting for kubelet service to be running ....
I0307 18:43:24.306109 35395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0307 18:43:24.321158 35395 system_svc.go:56] duration metric: took 15.087082ms WaitForService to wait for kubelet.
I0307 18:43:24.321187 35395 kubeadm.go:578] duration metric: took 3.306875345s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0307 18:43:24.321209 35395 node_conditions.go:102] verifying NodePressure condition ...
I0307 18:43:24.505220 35395 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0307 18:43:24.505243 35395 node_conditions.go:123] node cpu capacity is 2
I0307 18:43:24.505252 35395 node_conditions.go:105] duration metric: took 184.038448ms to run NodePressure ...
I0307 18:43:24.505262 35395 start.go:228] waiting for startup goroutines ...
I0307 18:43:24.505268 35395 start.go:233] waiting for cluster config update ...
I0307 18:43:24.505274 35395 start.go:242] writing updated cluster config ...
I0307 18:43:24.505561 35395 ssh_runner.go:195] Run: rm -f paused
I0307 18:43:24.559116 35395 start.go:555] kubectl: 1.26.2, cluster: 1.26.2 (minor skew: 0)
I0307 18:43:24.561247 35395 out.go:177] * Done! kubectl is now configured to use "pause-763583" cluster and "default" namespace by default
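The api_server.go sequence above is minikube's standard readiness check: pgrep for the kube-apiserver process, then poll https://<node-ip>:8443/healthz until it returns 200 with body "ok". Below is a minimal standalone sketch of that polling loop in Go — an illustration, not minikube's actual code; the timeout, poll interval, and the decision to skip TLS verification (the test cluster uses its own CA) are assumptions:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 with body "ok",
    // mirroring the api_server.go:252 check logged above.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Assumption: skip verification instead of loading the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := client.Get(url); err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("healthz at %s not ok within %v", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.61.47:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }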
I0307 18:43:20.761881 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:20.762402 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | unable to find current IP address of domain NoKubernetes-015933 in network mk-NoKubernetes-015933
I0307 18:43:20.762459 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | I0307 18:43:20.762380 35858 retry.go:31] will retry after 3.824213466s: waiting for machine to come up
I0307 18:43:24.588820 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:24.589321 35824 main.go:141] libmachine: (NoKubernetes-015933) Found IP for machine: 192.168.50.31
I0307 18:43:24.589333 35824 main.go:141] libmachine: (NoKubernetes-015933) Reserving static IP address...
I0307 18:43:24.589370 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has current primary IP address 192.168.50.31 and MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:24.589911 35824 main.go:141] libmachine: (NoKubernetes-015933) Reserved static IP address: 192.168.50.31
I0307 18:43:24.589931 35824 main.go:141] libmachine: (NoKubernetes-015933) Waiting for SSH to be available...
I0307 18:43:24.589968 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | found host DHCP lease matching {name: "NoKubernetes-015933", mac: "52:54:00:b1:c0:4b", ip: "192.168.50.31"} in network mk-NoKubernetes-015933: {Iface:virbr2 ExpiryTime:2023-03-07 19:41:53 +0000 UTC Type:0 Mac:52:54:00:b1:c0:4b Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:NoKubernetes-015933 Clientid:01:52:54:00:b1:c0:4b}
I0307 18:43:24.589986 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | skip adding static IP to network mk-NoKubernetes-015933 - found existing host DHCP lease matching {name: "NoKubernetes-015933", mac: "52:54:00:b1:c0:4b", ip: "192.168.50.31"}
I0307 18:43:24.589995 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | Getting to WaitForSSH function...
I0307 18:43:24.592933 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:24.593299 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:c0:4b", ip: ""} in network mk-NoKubernetes-015933: {Iface:virbr2 ExpiryTime:2023-03-07 19:41:53 +0000 UTC Type:0 Mac:52:54:00:b1:c0:4b Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:NoKubernetes-015933 Clientid:01:52:54:00:b1:c0:4b}
I0307 18:43:24.593321 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined IP address 192.168.50.31 and MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:24.593438 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | Using SSH client type: external
I0307 18:43:24.593451 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | Using SSH private key: /home/jenkins/minikube-integration/15985-4059/.minikube/machines/NoKubernetes-015933/id_rsa (-rw-------)
I0307 18:43:24.593480 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15985-4059/.minikube/machines/NoKubernetes-015933/id_rsa -p 22] /usr/bin/ssh <nil>}
I0307 18:43:24.593488 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | About to run SSH command:
I0307 18:43:24.593499 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | exit 0
I0307 18:43:24.696566 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | SSH cmd err, output: <nil>:
I0307 18:43:24.696913 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .GetConfigRaw
I0307 18:43:24.697583 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .GetIP
I0307 18:43:24.700800 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:24.701238 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:c0:4b", ip: ""} in network mk-NoKubernetes-015933: {Iface:virbr2 ExpiryTime:2023-03-07 19:41:53 +0000 UTC Type:0 Mac:52:54:00:b1:c0:4b Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:NoKubernetes-015933 Clientid:01:52:54:00:b1:c0:4b}
I0307 18:43:24.701275 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined IP address 192.168.50.31 and MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:24.701589 35824 profile.go:148] Saving config to /home/jenkins/minikube-integration/15985-4059/.minikube/profiles/NoKubernetes-015933/config.json ...
I0307 18:43:24.701820 35824 machine.go:88] provisioning docker machine ...
I0307 18:43:24.701842 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .DriverName
I0307 18:43:24.702031 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .GetMachineName
I0307 18:43:24.702239 35824 buildroot.go:166] provisioning hostname "NoKubernetes-015933"
I0307 18:43:24.702277 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .GetMachineName
I0307 18:43:24.702461 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .GetSSHHostname
I0307 18:43:24.705300 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:24.705688 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:c0:4b", ip: ""} in network mk-NoKubernetes-015933: {Iface:virbr2 ExpiryTime:2023-03-07 19:41:53 +0000 UTC Type:0 Mac:52:54:00:b1:c0:4b Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:NoKubernetes-015933 Clientid:01:52:54:00:b1:c0:4b}
I0307 18:43:24.705706 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined IP address 192.168.50.31 and MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:24.705893 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .GetSSHPort
I0307 18:43:24.706048 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .GetSSHKeyPath
I0307 18:43:24.706225 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .GetSSHKeyPath
I0307 18:43:24.706383 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .GetSSHUsername
I0307 18:43:24.706573 35824 main.go:141] libmachine: Using SSH client type: native
I0307 18:43:24.707216 35824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil> [] 0s} 192.168.50.31 22 <nil> <nil>}
I0307 18:43:24.707229 35824 main.go:141] libmachine: About to run SSH command:
sudo hostname NoKubernetes-015933 && echo "NoKubernetes-015933" | sudo tee /etc/hostname
I0307 18:43:24.852960 35824 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-015933
I0307 18:43:24.852991 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .GetSSHHostname
I0307 18:43:24.857853 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:24.858332 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:c0:4b", ip: ""} in network mk-NoKubernetes-015933: {Iface:virbr2 ExpiryTime:2023-03-07 19:41:53 +0000 UTC Type:0 Mac:52:54:00:b1:c0:4b Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:NoKubernetes-015933 Clientid:01:52:54:00:b1:c0:4b}
I0307 18:43:24.858358 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined IP address 192.168.50.31 and MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:24.858529 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .GetSSHPort
I0307 18:43:24.858746 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .GetSSHKeyPath
I0307 18:43:24.858943 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .GetSSHKeyPath
I0307 18:43:24.859125 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .GetSSHUsername
I0307 18:43:24.859327 35824 main.go:141] libmachine: Using SSH client type: native
I0307 18:43:24.859977 35824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil> [] 0s} 192.168.50.31 22 <nil> <nil>}
I0307 18:43:24.859997 35824 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sNoKubernetes-015933' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-015933/g' /etc/hosts;
  else
    echo '127.0.1.1 NoKubernetes-015933' | sudo tee -a /etc/hosts;
  fi
fi
I0307 18:43:25.000454 35824 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0307 18:43:25.000467 35824 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15985-4059/.minikube CaCertPath:/home/jenkins/minikube-integration/15985-4059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15985-4059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15985-4059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15985-4059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15985-4059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15985-4059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15985-4059/.minikube}
I0307 18:43:25.000491 35824 buildroot.go:174] setting up certificates
I0307 18:43:25.000498 35824 provision.go:83] configureAuth start
I0307 18:43:25.000504 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .GetMachineName
I0307 18:43:25.000880 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .GetIP
I0307 18:43:25.004172 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:25.004576 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:c0:4b", ip: ""} in network mk-NoKubernetes-015933: {Iface:virbr2 ExpiryTime:2023-03-07 19:41:53 +0000 UTC Type:0 Mac:52:54:00:b1:c0:4b Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:NoKubernetes-015933 Clientid:01:52:54:00:b1:c0:4b}
I0307 18:43:25.004607 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined IP address 192.168.50.31 and MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:25.004754 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .GetSSHHostname
I0307 18:43:25.007842 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:25.008095 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:c0:4b", ip: ""} in network mk-NoKubernetes-015933: {Iface:virbr2 ExpiryTime:2023-03-07 19:41:53 +0000 UTC Type:0 Mac:52:54:00:b1:c0:4b Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:NoKubernetes-015933 Clientid:01:52:54:00:b1:c0:4b}
I0307 18:43:25.008105 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined IP address 192.168.50.31 and MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:25.008243 35824 provision.go:138] copyHostCerts
I0307 18:43:25.008287 35824 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-4059/.minikube/ca.pem, removing ...
I0307 18:43:25.008293 35824 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-4059/.minikube/ca.pem
I0307 18:43:25.008348 35824 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-4059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15985-4059/.minikube/ca.pem (1078 bytes)
I0307 18:43:25.008444 35824 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-4059/.minikube/cert.pem, removing ...
I0307 18:43:25.008447 35824 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-4059/.minikube/cert.pem
I0307 18:43:25.008468 35824 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-4059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15985-4059/.minikube/cert.pem (1123 bytes)
I0307 18:43:25.008509 35824 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-4059/.minikube/key.pem, removing ...
I0307 18:43:25.008511 35824 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-4059/.minikube/key.pem
I0307 18:43:25.008525 35824 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-4059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15985-4059/.minikube/key.pem (1675 bytes)
I0307 18:43:25.008570 35824 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15985-4059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15985-4059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15985-4059/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-015933 san=[192.168.50.31 192.168.50.31 localhost 127.0.0.1 minikube NoKubernetes-015933]
I0307 18:43:25.345022 35824 provision.go:172] copyRemoteCerts
I0307 18:43:25.345072 35824 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0307 18:43:25.345095 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .GetSSHHostname
I0307 18:43:25.347808 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:25.348245 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:c0:4b", ip: ""} in network mk-NoKubernetes-015933: {Iface:virbr2 ExpiryTime:2023-03-07 19:41:53 +0000 UTC Type:0 Mac:52:54:00:b1:c0:4b Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:NoKubernetes-015933 Clientid:01:52:54:00:b1:c0:4b}
I0307 18:43:25.348266 35824 main.go:141] libmachine: (NoKubernetes-015933) DBG | domain NoKubernetes-015933 has defined IP address 192.168.50.31 and MAC address 52:54:00:b1:c0:4b in network mk-NoKubernetes-015933
I0307 18:43:25.348530 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .GetSSHPort
I0307 18:43:25.348790 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .GetSSHKeyPath
I0307 18:43:25.348988 35824 main.go:141] libmachine: (NoKubernetes-015933) Calling .GetSSHUsername
I0307 18:43:25.349156 35824 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15985-4059/.minikube/machines/NoKubernetes-015933/id_rsa Username:docker}
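The WaitForSSH step above shells out to /usr/bin/ssh and runs `exit 0` against the machine until the command succeeds; the exact flags are printed in the DBG line at 18:43:24.593480. A short sketch of that probe follows — an illustration, not libmachine's code; the retry budget and sleep are assumptions, while the flags, key path, and IP are the ones from this log:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReady returns true once `ssh ... exit 0` exits with status zero.
    func sshReady(ip, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@"+ip,
            "exit 0")
        return cmd.Run() == nil
    }

    func main() {
        key := "/home/jenkins/minikube-integration/15985-4059/.minikube/machines/NoKubernetes-015933/id_rsa"
        for i := 0; i < 30; i++ { // retry count is an assumption
            if sshReady("192.168.50.31", key) {
                fmt.Println("SSH available")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for SSH")
    }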
*
* ==> Docker <==
* -- Journal begins at Tue 2023-03-07 18:40:45 UTC, ends at Tue 2023-03-07 18:43:27 UTC. --
Mar 07 18:42:59 pause-763583 dockerd[4819]: time="2023-03-07T18:42:59.192757297Z" level=warning msg="cleanup warnings time=\"2023-03-07T18:42:59Z\" level=info msg=\"starting signal loop\" namespace=moby pid=7172 runtime=io.containerd.runc.v2\n"
Mar 07 18:43:01 pause-763583 dockerd[4819]: time="2023-03-07T18:43:01.718500300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 07 18:43:01 pause-763583 dockerd[4819]: time="2023-03-07T18:43:01.718580136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 07 18:43:01 pause-763583 dockerd[4819]: time="2023-03-07T18:43:01.718594747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 07 18:43:01 pause-763583 dockerd[4819]: time="2023-03-07T18:43:01.718982694Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/15dec3a5065b9255570a391fe7f4609698cee47f222cdd9bff9cecd408da96c8 pid=7424 runtime=io.containerd.runc.v2
Mar 07 18:43:01 pause-763583 dockerd[4819]: time="2023-03-07T18:43:01.724329662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 07 18:43:01 pause-763583 dockerd[4819]: time="2023-03-07T18:43:01.724400358Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 07 18:43:01 pause-763583 dockerd[4819]: time="2023-03-07T18:43:01.724413596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 07 18:43:01 pause-763583 dockerd[4819]: time="2023-03-07T18:43:01.724910735Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/eef128330f346ca57cefd50e56f7cbce02bd3f4611a308589687632ba40a8600 pid=7433 runtime=io.containerd.runc.v2
Mar 07 18:43:01 pause-763583 dockerd[4819]: time="2023-03-07T18:43:01.725542032Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 07 18:43:01 pause-763583 dockerd[4819]: time="2023-03-07T18:43:01.725630222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 07 18:43:01 pause-763583 dockerd[4819]: time="2023-03-07T18:43:01.725645855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 07 18:43:01 pause-763583 dockerd[4819]: time="2023-03-07T18:43:01.726324109Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/c27e397326f6378658183d75811890287d946bda3ad9346b87533febea041cd0 pid=7431 runtime=io.containerd.runc.v2
Mar 07 18:43:10 pause-763583 dockerd[4819]: time="2023-03-07T18:43:10.245231597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 07 18:43:10 pause-763583 dockerd[4819]: time="2023-03-07T18:43:10.245387123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 07 18:43:10 pause-763583 dockerd[4819]: time="2023-03-07T18:43:10.245405130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 07 18:43:10 pause-763583 dockerd[4819]: time="2023-03-07T18:43:10.246139895Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8ef158124e5c02451279de07c2a084c4f41fae664112881c1c1c8a56f19a9872 pid=7658 runtime=io.containerd.runc.v2
Mar 07 18:43:10 pause-763583 dockerd[4819]: time="2023-03-07T18:43:10.590977589Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 07 18:43:10 pause-763583 dockerd[4819]: time="2023-03-07T18:43:10.591026808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 07 18:43:10 pause-763583 dockerd[4819]: time="2023-03-07T18:43:10.591036828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 07 18:43:10 pause-763583 dockerd[4819]: time="2023-03-07T18:43:10.591478968Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ef4bf961fbfdd619f6bcecbaa87c34e145149737820f1c46458cc1bb3422732e pid=7705 runtime=io.containerd.runc.v2
Mar 07 18:43:11 pause-763583 dockerd[4819]: time="2023-03-07T18:43:11.471425959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 07 18:43:11 pause-763583 dockerd[4819]: time="2023-03-07T18:43:11.471497895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 07 18:43:11 pause-763583 dockerd[4819]: time="2023-03-07T18:43:11.471512951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 07 18:43:11 pause-763583 dockerd[4819]: time="2023-03-07T18:43:11.472425862Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f1b9cfc7922e6981f0557d4d54467dc5c8ce1c88fbb2f9d4046cadc922f9e726 pid=7938 runtime=io.containerd.runc.v2
*
* ==> container status <==
* CONTAINER     IMAGE           CREATED              STATE     NAME                      ATTEMPT   POD ID
f1b9cfc7922e6   5185b96f0becf   16 seconds ago       Running   coredns                   2         8ef158124e5c0
ef4bf961fbfdd   6f64e7135a6ec   17 seconds ago       Running   kube-proxy                3         fe3ddcbb103c2
c27e397326f63   240e201d5b0d8   26 seconds ago       Running   kube-controller-manager   3         50f7a2d79d848
eef128330f346   fce326961ae2d   26 seconds ago       Running   etcd                      3         345089ed33275
15dec3a5065b9   db8f409d9a5d7   26 seconds ago       Running   kube-scheduler            3         309302fffd507
5165906d51912   63d3239c3c159   31 seconds ago       Running   kube-apiserver            2         59dd8031423e4
94878c02897cd   240e201d5b0d8   42 seconds ago       Exited    kube-controller-manager   2         2bd1468d967e5
6e5a6ab1db374   fce326961ae2d   45 seconds ago       Exited    etcd                      2         764a0fa4725b8
ada79eb25afea   6f64e7135a6ec   45 seconds ago       Exited    kube-proxy                2         4b088e44e1281
807b657d81c5a   db8f409d9a5d7   56 seconds ago       Exited    kube-scheduler            2         373afa3584ae4
c6e309b2a1410   5185b96f0becf   About a minute ago   Exited    coredns                   1         1aa5eca48ed3d
323901da5efdc   63d3239c3c159   About a minute ago   Exited    kube-apiserver            1         0a00ef3151aa2
*
* ==> coredns [c6e309b2a141] <==
* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] 127.0.0.1:38272 - 10409 "HINFO IN 953780146248982216.8378533669120952003. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.020585193s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
*
* ==> coredns [f1b9cfc7922e] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] 127.0.0.1:45070 - 47542 "HINFO IN 5933985263124533339.8025083633949818112. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032920062s
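The contrast between the two coredns containers is the readiness gate at work: the old instance logged "plugin/ready: Still waiting on: \"kubernetes\"" until it was terminated, while the new one came up clean once the apiserver answered. CoreDNS's ready plugin serves that state over HTTP, by default on port 8181 at /ready, returning 200 only when every plugin reports ready. A sketch of probing it — the pod IP is hypothetical and the default port is an assumption for this cluster:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Hypothetical pod IP; substitute the coredns pod's address,
        // e.g. from `kubectl -n kube-system get pod -o wide`.
        podIP := "10.244.0.2"
        client := &http.Client{Timeout: 2 * time.Second}
        resp, err := client.Get("http://" + podIP + ":8181/ready")
        if err != nil {
            fmt.Println("not ready:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("ready status:", resp.StatusCode) // 200 once all plugins are up
    }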
*
* ==> describe nodes <==
* Name: pause-763583
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-763583
kubernetes.io/os=linux
minikube.k8s.io/commit=592b1e9939a898d806f69aad174a19c45f317df1
minikube.k8s.io/name=pause-763583
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_03_07T18_41_29_0700
minikube.k8s.io/version=v1.29.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 07 Mar 2023 18:41:25 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-763583
AcquireTime: <unset>
RenewTime: Tue, 07 Mar 2023 18:43:18 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 07 Mar 2023 18:43:08 +0000 Tue, 07 Mar 2023 18:41:21 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 07 Mar 2023 18:43:08 +0000 Tue, 07 Mar 2023 18:41:21 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 07 Mar 2023 18:43:08 +0000 Tue, 07 Mar 2023 18:41:21 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 07 Mar 2023 18:43:08 +0000 Tue, 07 Mar 2023 18:41:30 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.61.47
Hostname: pause-763583
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017420Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017420Ki
pods: 110
System Info:
Machine ID: 96f2b5e4734a42f69e84fd4020108855
System UUID: 96f2b5e4-734a-42f6-9e84-fd4020108855
Boot ID: d94b0328-5b2d-4150-b356-9094c7a09c6e
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.23
Kubelet Version: v1.26.2
Kube-Proxy Version: v1.26.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (6 in total)
Namespace    Name                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
---------    ----                                  ------------  ----------  ---------------  -------------  ---
kube-system  coredns-787d4945fb-n77tj              100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     105s
kube-system  etcd-pause-763583                     100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         118s
kube-system  kube-apiserver-pause-763583           250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
kube-system  kube-controller-manager-pause-763583  200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
kube-system  kube-proxy-89rb5                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
kube-system  kube-scheduler-pause-763583           100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests    Limits
--------           --------    ------
cpu                750m (37%)  0 (0%)
memory             170Mi (8%)  170Mi (8%)
ephemeral-storage  0 (0%)      0 (0%)
hugepages-2Mi      0 (0%)      0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 103s kube-proxy
Normal Starting 16s kube-proxy
Normal Starting 69s kube-proxy
Normal NodeHasSufficientPID 2m12s (x5 over 2m12s) kubelet Node pause-763583 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 2m12s (x5 over 2m12s) kubelet Node pause-763583 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 2m12s (x6 over 2m12s) kubelet Node pause-763583 status is now: NodeHasSufficientMemory
Normal Starting 118s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 118s kubelet Node pause-763583 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 118s kubelet Node pause-763583 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 118s kubelet Node pause-763583 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 118s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 117s kubelet Node pause-763583 status is now: NodeReady
Normal RegisteredNode 106s node-controller Node pause-763583 event: Registered Node pause-763583 in Controller
Normal Starting 27s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 26s (x8 over 26s) kubelet Node pause-763583 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 26s (x8 over 26s) kubelet Node pause-763583 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 26s (x7 over 26s) kubelet Node pause-763583 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 26s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 7s node-controller Node pause-763583 event: Registered Node pause-763583 in Controller
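As a cross-check, the Allocated resources figures above follow directly from the pod table and the node capacity: CPU requests are 100m + 100m + 250m + 200m + 100m = 750m, and 750m / 2000m (2 CPUs) = 37.5%, printed truncated as 37%; memory requests and limits both total 170Mi (70Mi + 100Mi requested, plus coredns's 170Mi limit), and 170Mi / 2017420Ki ≈ 8.6%, printed as 8%.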
*
* ==> dmesg <==
* [ +0.357721] systemd-fstab-generator[894]: Ignoring "noauto" for root device
[ +0.250125] systemd-fstab-generator[931]: Ignoring "noauto" for root device
[ +0.127377] systemd-fstab-generator[942]: Ignoring "noauto" for root device
[ +0.130005] systemd-fstab-generator[955]: Ignoring "noauto" for root device
[ +1.487189] systemd-fstab-generator[1103]: Ignoring "noauto" for root device
[ +0.116407] systemd-fstab-generator[1114]: Ignoring "noauto" for root device
[ +0.106234] systemd-fstab-generator[1125]: Ignoring "noauto" for root device
[ +0.119898] systemd-fstab-generator[1136]: Ignoring "noauto" for root device
[ +4.465694] systemd-fstab-generator[1385]: Ignoring "noauto" for root device
[ +0.661318] kauditd_printk_skb: 68 callbacks suppressed
[ +13.201981] systemd-fstab-generator[2399]: Ignoring "noauto" for root device
[ +15.296940] kauditd_printk_skb: 8 callbacks suppressed
[ +6.493699] kauditd_printk_skb: 26 callbacks suppressed
[Mar 7 18:42] systemd-fstab-generator[3884]: Ignoring "noauto" for root device
[ +0.261566] systemd-fstab-generator[3915]: Ignoring "noauto" for root device
[ +0.137940] systemd-fstab-generator[3926]: Ignoring "noauto" for root device
[ +0.162289] systemd-fstab-generator[3939]: Ignoring "noauto" for root device
[ +1.297927] kauditd_printk_skb: 2 callbacks suppressed
[ +11.568969] systemd-fstab-generator[5238]: Ignoring "noauto" for root device
[ +0.133230] systemd-fstab-generator[5254]: Ignoring "noauto" for root device
[ +0.170384] systemd-fstab-generator[5309]: Ignoring "noauto" for root device
[ +0.203344] systemd-fstab-generator[5363]: Ignoring "noauto" for root device
[ +1.382359] kauditd_printk_skb: 32 callbacks suppressed
[ +5.276600] kauditd_printk_skb: 3 callbacks suppressed
[Mar 7 18:43] systemd-fstab-generator[7254]: Ignoring "noauto" for root device
*
* ==> etcd [6e5a6ab1db37] <==
* {"level":"info","ts":"2023-03-07T18:42:42.952Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-03-07T18:42:42.952Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.61.47:2380"}
{"level":"info","ts":"2023-03-07T18:42:42.953Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.61.47:2380"}
{"level":"info","ts":"2023-03-07T18:42:42.953Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"bbb11fcb00d21a09","initial-advertise-peer-urls":["https://192.168.61.47:2380"],"listen-peer-urls":["https://192.168.61.47:2380"],"advertise-client-urls":["https://192.168.61.47:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.47:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-03-07T18:42:42.953Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-03-07T18:42:44.735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbb11fcb00d21a09 is starting a new election at term 3"}
{"level":"info","ts":"2023-03-07T18:42:44.735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbb11fcb00d21a09 became pre-candidate at term 3"}
{"level":"info","ts":"2023-03-07T18:42:44.735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbb11fcb00d21a09 received MsgPreVoteResp from bbb11fcb00d21a09 at term 3"}
{"level":"info","ts":"2023-03-07T18:42:44.735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbb11fcb00d21a09 became candidate at term 4"}
{"level":"info","ts":"2023-03-07T18:42:44.735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbb11fcb00d21a09 received MsgVoteResp from bbb11fcb00d21a09 at term 4"}
{"level":"info","ts":"2023-03-07T18:42:44.735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbb11fcb00d21a09 became leader at term 4"}
{"level":"info","ts":"2023-03-07T18:42:44.735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: bbb11fcb00d21a09 elected leader bbb11fcb00d21a09 at term 4"}
{"level":"info","ts":"2023-03-07T18:42:44.741Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"bbb11fcb00d21a09","local-member-attributes":"{Name:pause-763583 ClientURLs:[https://192.168.61.47:2379]}","request-path":"/0/members/bbb11fcb00d21a09/attributes","cluster-id":"d13a567fb8903787","publish-timeout":"7s"}
{"level":"info","ts":"2023-03-07T18:42:44.741Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-07T18:42:44.741Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-03-07T18:42:44.741Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-03-07T18:42:44.741Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-07T18:42:44.742Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-03-07T18:42:44.742Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.61.47:2379"}
{"level":"info","ts":"2023-03-07T18:42:54.173Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2023-03-07T18:42:54.173Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-763583","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.47:2380"],"advertise-client-urls":["https://192.168.61.47:2379"]}
{"level":"info","ts":"2023-03-07T18:42:54.179Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"bbb11fcb00d21a09","current-leader-member-id":"bbb11fcb00d21a09"}
{"level":"info","ts":"2023-03-07T18:42:54.183Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.61.47:2380"}
{"level":"info","ts":"2023-03-07T18:42:54.185Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.61.47:2380"}
{"level":"info","ts":"2023-03-07T18:42:54.185Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-763583","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.47:2380"],"advertise-client-urls":["https://192.168.61.47:2379"]}
*
* ==> etcd [eef128330f34] <==
* {"level":"info","ts":"2023-03-07T18:43:02.868Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2023-03-07T18:43:02.868Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2023-03-07T18:43:02.869Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbb11fcb00d21a09 switched to configuration voters=(13524626112722901513)"}
{"level":"info","ts":"2023-03-07T18:43:02.869Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d13a567fb8903787","local-member-id":"bbb11fcb00d21a09","added-peer-id":"bbb11fcb00d21a09","added-peer-peer-urls":["https://192.168.61.47:2380"]}
{"level":"info","ts":"2023-03-07T18:43:02.870Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d13a567fb8903787","local-member-id":"bbb11fcb00d21a09","cluster-version":"3.5"}
{"level":"info","ts":"2023-03-07T18:43:02.870Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-03-07T18:43:02.877Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-03-07T18:43:02.878Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"bbb11fcb00d21a09","initial-advertise-peer-urls":["https://192.168.61.47:2380"],"listen-peer-urls":["https://192.168.61.47:2380"],"advertise-client-urls":["https://192.168.61.47:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.47:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-03-07T18:43:02.878Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-03-07T18:43:02.878Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.61.47:2380"}
{"level":"info","ts":"2023-03-07T18:43:02.878Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.61.47:2380"}
{"level":"info","ts":"2023-03-07T18:43:03.811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbb11fcb00d21a09 is starting a new election at term 4"}
{"level":"info","ts":"2023-03-07T18:43:03.811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbb11fcb00d21a09 became pre-candidate at term 4"}
{"level":"info","ts":"2023-03-07T18:43:03.811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbb11fcb00d21a09 received MsgPreVoteResp from bbb11fcb00d21a09 at term 4"}
{"level":"info","ts":"2023-03-07T18:43:03.811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbb11fcb00d21a09 became candidate at term 5"}
{"level":"info","ts":"2023-03-07T18:43:03.811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbb11fcb00d21a09 received MsgVoteResp from bbb11fcb00d21a09 at term 5"}
{"level":"info","ts":"2023-03-07T18:43:03.811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbb11fcb00d21a09 became leader at term 5"}
{"level":"info","ts":"2023-03-07T18:43:03.811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: bbb11fcb00d21a09 elected leader bbb11fcb00d21a09 at term 5"}
{"level":"info","ts":"2023-03-07T18:43:03.820Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"bbb11fcb00d21a09","local-member-attributes":"{Name:pause-763583 ClientURLs:[https://192.168.61.47:2379]}","request-path":"/0/members/bbb11fcb00d21a09/attributes","cluster-id":"d13a567fb8903787","publish-timeout":"7s"}
{"level":"info","ts":"2023-03-07T18:43:03.820Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-07T18:43:03.822Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-03-07T18:43:03.823Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-07T18:43:03.824Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.61.47:2379"}
{"level":"info","ts":"2023-03-07T18:43:03.834Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-03-07T18:43:03.834Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
*
* ==> kernel <==
* 18:43:27 up 2 min, 0 users, load average: 1.29, 0.64, 0.25
Linux pause-763583 5.10.57 #1 SMP Fri Feb 24 23:00:41 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [323901da5efd] <==
* W0307 18:42:36.521954 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0307 18:42:40.161722 1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0307 18:42:42.409323 1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
E0307 18:42:46.204942 1 run.go:74] "command failed" err="context deadline exceeded"
*
* ==> kube-apiserver [5165906d5191] <==
* I0307 18:43:08.310421 1 establishing_controller.go:76] Starting EstablishingController
I0307 18:43:08.310769 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0307 18:43:08.311116 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0307 18:43:08.311363 1 crd_finalizer.go:266] Starting CRDFinalizer
I0307 18:43:08.388060 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0307 18:43:08.388105 1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
I0307 18:43:08.538926 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0307 18:43:08.545559 1 shared_informer.go:280] Caches are synced for node_authorizer
I0307 18:43:08.588540 1 shared_informer.go:280] Caches are synced for crd-autoregister
I0307 18:43:08.588756 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0307 18:43:08.589259 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0307 18:43:08.590662 1 apf_controller.go:366] Running API Priority and Fairness config worker
I0307 18:43:08.590761 1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
I0307 18:43:08.592785 1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
I0307 18:43:08.595945 1 shared_informer.go:280] Caches are synced for configmaps
I0307 18:43:08.600179 1 cache.go:39] Caches are synced for autoregister controller
I0307 18:43:08.916211 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0307 18:43:09.302398 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0307 18:43:10.111176 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0307 18:43:10.153214 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0307 18:43:10.237420 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0307 18:43:10.310949 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0307 18:43:10.338435 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0307 18:43:20.873484 1 controller.go:615] quota admission added evaluator for: endpoints
I0307 18:43:20.904943 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
*
* ==> kube-controller-manager [94878c02897c] <==
* I0307 18:42:46.458769 1 serving.go:348] Generated self-signed cert in-memory
I0307 18:42:46.783654 1 controllermanager.go:182] Version: v1.26.2
I0307 18:42:46.783860 1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0307 18:42:46.785151 1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I0307 18:42:46.785243 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0307 18:42:46.785176 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0307 18:42:46.785484 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
*
* ==> kube-controller-manager [c27e397326f6] <==
* I0307 18:43:20.873857 1 shared_informer.go:280] Caches are synced for cronjob
I0307 18:43:20.878314 1 shared_informer.go:280] Caches are synced for bootstrap_signer
I0307 18:43:20.882146 1 shared_informer.go:280] Caches are synced for expand
I0307 18:43:20.885718 1 shared_informer.go:280] Caches are synced for ephemeral
I0307 18:43:20.886037 1 shared_informer.go:280] Caches are synced for attach detach
I0307 18:43:20.888299 1 shared_informer.go:280] Caches are synced for HPA
I0307 18:43:20.888582 1 shared_informer.go:280] Caches are synced for PV protection
I0307 18:43:20.892743 1 shared_informer.go:280] Caches are synced for TTL
I0307 18:43:20.897939 1 shared_informer.go:280] Caches are synced for taint
I0307 18:43:20.898413 1 node_lifecycle_controller.go:1438] Initializing eviction metric for zone:
W0307 18:43:20.898804 1 node_lifecycle_controller.go:1053] Missing timestamp for Node pause-763583. Assuming now as a timestamp.
I0307 18:43:20.899043 1 node_lifecycle_controller.go:1254] Controller detected that zone is now in state Normal.
I0307 18:43:20.899481 1 taint_manager.go:206] "Starting NoExecuteTaintManager"
I0307 18:43:20.899747 1 taint_manager.go:211] "Sending events to api server"
I0307 18:43:20.900134 1 event.go:294] "Event occurred" object="pause-763583" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-763583 event: Registered Node pause-763583 in Controller"
I0307 18:43:20.903032 1 shared_informer.go:280] Caches are synced for persistent volume
I0307 18:43:20.915987 1 shared_informer.go:280] Caches are synced for endpoint_slice_mirroring
I0307 18:43:20.959796 1 shared_informer.go:280] Caches are synced for disruption
I0307 18:43:20.974716 1 shared_informer.go:280] Caches are synced for deployment
I0307 18:43:20.975623 1 shared_informer.go:280] Caches are synced for ReplicaSet
I0307 18:43:21.027112 1 shared_informer.go:280] Caches are synced for resource quota
I0307 18:43:21.097864 1 shared_informer.go:280] Caches are synced for resource quota
I0307 18:43:21.435270 1 shared_informer.go:280] Caches are synced for garbage collector
I0307 18:43:21.443253 1 shared_informer.go:280] Caches are synced for garbage collector
I0307 18:43:21.443274 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-proxy [ada79eb25afe] <==
* E0307 18:42:47.219325 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-763583": dial tcp 192.168.61.47:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.61.47:35132->192.168.61.47:8443: read: connection reset by peer
E0307 18:42:48.393244 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-763583": dial tcp 192.168.61.47:8443: connect: connection refused
E0307 18:42:50.438279 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-763583": dial tcp 192.168.61.47:8443: connect: connection refused
*
* ==> kube-proxy [ef4bf961fbfd] <==
* I0307 18:43:10.797523 1 node.go:163] Successfully retrieved node IP: 192.168.61.47
I0307 18:43:10.797896 1 server_others.go:109] "Detected node IP" address="192.168.61.47"
I0307 18:43:10.798039 1 server_others.go:535] "Using iptables proxy"
I0307 18:43:10.849939 1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0307 18:43:10.849958 1 server_others.go:176] "Using iptables Proxier"
I0307 18:43:10.850013 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0307 18:43:10.850263 1 server.go:655] "Version info" version="v1.26.2"
I0307 18:43:10.850271 1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0307 18:43:10.852006 1 config.go:317] "Starting service config controller"
I0307 18:43:10.852017 1 shared_informer.go:273] Waiting for caches to sync for service config
I0307 18:43:10.852036 1 config.go:226] "Starting endpoint slice config controller"
I0307 18:43:10.852039 1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
I0307 18:43:10.852391 1 config.go:444] "Starting node config controller"
I0307 18:43:10.852397 1 shared_informer.go:273] Waiting for caches to sync for node config
I0307 18:43:10.952964 1 shared_informer.go:280] Caches are synced for node config
I0307 18:43:10.953166 1 shared_informer.go:280] Caches are synced for endpoint slice config
I0307 18:43:10.953191 1 shared_informer.go:280] Caches are synced for service config
*
* ==> kube-scheduler [15dec3a5065b] <==
* W0307 18:43:08.507261 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0307 18:43:08.507326 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0307 18:43:08.512853 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0307 18:43:08.513008 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0307 18:43:08.513306 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0307 18:43:08.513504 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0307 18:43:08.513908 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0307 18:43:08.515006 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0307 18:43:08.515568 1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0307 18:43:08.515621 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0307 18:43:08.516261 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0307 18:43:08.516317 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0307 18:43:08.516592 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0307 18:43:08.516640 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0307 18:43:08.517124 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0307 18:43:08.517191 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0307 18:43:08.517501 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0307 18:43:08.517560 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0307 18:43:08.517822 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0307 18:43:08.517862 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0307 18:43:08.518027 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0307 18:43:08.518205 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0307 18:43:08.521904 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0307 18:43:08.522043 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
I0307 18:43:09.589200 1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [807b657d81c5] <==
* W0307 18:42:51.071495 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.61.47:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0307 18:42:51.071539 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.61.47:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
W0307 18:42:51.104237 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.61.47:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0307 18:42:51.104278 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.61.47:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
W0307 18:42:51.192505 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.61.47:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0307 18:42:51.192580 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.61.47:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
W0307 18:42:51.250181 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.61.47:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0307 18:42:51.250225 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.61.47:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
W0307 18:42:51.381925 1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.61.47:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0307 18:42:51.381970 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.61.47:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
W0307 18:42:51.462136 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.61.47:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0307 18:42:51.462172 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.61.47:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
W0307 18:42:51.562118 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.61.47:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0307 18:42:51.562171 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.61.47:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
W0307 18:42:51.637946 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.61.47:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0307 18:42:51.638063 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.61.47:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
W0307 18:42:51.694375 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.61.47:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0307 18:42:51.694442 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.61.47:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
W0307 18:42:54.148269 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.61.47:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0307 18:42:54.148330 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.61.47:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
I0307 18:42:54.194237 1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
I0307 18:42:54.194515 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
E0307 18:42:54.194750 1 shared_informer.go:276] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0307 18:42:54.194764 1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
E0307 18:42:54.195288 1 run.go:74] "command failed" err="finished without leader elect"
*
* ==> kubelet <==
* -- Journal begins at Tue 2023-03-07 18:40:45 UTC, ends at Tue 2023-03-07 18:43:28 UTC. --
Mar 07 18:43:01 pause-763583 kubelet[7260]: I0307 18:43:01.355102 7260 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c98952be90afd76e358cf45d199ab1e-k8s-certs\") pod \"kube-controller-manager-pause-763583\" (UID: \"8c98952be90afd76e358cf45d199ab1e\") " pod="kube-system/kube-controller-manager-pause-763583"
Mar 07 18:43:01 pause-763583 kubelet[7260]: I0307 18:43:01.355150 7260 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c98952be90afd76e358cf45d199ab1e-kubeconfig\") pod \"kube-controller-manager-pause-763583\" (UID: \"8c98952be90afd76e358cf45d199ab1e\") " pod="kube-system/kube-controller-manager-pause-763583"
Mar 07 18:43:01 pause-763583 kubelet[7260]: I0307 18:43:01.355215 7260 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c98952be90afd76e358cf45d199ab1e-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-763583\" (UID: \"8c98952be90afd76e358cf45d199ab1e\") " pod="kube-system/kube-controller-manager-pause-763583"
Mar 07 18:43:01 pause-763583 kubelet[7260]: I0307 18:43:01.355269 7260 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c98952be90afd76e358cf45d199ab1e-ca-certs\") pod \"kube-controller-manager-pause-763583\" (UID: \"8c98952be90afd76e358cf45d199ab1e\") " pod="kube-system/kube-controller-manager-pause-763583"
Mar 07 18:43:01 pause-763583 kubelet[7260]: I0307 18:43:01.543827 7260 scope.go:115] "RemoveContainer" containerID="807b657d81c5ae3073c6f68f516057e0eae61acc433516c80f3bb9012955718d"
Mar 07 18:43:01 pause-763583 kubelet[7260]: I0307 18:43:01.557775 7260 scope.go:115] "RemoveContainer" containerID="6e5a6ab1db37433428780215d9fa2f4e75c85f05e012cda4b5b5aeb1eb7a2ec9"
Mar 07 18:43:01 pause-763583 kubelet[7260]: I0307 18:43:01.592111 7260 scope.go:115] "RemoveContainer" containerID="94878c02897cd0b600b698a111410b78a3316213e64110ed9311fa5516a61a2a"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.630105 7260 kubelet_node_status.go:108] "Node was previously registered" node="pause-763583"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.631113 7260 kubelet_node_status.go:73] "Successfully registered node" node="pause-763583"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.634873 7260 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.636591 7260 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.828530 7260 apiserver.go:52] "Watching apiserver"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.832912 7260 topology_manager.go:210] "Topology Admit Handler"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.833153 7260 topology_manager.go:210] "Topology Admit Handler"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.848920 7260 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.918995 7260 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdxkx\" (UniqueName: \"kubernetes.io/projected/1976b181-14ab-48a2-bb64-2eb3b1ecf436-kube-api-access-wdxkx\") pod \"kube-proxy-89rb5\" (UID: \"1976b181-14ab-48a2-bb64-2eb3b1ecf436\") " pod="kube-system/kube-proxy-89rb5"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.919768 7260 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1976b181-14ab-48a2-bb64-2eb3b1ecf436-kube-proxy\") pod \"kube-proxy-89rb5\" (UID: \"1976b181-14ab-48a2-bb64-2eb3b1ecf436\") " pod="kube-system/kube-proxy-89rb5"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.920110 7260 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1976b181-14ab-48a2-bb64-2eb3b1ecf436-lib-modules\") pod \"kube-proxy-89rb5\" (UID: \"1976b181-14ab-48a2-bb64-2eb3b1ecf436\") " pod="kube-system/kube-proxy-89rb5"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.920464 7260 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e63f9141-89ed-4e4d-b1aa-86ad76074f81-config-volume\") pod \"coredns-787d4945fb-n77tj\" (UID: \"e63f9141-89ed-4e4d-b1aa-86ad76074f81\") " pod="kube-system/coredns-787d4945fb-n77tj"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.920777 7260 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1976b181-14ab-48a2-bb64-2eb3b1ecf436-xtables-lock\") pod \"kube-proxy-89rb5\" (UID: \"1976b181-14ab-48a2-bb64-2eb3b1ecf436\") " pod="kube-system/kube-proxy-89rb5"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.921139 7260 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nw72\" (UniqueName: \"kubernetes.io/projected/e63f9141-89ed-4e4d-b1aa-86ad76074f81-kube-api-access-2nw72\") pod \"coredns-787d4945fb-n77tj\" (UID: \"e63f9141-89ed-4e4d-b1aa-86ad76074f81\") " pod="kube-system/coredns-787d4945fb-n77tj"
Mar 07 18:43:08 pause-763583 kubelet[7260]: I0307 18:43:08.921275 7260 reconciler.go:41] "Reconciler: start to sync state"
Mar 07 18:43:10 pause-763583 kubelet[7260]: I0307 18:43:10.049082 7260 request.go:690] Waited for 1.021929352s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token
Mar 07 18:43:10 pause-763583 kubelet[7260]: I0307 18:43:10.339903 7260 scope.go:115] "RemoveContainer" containerID="ada79eb25afeafa814e89c049a7d167866ebe9d2b5feba46d73d8463af7416fb"
Mar 07 18:43:11 pause-763583 kubelet[7260]: I0307 18:43:11.289083 7260 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ef158124e5c02451279de07c2a084c4f41fae664112881c1c1c8a56f19a9872"
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-763583 -n pause-763583
helpers_test.go:261: (dbg) Run: kubectl --context pause-763583 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (97.00s)