=== RUN TestPause/serial/SecondStartNoReconfiguration
pause_test.go:90: (dbg) Run: out/minikube-linux-amd64 start -p pause-20210915203607-209669 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=containerd
E0915 20:38:23.393663 209669 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/profiles/addons-20210915195018-209669/client.crt: no such file or directory
E0915 20:38:43.191763 209669 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/profiles/functional-20210915195742-209669/client.crt: no such file or directory
=== CONT TestPause/serial/SecondStartNoReconfiguration
pause_test.go:90: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210915203607-209669 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=containerd: (1m9.346645863s)
pause_test.go:98: expected the second start log output to include "The running cluster does not require reconfiguration" but got:
-- stdout --
* [pause-20210915203607-209669] minikube v1.23.0 on Debian 9.13 (kvm/amd64)
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/kubeconfig
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube
- MINIKUBE_LOCATION=12425
* Using the kvm2 driver based on existing profile
* Starting control plane node pause-20210915203607-209669 in cluster pause-20210915203607-209669
* Updating the running kvm2 "pause-20210915203607-209669" VM ...
* Preparing Kubernetes v1.22.1 on containerd 1.4.9 ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
- Want kubectl v1.22.1? Try 'minikube kubectl -- get pods -A'
* Done! kubectl is now configured to use "pause-20210915203607-209669" cluster and "default" namespace by default
-- /stdout --
** stderr **
I0915 20:38:07.190339 245575 out.go:298] Setting OutFile to fd 1 ...
I0915 20:38:07.190433 245575 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0915 20:38:07.190438 245575 out.go:311] Setting ErrFile to fd 2...
I0915 20:38:07.190444 245575 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0915 20:38:07.190597 245575 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/bin
I0915 20:38:07.190874 245575 out.go:305] Setting JSON to false
I0915 20:38:07.243790 245575 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":19250,"bootTime":1631719038,"procs":186,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
I0915 20:38:07.243914 245575 start.go:121] virtualization: kvm guest
I0915 20:38:07.246662 245575 out.go:177] * [pause-20210915203607-209669] minikube v1.23.0 on Debian 9.13 (kvm/amd64)
I0915 20:38:07.246809 245575 notify.go:169] Checking for updates...
I0915 20:38:07.248394 245575 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/kubeconfig
I0915 20:38:07.249868 245575 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0915 20:38:07.252069 245575 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube
I0915 20:38:07.253596 245575 out.go:177] - MINIKUBE_LOCATION=12425
I0915 20:38:07.254100 245575 config.go:177] Loaded profile config "pause-20210915203607-209669": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.1
I0915 20:38:07.254668 245575 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0915 20:38:07.254725 245575 main.go:130] libmachine: Launching plugin server for driver kvm2
I0915 20:38:07.270264 245575 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43377
I0915 20:38:07.270950 245575 main.go:130] libmachine: () Calling .GetVersion
I0915 20:38:07.271630 245575 main.go:130] libmachine: Using API Version 1
I0915 20:38:07.271654 245575 main.go:130] libmachine: () Calling .SetConfigRaw
I0915 20:38:07.272001 245575 main.go:130] libmachine: () Calling .GetMachineName
I0915 20:38:07.272194 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .DriverName
I0915 20:38:07.272587 245575 driver.go:343] Setting default libvirt URI to qemu:///system
I0915 20:38:07.273036 245575 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0915 20:38:07.273082 245575 main.go:130] libmachine: Launching plugin server for driver kvm2
I0915 20:38:07.287777 245575 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40265
I0915 20:38:07.288602 245575 main.go:130] libmachine: () Calling .GetVersion
I0915 20:38:07.289234 245575 main.go:130] libmachine: Using API Version 1
I0915 20:38:07.289264 245575 main.go:130] libmachine: () Calling .SetConfigRaw
I0915 20:38:07.289679 245575 main.go:130] libmachine: () Calling .GetMachineName
I0915 20:38:07.289913 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .DriverName
I0915 20:38:07.346069 245575 out.go:177] * Using the kvm2 driver based on existing profile
I0915 20:38:07.346108 245575 start.go:278] selected driver: kvm2
I0915 20:38:07.346116 245575 start.go:751] validating driver "kvm2" against &{Name:pause-20210915203607-209669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12425/minikube-v1.23.0-1631662909-12425.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:pause-20210915203607-209669 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
I0915 20:38:07.346238 245575 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0915 20:38:07.347012 245575 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0915 20:38:07.347226 245575 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0915 20:38:07.361751 245575 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.23.0
I0915 20:38:07.362713 245575 cni.go:93] Creating CNI manager for ""
I0915 20:38:07.362730 245575 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
I0915 20:38:07.362741 245575 start_flags.go:278] config:
{Name:pause-20210915203607-209669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12425/minikube-v1.23.0-1631662909-12425.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:pause-20210915203607-209669 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
I0915 20:38:07.362897 245575 iso.go:123] acquiring lock: {Name:mk297a0af7a5c0740af600c0c91a5b7e9ddafd38 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0915 20:38:07.365048 245575 out.go:177] * Starting control plane node pause-20210915203607-209669 in cluster pause-20210915203607-209669
I0915 20:38:07.365073 245575 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime containerd
I0915 20:38:07.365111 245575 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v12-v1.22.1-containerd-overlay2-amd64.tar.lz4
I0915 20:38:07.365125 245575 cache.go:57] Caching tarball of preloaded images
I0915 20:38:07.365226 245575 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v12-v1.22.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0915 20:38:07.365245 245575 cache.go:60] Finished verifying existence of preloaded tar for v1.22.1 on containerd
I0915 20:38:07.365364 245575 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/profiles/pause-20210915203607-209669/config.json ...
I0915 20:38:07.365562 245575 cache.go:206] Successfully downloaded all kic artifacts
I0915 20:38:07.365589 245575 start.go:313] acquiring machines lock for pause-20210915203607-209669: {Name:mk02ff60ae5e10e39476a23d3a5c6dd42c42335e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0915 20:38:33.059785 245575 start.go:317] acquired machines lock for "pause-20210915203607-209669" in 25.694173203s
I0915 20:38:33.059835 245575 start.go:93] Skipping create...Using existing machine configuration
I0915 20:38:33.059845 245575 fix.go:55] fixHost starting:
I0915 20:38:33.060309 245575 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0915 20:38:33.060364 245575 main.go:130] libmachine: Launching plugin server for driver kvm2
I0915 20:38:33.075374 245575 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45711
I0915 20:38:33.080569 245575 main.go:130] libmachine: () Calling .GetVersion
I0915 20:38:33.081181 245575 main.go:130] libmachine: Using API Version 1
I0915 20:38:33.081211 245575 main.go:130] libmachine: () Calling .SetConfigRaw
I0915 20:38:33.081976 245575 main.go:130] libmachine: () Calling .GetMachineName
I0915 20:38:33.082383 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .DriverName
I0915 20:38:33.082548 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetState
I0915 20:38:33.087788 245575 fix.go:108] recreateIfNeeded on pause-20210915203607-209669: state=Running err=<nil>
W0915 20:38:33.087817 245575 fix.go:134] unexpected machine state, will restart: <nil>
I0915 20:38:33.089654 245575 out.go:177] * Updating the running kvm2 "pause-20210915203607-209669" VM ...
I0915 20:38:33.089686 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .DriverName
I0915 20:38:33.090157 245575 machine.go:88] provisioning docker machine ...
I0915 20:38:33.090205 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .DriverName
I0915 20:38:33.093295 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetMachineName
I0915 20:38:33.093470 245575 buildroot.go:166] provisioning hostname "pause-20210915203607-209669"
I0915 20:38:33.093492 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetMachineName
I0915 20:38:33.093655 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHHostname
I0915 20:38:33.099964 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | domain pause-20210915203607-209669 has defined MAC address 52:54:00:69:a4:51 in network mk-pause-20210915203607-209669
I0915 20:38:33.100337 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:a4:51", ip: ""} in network mk-pause-20210915203607-209669: {Iface:virbr1 ExpiryTime:2021-09-15 21:36:22 +0000 UTC Type:0 Mac:52:54:00:69:a4:51 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:pause-20210915203607-209669 Clientid:01:52:54:00:69:a4:51}
I0915 20:38:33.100371 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | domain pause-20210915203607-209669 has defined IP address 192.168.39.238 and MAC address 52:54:00:69:a4:51 in network mk-pause-20210915203607-209669
I0915 20:38:33.100714 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHPort
I0915 20:38:33.101040 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHKeyPath
I0915 20:38:33.101233 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHKeyPath
I0915 20:38:33.101382 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHUsername
I0915 20:38:33.101573 245575 main.go:130] libmachine: Using SSH client type: native
I0915 20:38:33.101840 245575 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1c40] 0x7a4d20 <nil> [] 0s} 192.168.39.238 22 <nil> <nil>}
I0915 20:38:33.101860 245575 main.go:130] libmachine: About to run SSH command:
sudo hostname pause-20210915203607-209669 && echo "pause-20210915203607-209669" | sudo tee /etc/hostname
I0915 20:38:33.310324 245575 main.go:130] libmachine: SSH cmd err, output: <nil>: pause-20210915203607-209669
I0915 20:38:33.310361 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHHostname
I0915 20:38:33.316777 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | domain pause-20210915203607-209669 has defined MAC address 52:54:00:69:a4:51 in network mk-pause-20210915203607-209669
I0915 20:38:33.317143 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:a4:51", ip: ""} in network mk-pause-20210915203607-209669: {Iface:virbr1 ExpiryTime:2021-09-15 21:36:22 +0000 UTC Type:0 Mac:52:54:00:69:a4:51 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:pause-20210915203607-209669 Clientid:01:52:54:00:69:a4:51}
I0915 20:38:33.317182 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | domain pause-20210915203607-209669 has defined IP address 192.168.39.238 and MAC address 52:54:00:69:a4:51 in network mk-pause-20210915203607-209669
I0915 20:38:33.317334 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHPort
I0915 20:38:33.317537 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHKeyPath
I0915 20:38:33.317737 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHKeyPath
I0915 20:38:33.317890 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHUsername
I0915 20:38:33.318083 245575 main.go:130] libmachine: Using SSH client type: native
I0915 20:38:33.318248 245575 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1c40] 0x7a4d20 <nil> [] 0s} 192.168.39.238 22 <nil> <nil>}
I0915 20:38:33.318297 245575 main.go:130] libmachine: About to run SSH command:
if ! grep -xq '.*\spause-20210915203607-209669' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20210915203607-209669/g' /etc/hosts;
else
echo '127.0.1.1 pause-20210915203607-209669' | sudo tee -a /etc/hosts;
fi
fi
I0915 20:38:33.475159 245575 main.go:130] libmachine: SSH cmd err, output: <nil>:
I0915 20:38:33.475196 245575 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube}
I0915 20:38:33.475221 245575 buildroot.go:174] setting up certificates
I0915 20:38:33.475234 245575 provision.go:83] configureAuth start
I0915 20:38:33.475247 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetMachineName
I0915 20:38:33.475549 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetIP
I0915 20:38:33.482260 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | domain pause-20210915203607-209669 has defined MAC address 52:54:00:69:a4:51 in network mk-pause-20210915203607-209669
I0915 20:38:33.482758 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:a4:51", ip: ""} in network mk-pause-20210915203607-209669: {Iface:virbr1 ExpiryTime:2021-09-15 21:36:22 +0000 UTC Type:0 Mac:52:54:00:69:a4:51 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:pause-20210915203607-209669 Clientid:01:52:54:00:69:a4:51}
I0915 20:38:33.482839 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | domain pause-20210915203607-209669 has defined IP address 192.168.39.238 and MAC address 52:54:00:69:a4:51 in network mk-pause-20210915203607-209669
I0915 20:38:33.483043 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHHostname
I0915 20:38:33.489087 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | domain pause-20210915203607-209669 has defined MAC address 52:54:00:69:a4:51 in network mk-pause-20210915203607-209669
I0915 20:38:33.489550 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:a4:51", ip: ""} in network mk-pause-20210915203607-209669: {Iface:virbr1 ExpiryTime:2021-09-15 21:36:22 +0000 UTC Type:0 Mac:52:54:00:69:a4:51 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:pause-20210915203607-209669 Clientid:01:52:54:00:69:a4:51}
I0915 20:38:33.489583 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | domain pause-20210915203607-209669 has defined IP address 192.168.39.238 and MAC address 52:54:00:69:a4:51 in network mk-pause-20210915203607-209669
I0915 20:38:33.489947 245575 provision.go:138] copyHostCerts
I0915 20:38:33.490053 245575 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/ca.pem, removing ...
I0915 20:38:33.490096 245575 exec_runner.go:208] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/ca.pem
I0915 20:38:33.490198 245575 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/ca.pem (1078 bytes)
I0915 20:38:33.490369 245575 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cert.pem, removing ...
I0915 20:38:33.490391 245575 exec_runner.go:208] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cert.pem
I0915 20:38:33.490430 245575 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cert.pem (1123 bytes)
I0915 20:38:33.490582 245575 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/key.pem, removing ...
I0915 20:38:33.490595 245575 exec_runner.go:208] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/key.pem
I0915 20:38:33.490621 245575 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/key.pem (1679 bytes)
I0915 20:38:33.490683 245575 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/certs/ca-key.pem org=jenkins.pause-20210915203607-209669 san=[192.168.39.238 192.168.39.238 localhost 127.0.0.1 minikube pause-20210915203607-209669]
I0915 20:38:33.596990 245575 provision.go:172] copyRemoteCerts
I0915 20:38:33.597085 245575 ssh_runner.go:152] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0915 20:38:33.597120 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHHostname
I0915 20:38:33.603624 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | domain pause-20210915203607-209669 has defined MAC address 52:54:00:69:a4:51 in network mk-pause-20210915203607-209669
I0915 20:38:33.603981 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:a4:51", ip: ""} in network mk-pause-20210915203607-209669: {Iface:virbr1 ExpiryTime:2021-09-15 21:36:22 +0000 UTC Type:0 Mac:52:54:00:69:a4:51 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:pause-20210915203607-209669 Clientid:01:52:54:00:69:a4:51}
I0915 20:38:33.604009 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | domain pause-20210915203607-209669 has defined IP address 192.168.39.238 and MAC address 52:54:00:69:a4:51 in network mk-pause-20210915203607-209669
I0915 20:38:33.604318 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHPort
I0915 20:38:33.604522 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHKeyPath
I0915 20:38:33.604739 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHUsername
I0915 20:38:33.604876 245575 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/machines/pause-20210915203607-209669/id_rsa Username:docker}
I0915 20:38:33.749195 245575 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0915 20:38:33.796294 245575 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
I0915 20:38:33.863176 245575 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0915 20:38:33.908252 245575 provision.go:86] duration metric: configureAuth took 433.002948ms
I0915 20:38:33.908334 245575 buildroot.go:189] setting minikube options for container-runtime
I0915 20:38:33.908562 245575 config.go:177] Loaded profile config "pause-20210915203607-209669": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.1
I0915 20:38:33.908608 245575 machine.go:91] provisioned docker machine in 818.439077ms
I0915 20:38:33.908638 245575 start.go:267] post-start starting for "pause-20210915203607-209669" (driver="kvm2")
I0915 20:38:33.908653 245575 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0915 20:38:33.908687 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .DriverName
I0915 20:38:33.909011 245575 ssh_runner.go:152] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0915 20:38:33.909048 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHHostname
I0915 20:38:33.915556 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | domain pause-20210915203607-209669 has defined MAC address 52:54:00:69:a4:51 in network mk-pause-20210915203607-209669
I0915 20:38:33.916002 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:a4:51", ip: ""} in network mk-pause-20210915203607-209669: {Iface:virbr1 ExpiryTime:2021-09-15 21:36:22 +0000 UTC Type:0 Mac:52:54:00:69:a4:51 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:pause-20210915203607-209669 Clientid:01:52:54:00:69:a4:51}
I0915 20:38:33.916032 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | domain pause-20210915203607-209669 has defined IP address 192.168.39.238 and MAC address 52:54:00:69:a4:51 in network mk-pause-20210915203607-209669
I0915 20:38:33.916360 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHPort
I0915 20:38:33.916558 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHKeyPath
I0915 20:38:33.916732 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHUsername
I0915 20:38:33.916887 245575 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/machines/pause-20210915203607-209669/id_rsa Username:docker}
I0915 20:38:34.040810 245575 ssh_runner.go:152] Run: cat /etc/os-release
I0915 20:38:34.052367 245575 info.go:137] Remote host: Buildroot 2021.02.4
I0915 20:38:34.052404 245575 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/addons for local assets ...
I0915 20:38:34.052472 245575 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/files for local assets ...
I0915 20:38:34.052560 245575 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/files/etc/ssl/certs/2096692.pem -> 2096692.pem in /etc/ssl/certs
I0915 20:38:34.052674 245575 ssh_runner.go:152] Run: sudo mkdir -p /etc/ssl/certs
I0915 20:38:34.069936 245575 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/files/etc/ssl/certs/2096692.pem --> /etc/ssl/certs/2096692.pem (1708 bytes)
I0915 20:38:34.115802 245575 start.go:270] post-start completed in 207.140736ms
I0915 20:38:34.115923 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .DriverName
I0915 20:38:34.116340 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHHostname
I0915 20:38:34.122843 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | domain pause-20210915203607-209669 has defined MAC address 52:54:00:69:a4:51 in network mk-pause-20210915203607-209669
I0915 20:38:34.123312 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:a4:51", ip: ""} in network mk-pause-20210915203607-209669: {Iface:virbr1 ExpiryTime:2021-09-15 21:36:22 +0000 UTC Type:0 Mac:52:54:00:69:a4:51 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:pause-20210915203607-209669 Clientid:01:52:54:00:69:a4:51}
I0915 20:38:34.123387 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | domain pause-20210915203607-209669 has defined IP address 192.168.39.238 and MAC address 52:54:00:69:a4:51 in network mk-pause-20210915203607-209669
I0915 20:38:34.123716 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHPort
I0915 20:38:34.123919 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHKeyPath
I0915 20:38:34.124099 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHKeyPath
I0915 20:38:34.124366 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHUsername
I0915 20:38:34.124608 245575 main.go:130] libmachine: Using SSH client type: native
I0915 20:38:34.124833 245575 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1c40] 0x7a4d20 <nil> [] 0s} 192.168.39.238 22 <nil> <nil>}
I0915 20:38:34.124861 245575 main.go:130] libmachine: About to run SSH command:
date +%s.%N
I0915 20:38:34.303571 245575 main.go:130] libmachine: SSH cmd err, output: <nil>: 1631738314.302663749
I0915 20:38:34.303601 245575 fix.go:212] guest clock: 1631738314.302663749
I0915 20:38:34.303615 245575 fix.go:225] Guest: 2021-09-15 20:38:34.302663749 +0000 UTC Remote: 2021-09-15 20:38:34.116315805 +0000 UTC m=+26.989106903 (delta=186.347944ms)
I0915 20:38:34.303661 245575 fix.go:196] guest clock delta is within tolerance: 186.347944ms
I0915 20:38:34.303669 245575 fix.go:57] fixHost completed within 1.243823891s
I0915 20:38:34.303677 245575 start.go:80] releasing machines lock for "pause-20210915203607-209669", held for 1.243861043s
I0915 20:38:34.303731 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .DriverName
I0915 20:38:34.304132 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetIP
I0915 20:38:34.311259 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | domain pause-20210915203607-209669 has defined MAC address 52:54:00:69:a4:51 in network mk-pause-20210915203607-209669
I0915 20:38:34.311753 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:a4:51", ip: ""} in network mk-pause-20210915203607-209669: {Iface:virbr1 ExpiryTime:2021-09-15 21:36:22 +0000 UTC Type:0 Mac:52:54:00:69:a4:51 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:pause-20210915203607-209669 Clientid:01:52:54:00:69:a4:51}
I0915 20:38:34.311834 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | domain pause-20210915203607-209669 has defined IP address 192.168.39.238 and MAC address 52:54:00:69:a4:51 in network mk-pause-20210915203607-209669
I0915 20:38:34.312248 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .DriverName
I0915 20:38:34.312448 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .DriverName
I0915 20:38:34.313082 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .DriverName
I0915 20:38:34.313383 245575 ssh_runner.go:152] Run: systemctl --version
I0915 20:38:34.313397 245575 ssh_runner.go:152] Run: curl -sS -m 2 https://k8s.gcr.io/
I0915 20:38:34.313412 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHHostname
I0915 20:38:34.313452 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHHostname
I0915 20:38:34.321735 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | domain pause-20210915203607-209669 has defined MAC address 52:54:00:69:a4:51 in network mk-pause-20210915203607-209669
I0915 20:38:34.322285 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | domain pause-20210915203607-209669 has defined MAC address 52:54:00:69:a4:51 in network mk-pause-20210915203607-209669
I0915 20:38:34.322727 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:a4:51", ip: ""} in network mk-pause-20210915203607-209669: {Iface:virbr1 ExpiryTime:2021-09-15 21:36:22 +0000 UTC Type:0 Mac:52:54:00:69:a4:51 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:pause-20210915203607-209669 Clientid:01:52:54:00:69:a4:51}
I0915 20:38:34.322753 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | domain pause-20210915203607-209669 has defined IP address 192.168.39.238 and MAC address 52:54:00:69:a4:51 in network mk-pause-20210915203607-209669
I0915 20:38:34.322781 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:a4:51", ip: ""} in network mk-pause-20210915203607-209669: {Iface:virbr1 ExpiryTime:2021-09-15 21:36:22 +0000 UTC Type:0 Mac:52:54:00:69:a4:51 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:pause-20210915203607-209669 Clientid:01:52:54:00:69:a4:51}
I0915 20:38:34.322805 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | domain pause-20210915203607-209669 has defined IP address 192.168.39.238 and MAC address 52:54:00:69:a4:51 in network mk-pause-20210915203607-209669
I0915 20:38:34.322937 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHPort
I0915 20:38:34.323217 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHPort
I0915 20:38:34.323257 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHKeyPath
I0915 20:38:34.323478 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHKeyPath
I0915 20:38:34.323513 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHUsername
I0915 20:38:34.323632 245575 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/machines/pause-20210915203607-209669/id_rsa Username:docker}
I0915 20:38:34.323927 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHUsername
I0915 20:38:34.324124 245575 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/machines/pause-20210915203607-209669/id_rsa Username:docker}
I0915 20:38:34.574492 245575 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime containerd
I0915 20:38:34.574655 245575 ssh_runner.go:152] Run: sudo crictl images --output json
I0915 20:38:34.632560 245575 containerd.go:657] all images are preloaded for containerd runtime.
I0915 20:38:34.632591 245575 containerd.go:561] Images already preloaded, skipping extraction
I0915 20:38:34.632664 245575 ssh_runner.go:152] Run: sudo systemctl stop -f crio
I0915 20:38:34.667772 245575 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service crio
I0915 20:38:34.709754 245575 docker.go:156] disabling docker service ...
I0915 20:38:34.709827 245575 ssh_runner.go:152] Run: sudo systemctl stop -f docker.socket
I0915 20:38:34.737831 245575 ssh_runner.go:152] Run: sudo systemctl stop -f docker.service
I0915 20:38:34.760009 245575 ssh_runner.go:152] Run: sudo systemctl disable docker.socket
I0915 20:38:35.050283 245575 ssh_runner.go:152] Run: sudo systemctl mask docker.service
I0915 20:38:35.303746 245575 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service docker
I0915 20:38:35.325630 245575 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0915 20:38:35.363722 245575 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuY2dyb3Vwc10KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLmNyaV0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2l
tYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My41IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKCVtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmRdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jXQogICAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgICAgU3lzdGVtZENncm91cCA9IGZhbHNlCgogICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmRdCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLnVudHJ1c3RlZF9
3b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICBbcGx1Z2lucy5jcmkuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0LmQiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy5kaWZmLXNlcnZpY2VdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy5zY2hlZHVsZXJdCiAgICBwYXVzZV90aHJlc2hvbGQgPSAwLjAyCiAgICBkZWxldGlvbl90aHJlc2hvbGQgPSAwCiAgICBtdXRhdGlvbl90aHJlc2hvbGQgPSAxMDAKICAgIHNjaGVkdWxlX2RlbGF5ID0gIjBzIgogICAgc3RhcnR1cF9kZWxheSA9ICIxMDBtcyIK" | base64 -d | sudo tee /etc/containerd/config.toml"
I0915 20:38:35.398364 245575 ssh_runner.go:152] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0915 20:38:35.414747 245575 ssh_runner.go:152] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0915 20:38:35.434342 245575 ssh_runner.go:152] Run: sudo systemctl daemon-reload
I0915 20:38:35.627752 245575 ssh_runner.go:152] Run: sudo systemctl restart containerd
I0915 20:38:35.687625 245575 start.go:393] Will wait 60s for socket path /run/containerd/containerd.sock
I0915 20:38:35.687692 245575 ssh_runner.go:152] Run: stat /run/containerd/containerd.sock
I0915 20:38:35.699584 245575 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0915 20:38:36.804831 245575 ssh_runner.go:152] Run: stat /run/containerd/containerd.sock
I0915 20:38:36.810845 245575 retry.go:31] will retry after 2.160763633s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0915 20:38:38.971867 245575 ssh_runner.go:152] Run: stat /run/containerd/containerd.sock
I0915 20:38:38.980401 245575 start.go:414] Will wait 60s for crictl version
I0915 20:38:38.980482 245575 ssh_runner.go:152] Run: sudo crictl version
I0915 20:38:39.039675 245575 start.go:423] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.4.9
RuntimeApiVersion: v1alpha2
I0915 20:38:39.039748 245575 ssh_runner.go:152] Run: containerd --version
I0915 20:38:39.081468 245575 ssh_runner.go:152] Run: containerd --version
I0915 20:38:39.127273 245575 out.go:177] * Preparing Kubernetes v1.22.1 on containerd 1.4.9 ...
I0915 20:38:39.127319 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetIP
I0915 20:38:39.133157 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | domain pause-20210915203607-209669 has defined MAC address 52:54:00:69:a4:51 in network mk-pause-20210915203607-209669
I0915 20:38:39.133595 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:a4:51", ip: ""} in network mk-pause-20210915203607-209669: {Iface:virbr1 ExpiryTime:2021-09-15 21:36:22 +0000 UTC Type:0 Mac:52:54:00:69:a4:51 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:pause-20210915203607-209669 Clientid:01:52:54:00:69:a4:51}
I0915 20:38:39.133633 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | domain pause-20210915203607-209669 has defined IP address 192.168.39.238 and MAC address 52:54:00:69:a4:51 in network mk-pause-20210915203607-209669
I0915 20:38:39.133906 245575 ssh_runner.go:152] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0915 20:38:39.141205 245575 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime containerd
I0915 20:38:39.141285 245575 ssh_runner.go:152] Run: sudo crictl images --output json
I0915 20:38:39.200515 245575 containerd.go:657] all images are preloaded for containerd runtime.
I0915 20:38:39.200544 245575 containerd.go:561] Images already preloaded, skipping extraction
I0915 20:38:39.200606 245575 ssh_runner.go:152] Run: sudo crictl images --output json
I0915 20:38:39.252287 245575 containerd.go:657] all images are preloaded for containerd runtime.
I0915 20:38:39.252315 245575 cache_images.go:78] Images are preloaded, skipping loading
I0915 20:38:39.252369 245575 ssh_runner.go:152] Run: sudo crictl info
I0915 20:38:39.330408 245575 cni.go:93] Creating CNI manager for ""
I0915 20:38:39.330438 245575 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
I0915 20:38:39.330450 245575 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0915 20:38:39.330466 245575 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.22.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20210915203607-209669 NodeName:pause-20210915203607-209669 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0915 20:38:39.330686 245575 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.238
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "pause-20210915203607-209669"
kubeletExtraArgs:
node-ip: 192.168.39.238
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.22.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0915 20:38:39.330829 245575 kubeadm.go:909] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.22.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=pause-20210915203607-209669 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.238 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.22.1 ClusterName:pause-20210915203607-209669 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0915 20:38:39.330896 245575 ssh_runner.go:152] Run: sudo ls /var/lib/minikube/binaries/v1.22.1
I0915 20:38:39.351406 245575 binaries.go:44] Found k8s binaries, skipping transfer
I0915 20:38:39.351481 245575 ssh_runner.go:152] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0915 20:38:39.370022 245575 ssh_runner.go:319] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (543 bytes)
I0915 20:38:39.409896 245575 ssh_runner.go:319] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0915 20:38:39.457272 245575 ssh_runner.go:319] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2083 bytes)
I0915 20:38:39.492505 245575 ssh_runner.go:152] Run: grep 192.168.39.238 control-plane.minikube.internal$ /etc/hosts
I0915 20:38:39.500299 245575 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/profiles/pause-20210915203607-209669 for IP: 192.168.39.238
I0915 20:38:39.500395 245575 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/ca.key
I0915 20:38:39.500422 245575 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/proxy-client-ca.key
I0915 20:38:39.500490 245575 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/profiles/pause-20210915203607-209669/client.key
I0915 20:38:39.500513 245575 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/profiles/pause-20210915203607-209669/apiserver.key.db159236
I0915 20:38:39.500539 245575 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/profiles/pause-20210915203607-209669/proxy-client.key
I0915 20:38:39.500676 245575 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/certs/209669.pem (1338 bytes)
W0915 20:38:39.500734 245575 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/certs/209669_empty.pem, impossibly tiny 0 bytes
I0915 20:38:39.500753 245575 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/certs/ca-key.pem (1675 bytes)
I0915 20:38:39.500792 245575 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/certs/ca.pem (1078 bytes)
I0915 20:38:39.500823 245575 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/certs/cert.pem (1123 bytes)
I0915 20:38:39.500857 245575 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/certs/key.pem (1679 bytes)
I0915 20:38:39.500911 245575 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/files/etc/ssl/certs/2096692.pem (1708 bytes)
I0915 20:38:39.502288 245575 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/profiles/pause-20210915203607-209669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0915 20:38:39.555380 245575 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/profiles/pause-20210915203607-209669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0915 20:38:39.599955 245575 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/profiles/pause-20210915203607-209669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0915 20:38:39.688714 245575 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/profiles/pause-20210915203607-209669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0915 20:38:39.744160 245575 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0915 20:38:39.813999 245575 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0915 20:38:39.868268 245575 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0915 20:38:39.923612 245575 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0915 20:38:39.997457 245575 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/files/etc/ssl/certs/2096692.pem --> /usr/share/ca-certificates/2096692.pem (1708 bytes)
I0915 20:38:40.054949 245575 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0915 20:38:40.106574 245575 ssh_runner.go:319] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/certs/209669.pem --> /usr/share/ca-certificates/209669.pem (1338 bytes)
I0915 20:38:40.165104 245575 ssh_runner.go:319] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0915 20:38:40.197296 245575 ssh_runner.go:152] Run: openssl version
I0915 20:38:40.218819 245575 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2096692.pem && ln -fs /usr/share/ca-certificates/2096692.pem /etc/ssl/certs/2096692.pem"
I0915 20:38:40.244301 245575 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/2096692.pem
I0915 20:38:40.268938 245575 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Sep 15 19:57 /usr/share/ca-certificates/2096692.pem
I0915 20:38:40.269010 245575 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2096692.pem
I0915 20:38:40.279424 245575 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2096692.pem /etc/ssl/certs/3ec20f2e.0"
I0915 20:38:40.299516 245575 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0915 20:38:40.326455 245575 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0915 20:38:40.341177 245575 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Sep 15 19:50 /usr/share/ca-certificates/minikubeCA.pem
I0915 20:38:40.341248 245575 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0915 20:38:40.351247 245575 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0915 20:38:40.379422 245575 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/209669.pem && ln -fs /usr/share/ca-certificates/209669.pem /etc/ssl/certs/209669.pem"
I0915 20:38:40.405356 245575 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/209669.pem
I0915 20:38:40.413163 245575 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Sep 15 19:57 /usr/share/ca-certificates/209669.pem
I0915 20:38:40.413243 245575 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/209669.pem
I0915 20:38:40.420753 245575 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/209669.pem /etc/ssl/certs/51391683.0"
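Note: the three cert blocks above all follow the same pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs as <hash>.0 so the system trust store can resolve it. A minimal local sketch of that step in Go (run directly rather than over SSH; the target path is taken from the log, everything else is illustrative):

// Illustrative only: replicates the hash-and-symlink step locally instead of over SSH.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of certPath and links it into
// /etc/ssl/certs as <hash>.0 so OpenSSL's hash-based CA lookup can find it.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Mirror `ln -fs`: drop any stale link, then create the symlink.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}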
I0915 20:38:40.439762 245575 kubeadm.go:390] StartCluster: {Name:pause-20210915203607-209669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12425/minikube-v1.23.0-1631662909-12425.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:pause-20210915203607-209669 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
I0915 20:38:40.439871 245575 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0915 20:38:40.439946 245575 ssh_runner.go:152] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0915 20:38:40.545374 245575 cri.go:76] found id: "c90a36c8826cb3a93ce5571908717dbce4d6e5ad1fa18ad1b8831160513af998"
I0915 20:38:40.545403 245575 cri.go:76] found id: "c5bb0978f46b75e4e46f992fbbd0b75e87860eae012833ee5eee71961cf73ec0"
I0915 20:38:40.545411 245575 cri.go:76] found id: "1998aa3f2b7fb4f017287f15b6e0cc74fbe632c72ab8fe96725dc0e913535344"
I0915 20:38:40.545417 245575 cri.go:76] found id: "98406da5a6e7fa5ff726f04de6c7bd516e915e9548f1cfaafdc570c67a9efdb3"
I0915 20:38:40.545423 245575 cri.go:76] found id: "a7d0a1d02daf89a0426def3f5431ef7b3d2901efc93472317061d8b3bdefb049"
I0915 20:38:40.545430 245575 cri.go:76] found id: "c2c4d5c63cd0645ef4d73235208f1936b90f8d174ae4bcf1af216cb1a623700b"
I0915 20:38:40.545436 245575 cri.go:76] found id: ""
I0915 20:38:40.545490 245575 ssh_runner.go:152] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I0915 20:38:40.586047 245575 cri.go:103] JSON = null
W0915 20:38:40.586100 245575 kubeadm.go:397] unpause failed: list paused: list returned 0 containers, but ps returned 6
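Note: the unpause check above compares two views of the runtime: crictl ps reports six kube-system containers, while runc list returns null, so the attempted unpause is skipped with the warning. A rough standalone sketch of that cross-check, assuming the same commands and root path shown in the log:

// Minimal sketch: compare how many kube-system containers crictl reports
// against how many runc can list under the containerd runc root.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	psOut, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(psOut))

	listOut, err := exec.Command("sudo", "runc",
		"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
	if err != nil {
		panic(err)
	}
	var runcContainers []map[string]interface{} // a literal "null" decodes to a nil slice
	if err := json.Unmarshal(listOut, &runcContainers); err != nil {
		panic(err)
	}

	if len(runcContainers) != len(ids) {
		fmt.Printf("mismatch: runc lists %d containers, crictl reports %d\n",
			len(runcContainers), len(ids))
	}
}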
I0915 20:38:40.586173 245575 ssh_runner.go:152] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0915 20:38:40.606673 245575 kubeadm.go:401] found existing configuration files, will attempt cluster restart
I0915 20:38:40.606697 245575 kubeadm.go:600] restartCluster start
I0915 20:38:40.606748 245575 ssh_runner.go:152] Run: sudo test -d /data/minikube
I0915 20:38:40.643177 245575 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0915 20:38:40.644361 245575 kubeconfig.go:93] found "pause-20210915203607-209669" server: "https://192.168.39.238:8443"
I0915 20:38:40.645249 245575 kapi.go:59] client config for pause-20210915203607-209669: &rest.Config{Host:"https://192.168.39.238:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/profiles/pause-20210915203607-209669/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/profiles/pause-20210915203607-209669/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1581620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0915 20:38:40.647150 245575 ssh_runner.go:152] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0915 20:38:40.692369 245575 api_server.go:164] Checking apiserver status ...
I0915 20:38:40.692543 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 20:38:40.718328 245575 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 20:38:40.918823 245575 api_server.go:164] Checking apiserver status ...
I0915 20:38:40.918888 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 20:38:40.941408 245575 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 20:38:41.118528 245575 api_server.go:164] Checking apiserver status ...
I0915 20:38:41.118630 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 20:38:41.138039 245575 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 20:38:41.319302 245575 api_server.go:164] Checking apiserver status ...
I0915 20:38:41.319403 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 20:38:41.340749 245575 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 20:38:41.519002 245575 api_server.go:164] Checking apiserver status ...
I0915 20:38:41.519104 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 20:38:41.566085 245575 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 20:38:41.719305 245575 api_server.go:164] Checking apiserver status ...
I0915 20:38:41.719413 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 20:38:41.758720 245575 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 20:38:41.919055 245575 api_server.go:164] Checking apiserver status ...
I0915 20:38:41.919147 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 20:38:41.972780 245575 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 20:38:42.118978 245575 api_server.go:164] Checking apiserver status ...
I0915 20:38:42.119096 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 20:38:42.171469 245575 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 20:38:42.318789 245575 api_server.go:164] Checking apiserver status ...
I0915 20:38:42.318873 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 20:38:42.354458 245575 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 20:38:42.518809 245575 api_server.go:164] Checking apiserver status ...
I0915 20:38:42.518887 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 20:38:42.551249 245575 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 20:38:42.718546 245575 api_server.go:164] Checking apiserver status ...
I0915 20:38:42.718629 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 20:38:42.769186 245575 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 20:38:42.918488 245575 api_server.go:164] Checking apiserver status ...
I0915 20:38:42.918581 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 20:38:42.982465 245575 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 20:38:43.119325 245575 api_server.go:164] Checking apiserver status ...
I0915 20:38:43.119421 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 20:38:43.165362 245575 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 20:38:43.319470 245575 api_server.go:164] Checking apiserver status ...
I0915 20:38:43.319570 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 20:38:43.378202 245575 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 20:38:43.518560 245575 api_server.go:164] Checking apiserver status ...
I0915 20:38:43.518648 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 20:38:43.578627 245575 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 20:38:43.718846 245575 api_server.go:164] Checking apiserver status ...
I0915 20:38:43.718931 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 20:38:43.744191 245575 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 20:38:43.744223 245575 api_server.go:164] Checking apiserver status ...
I0915 20:38:43.744281 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0915 20:38:43.781494 245575 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0915 20:38:43.781530 245575 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
I0915 20:38:43.781540 245575 kubeadm.go:1032] stopping kube-system containers ...
I0915 20:38:43.781553 245575 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0915 20:38:43.781615 245575 ssh_runner.go:152] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0915 20:38:43.953095 245575 cri.go:76] found id: "d0a53108ac9c335a7f9f23fe4fc0cacebb805ce2e8e234e45d65384428b0269c"
I0915 20:38:43.953136 245575 cri.go:76] found id: "85f3e86ba3483821d43d8a9db6620b5b51676ceafad4f1b3d2647491f21f21b4"
I0915 20:38:43.953143 245575 cri.go:76] found id: "92765dba236c71d174f12db9fb303c5e8e28954e4784730673a1ba475c61e593"
I0915 20:38:43.953149 245575 cri.go:76] found id: "c90a36c8826cb3a93ce5571908717dbce4d6e5ad1fa18ad1b8831160513af998"
I0915 20:38:43.953155 245575 cri.go:76] found id: "c5bb0978f46b75e4e46f992fbbd0b75e87860eae012833ee5eee71961cf73ec0"
I0915 20:38:43.953163 245575 cri.go:76] found id: "1998aa3f2b7fb4f017287f15b6e0cc74fbe632c72ab8fe96725dc0e913535344"
I0915 20:38:43.953169 245575 cri.go:76] found id: "98406da5a6e7fa5ff726f04de6c7bd516e915e9548f1cfaafdc570c67a9efdb3"
I0915 20:38:43.953175 245575 cri.go:76] found id: "a7d0a1d02daf89a0426def3f5431ef7b3d2901efc93472317061d8b3bdefb049"
I0915 20:38:43.953182 245575 cri.go:76] found id: "c2c4d5c63cd0645ef4d73235208f1936b90f8d174ae4bcf1af216cb1a623700b"
I0915 20:38:43.953196 245575 cri.go:76] found id: ""
I0915 20:38:43.953206 245575 cri.go:220] Stopping containers: [d0a53108ac9c335a7f9f23fe4fc0cacebb805ce2e8e234e45d65384428b0269c 85f3e86ba3483821d43d8a9db6620b5b51676ceafad4f1b3d2647491f21f21b4 92765dba236c71d174f12db9fb303c5e8e28954e4784730673a1ba475c61e593 c90a36c8826cb3a93ce5571908717dbce4d6e5ad1fa18ad1b8831160513af998 c5bb0978f46b75e4e46f992fbbd0b75e87860eae012833ee5eee71961cf73ec0 1998aa3f2b7fb4f017287f15b6e0cc74fbe632c72ab8fe96725dc0e913535344 98406da5a6e7fa5ff726f04de6c7bd516e915e9548f1cfaafdc570c67a9efdb3 a7d0a1d02daf89a0426def3f5431ef7b3d2901efc93472317061d8b3bdefb049 c2c4d5c63cd0645ef4d73235208f1936b90f8d174ae4bcf1af216cb1a623700b]
I0915 20:38:43.953261 245575 ssh_runner.go:152] Run: which crictl
I0915 20:38:43.969882 245575 ssh_runner.go:152] Run: sudo /usr/bin/crictl stop d0a53108ac9c335a7f9f23fe4fc0cacebb805ce2e8e234e45d65384428b0269c 85f3e86ba3483821d43d8a9db6620b5b51676ceafad4f1b3d2647491f21f21b4 92765dba236c71d174f12db9fb303c5e8e28954e4784730673a1ba475c61e593 c90a36c8826cb3a93ce5571908717dbce4d6e5ad1fa18ad1b8831160513af998 c5bb0978f46b75e4e46f992fbbd0b75e87860eae012833ee5eee71961cf73ec0 1998aa3f2b7fb4f017287f15b6e0cc74fbe632c72ab8fe96725dc0e913535344 98406da5a6e7fa5ff726f04de6c7bd516e915e9548f1cfaafdc570c67a9efdb3 a7d0a1d02daf89a0426def3f5431ef7b3d2901efc93472317061d8b3bdefb049 c2c4d5c63cd0645ef4d73235208f1936b90f8d174ae4bcf1af216cb1a623700b
I0915 20:38:44.211280 245575 ssh_runner.go:152] Run: sudo systemctl stop kubelet
I0915 20:38:44.308313 245575 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0915 20:38:44.381739 245575 kubeadm.go:154] found existing configuration files:
-rw------- 1 root root 5643 Sep 15 20:36 /etc/kubernetes/admin.conf
-rw------- 1 root root 5658 Sep 15 20:36 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2047 Sep 15 20:37 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5606 Sep 15 20:36 /etc/kubernetes/scheduler.conf
I0915 20:38:44.381821 245575 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0915 20:38:44.414248 245575 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0915 20:38:44.442437 245575 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0915 20:38:44.466417 245575 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0915 20:38:44.466486 245575 ssh_runner.go:152] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0915 20:38:44.485445 245575 ssh_runner.go:152] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0915 20:38:44.526296 245575 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0915 20:38:44.526367 245575 ssh_runner.go:152] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0915 20:38:44.553110 245575 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0915 20:38:44.574383 245575 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0915 20:38:44.574415 245575 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0915 20:38:44.811707 245575 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0915 20:38:45.788065 245575 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0915 20:38:46.114216 245575 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0915 20:38:46.257631 245575 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
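Note: because the existing kubeconfig files no longer reference https://control-plane.minikube.internal:8443, the cluster is reconfigured by re-running the individual kubeadm init phases against the regenerated /var/tmp/minikube/kubeadm.yaml, as shown above. A simplified local sketch of that sequence (the real commands run over SSH with sudo and a pinned PATH to the cached binaries):

// Sketch of the phased restart above, run locally for illustration.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}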
I0915 20:38:46.370661 245575 api_server.go:50] waiting for apiserver process to appear ...
I0915 20:38:46.370738 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:38:46.886002 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:38:47.386195 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:38:47.886103 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:38:48.385496 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:38:48.886096 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:38:49.385850 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:38:49.885515 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:38:50.385922 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:38:50.885441 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:38:51.385454 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:38:51.885466 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:38:52.386320 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:38:52.885712 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:38:53.385495 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:38:53.886024 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:38:53.906012 245575 api_server.go:70] duration metric: took 7.53534608s to wait for apiserver process to appear ...
I0915 20:38:53.906042 245575 api_server.go:86] waiting for apiserver healthz status ...
I0915 20:38:53.906055 245575 api_server.go:239] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
I0915 20:38:58.906394 245575 api_server.go:255] stopped: https://192.168.39.238:8443/healthz: Get "https://192.168.39.238:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0915 20:38:59.406698 245575 api_server.go:239] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
I0915 20:38:59.503453 245575 api_server.go:265] https://192.168.39.238:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0915 20:38:59.503571 245575 api_server.go:101] status: https://192.168.39.238:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0915 20:38:59.906599 245575 api_server.go:239] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
I0915 20:38:59.915525 245575 api_server.go:265] https://192.168.39.238:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0915 20:38:59.915560 245575 api_server.go:101] status: https://192.168.39.238:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0915 20:39:00.406863 245575 api_server.go:239] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
I0915 20:39:00.417113 245575 api_server.go:265] https://192.168.39.238:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0915 20:39:00.417144 245575 api_server.go:101] status: https://192.168.39.238:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0915 20:39:00.906742 245575 api_server.go:239] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
I0915 20:39:00.935483 245575 api_server.go:265] https://192.168.39.238:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0915 20:39:00.935520 245575 api_server.go:101] status: https://192.168.39.238:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0915 20:39:01.406812 245575 api_server.go:239] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
I0915 20:39:01.417924 245575 api_server.go:265] https://192.168.39.238:8443/healthz returned 200:
ok
I0915 20:39:01.430680 245575 api_server.go:139] control plane version: v1.22.1
I0915 20:39:01.430707 245575 api_server.go:129] duration metric: took 7.524658169s to wait for apiserver health ...
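Note: health is established by polling the apiserver's /healthz endpoint until it returns 200; the 500 responses above enumerate which post-start hooks are still pending. A rough equivalent of that polling loop (TLS verification is skipped here purely for brevity; the real client trusts the cluster CA and presents the profile's client certificate):

// Rough sketch of the healthz polling shown above, against the same endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.238:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			// A 500 body lists each post-start hook and whether it has finished.
			fmt.Printf("healthz returned %d, retrying...\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}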
I0915 20:39:01.430722 245575 cni.go:93] Creating CNI manager for ""
I0915 20:39:01.430730 245575 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
I0915 20:39:01.432806 245575 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0915 20:39:01.432896 245575 ssh_runner.go:152] Run: sudo mkdir -p /etc/cni/net.d
I0915 20:39:01.449473 245575 ssh_runner.go:319] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
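Note: the 457-byte conflist pushed to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. For orientation only, a typical bridge-plus-portmap chain looks roughly like the following; the actual file, its plugin options, and its subnet may differ:

{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}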
I0915 20:39:01.479646 245575 system_pods.go:43] waiting for kube-system pods to appear ...
I0915 20:39:01.503106 245575 system_pods.go:59] 6 kube-system pods found
I0915 20:39:01.503143 245575 system_pods.go:61] "coredns-78fcd69978-stp22" [3ad260f2-128d-46b8-9f0c-33929b1c2e24] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0915 20:39:01.503152 245575 system_pods.go:61] "etcd-pause-20210915203607-209669" [ce205553-77a4-4249-9b12-a0dacdf44990] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0915 20:39:01.503157 245575 system_pods.go:61] "kube-apiserver-pause-20210915203607-209669" [908c5f60-402f-4dd2-93b8-ba8a66e765e2] Running
I0915 20:39:01.503163 245575 system_pods.go:61] "kube-controller-manager-pause-20210915203607-209669" [31f973e9-2c44-4ac0-8559-ead277ece9ef] Running
I0915 20:39:01.503171 245575 system_pods.go:61] "kube-proxy-knsd4" [cd8c788e-8ca0-46be-be18-92e8ff747405] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0915 20:39:01.503178 245575 system_pods.go:61] "kube-scheduler-pause-20210915203607-209669" [e607232d-37b4-470c-a390-e1e9139b5f13] Running
I0915 20:39:01.503185 245575 system_pods.go:74] duration metric: took 23.514464ms to wait for pod list to return data ...
I0915 20:39:01.503198 245575 node_conditions.go:102] verifying NodePressure condition ...
I0915 20:39:01.526084 245575 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0915 20:39:01.526119 245575 node_conditions.go:123] node cpu capacity is 2
I0915 20:39:01.526133 245575 node_conditions.go:105] duration metric: took 22.930048ms to run NodePressure ...
I0915 20:39:01.526152 245575 ssh_runner.go:152] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.1:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0915 20:39:02.226530 245575 kubeadm.go:731] waiting for restarted kubelet to initialise ...
I0915 20:39:02.233460 245575 kubeadm.go:746] kubelet initialised
I0915 20:39:02.233483 245575 kubeadm.go:747] duration metric: took 6.927221ms waiting for restarted kubelet to initialise ...
I0915 20:39:02.233493 245575 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0915 20:39:02.239984 245575 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-stp22" in "kube-system" namespace to be "Ready" ...
I0915 20:39:04.269363 245575 pod_ready.go:102] pod "coredns-78fcd69978-stp22" in "kube-system" namespace has status "Ready":"False"
I0915 20:39:06.274676 245575 pod_ready.go:102] pod "coredns-78fcd69978-stp22" in "kube-system" namespace has status "Ready":"False"
I0915 20:39:07.771058 245575 pod_ready.go:92] pod "coredns-78fcd69978-stp22" in "kube-system" namespace has status "Ready":"True"
I0915 20:39:07.771089 245575 pod_ready.go:81] duration metric: took 5.531075625s waiting for pod "coredns-78fcd69978-stp22" in "kube-system" namespace to be "Ready" ...
I0915 20:39:07.771101 245575 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-20210915203607-209669" in "kube-system" namespace to be "Ready" ...
I0915 20:39:09.792561 245575 pod_ready.go:102] pod "etcd-pause-20210915203607-209669" in "kube-system" namespace has status "Ready":"False"
I0915 20:39:11.794712 245575 pod_ready.go:102] pod "etcd-pause-20210915203607-209669" in "kube-system" namespace has status "Ready":"False"
I0915 20:39:12.800639 245575 pod_ready.go:92] pod "etcd-pause-20210915203607-209669" in "kube-system" namespace has status "Ready":"True"
I0915 20:39:12.800667 245575 pod_ready.go:81] duration metric: took 5.029557398s waiting for pod "etcd-pause-20210915203607-209669" in "kube-system" namespace to be "Ready" ...
I0915 20:39:12.800681 245575 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-20210915203607-209669" in "kube-system" namespace to be "Ready" ...
I0915 20:39:12.817205 245575 pod_ready.go:92] pod "kube-apiserver-pause-20210915203607-209669" in "kube-system" namespace has status "Ready":"True"
I0915 20:39:12.817230 245575 pod_ready.go:81] duration metric: took 16.5408ms waiting for pod "kube-apiserver-pause-20210915203607-209669" in "kube-system" namespace to be "Ready" ...
I0915 20:39:12.817244 245575 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-20210915203607-209669" in "kube-system" namespace to be "Ready" ...
I0915 20:39:12.825054 245575 pod_ready.go:92] pod "kube-controller-manager-pause-20210915203607-209669" in "kube-system" namespace has status "Ready":"True"
I0915 20:39:12.825074 245575 pod_ready.go:81] duration metric: took 7.820195ms waiting for pod "kube-controller-manager-pause-20210915203607-209669" in "kube-system" namespace to be "Ready" ...
I0915 20:39:12.825087 245575 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-knsd4" in "kube-system" namespace to be "Ready" ...
I0915 20:39:12.833898 245575 pod_ready.go:92] pod "kube-proxy-knsd4" in "kube-system" namespace has status "Ready":"True"
I0915 20:39:12.833918 245575 pod_ready.go:81] duration metric: took 8.822729ms waiting for pod "kube-proxy-knsd4" in "kube-system" namespace to be "Ready" ...
I0915 20:39:12.833930 245575 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-20210915203607-209669" in "kube-system" namespace to be "Ready" ...
I0915 20:39:12.842840 245575 pod_ready.go:92] pod "kube-scheduler-pause-20210915203607-209669" in "kube-system" namespace has status "Ready":"True"
I0915 20:39:12.842859 245575 pod_ready.go:81] duration metric: took 8.920617ms waiting for pod "kube-scheduler-pause-20210915203607-209669" in "kube-system" namespace to be "Ready" ...
I0915 20:39:12.842867 245575 pod_ready.go:38] duration metric: took 10.60936207s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
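Note: each pod_ready wait above polls the pod object until its Ready condition reports True. A hedged client-go sketch of that check (the kubeconfig path is a placeholder, not a value from this run; the pod name is taken from the log):

// Sketch of a per-pod Ready wait using client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-78fcd69978-stp22", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}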
I0915 20:39:12.842884 245575 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0915 20:39:12.861981 245575 ops.go:34] apiserver oom_adj: -16
I0915 20:39:12.862008 245575 kubeadm.go:604] restartCluster took 32.255304538s
I0915 20:39:12.862020 245575 kubeadm.go:392] StartCluster complete in 32.422276287s
I0915 20:39:12.862042 245575 settings.go:142] acquiring lock: {Name:mkfc37509693550eccd0e71c394b45ae19284b33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 20:39:12.862165 245575 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/kubeconfig
I0915 20:39:12.864944 245575 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/kubeconfig: {Name:mk9278cd771b7532ccd274781f4709675fdbf421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 20:39:12.866889 245575 kapi.go:59] client config for pause-20210915203607-209669: &rest.Config{Host:"https://192.168.39.238:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/profiles/pause-20210915203607-209669/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/profiles/pause-20210915203607-209669/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1581620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0915 20:39:12.874399 245575 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210915203607-209669" rescaled to 1
I0915 20:39:12.874461 245575 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}
I0915 20:39:12.876389 245575 out.go:177] * Verifying Kubernetes components...
I0915 20:39:12.876459 245575 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
I0915 20:39:12.874498 245575 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0915 20:39:12.874555 245575 addons.go:404] enableAddons start: toEnable=map[], additional=[]
I0915 20:39:12.874672 245575 config.go:177] Loaded profile config "pause-20210915203607-209669": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.1
I0915 20:39:12.876620 245575 addons.go:65] Setting default-storageclass=true in profile "pause-20210915203607-209669"
I0915 20:39:12.876637 245575 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210915203607-209669"
I0915 20:39:12.876604 245575 addons.go:65] Setting storage-provisioner=true in profile "pause-20210915203607-209669"
I0915 20:39:12.876721 245575 addons.go:153] Setting addon storage-provisioner=true in "pause-20210915203607-209669"
W0915 20:39:12.876736 245575 addons.go:165] addon storage-provisioner should already be in state true
I0915 20:39:12.876764 245575 host.go:66] Checking if "pause-20210915203607-209669" exists ...
I0915 20:39:12.877160 245575 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0915 20:39:12.877195 245575 main.go:130] libmachine: Launching plugin server for driver kvm2
I0915 20:39:12.877196 245575 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0915 20:39:12.877228 245575 main.go:130] libmachine: Launching plugin server for driver kvm2
I0915 20:39:12.892262 245575 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41625
I0915 20:39:12.892262 245575 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45495
I0915 20:39:12.892938 245575 main.go:130] libmachine: () Calling .GetVersion
I0915 20:39:12.893099 245575 main.go:130] libmachine: () Calling .GetVersion
I0915 20:39:12.893575 245575 main.go:130] libmachine: Using API Version 1
I0915 20:39:12.893594 245575 main.go:130] libmachine: () Calling .SetConfigRaw
I0915 20:39:12.893606 245575 main.go:130] libmachine: Using API Version 1
I0915 20:39:12.893624 245575 main.go:130] libmachine: () Calling .SetConfigRaw
I0915 20:39:12.894068 245575 main.go:130] libmachine: () Calling .GetMachineName
I0915 20:39:12.894074 245575 main.go:130] libmachine: () Calling .GetMachineName
I0915 20:39:12.894257 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetState
I0915 20:39:12.894671 245575 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0915 20:39:12.894707 245575 main.go:130] libmachine: Launching plugin server for driver kvm2
I0915 20:39:12.899945 245575 kapi.go:59] client config for pause-20210915203607-209669: &rest.Config{Host:"https://192.168.39.238:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/profiles/pause-20210915203607-209669/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/profiles/pause-20210915203607-209669/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1581620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0915 20:39:12.908967 245575 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41139
I0915 20:39:12.909550 245575 main.go:130] libmachine: () Calling .GetVersion
I0915 20:39:12.910078 245575 main.go:130] libmachine: Using API Version 1
I0915 20:39:12.910102 245575 main.go:130] libmachine: () Calling .SetConfigRaw
I0915 20:39:12.910461 245575 addons.go:153] Setting addon default-storageclass=true in "pause-20210915203607-209669"
W0915 20:39:12.910482 245575 addons.go:165] addon default-storageclass should already be in state true
I0915 20:39:12.910500 245575 main.go:130] libmachine: () Calling .GetMachineName
I0915 20:39:12.910512 245575 host.go:66] Checking if "pause-20210915203607-209669" exists ...
I0915 20:39:12.910776 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetState
I0915 20:39:12.911207 245575 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0915 20:39:12.911250 245575 main.go:130] libmachine: Launching plugin server for driver kvm2
I0915 20:39:12.915004 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .DriverName
I0915 20:39:12.917220 245575 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0915 20:39:12.917382 245575 addons.go:337] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0915 20:39:12.917395 245575 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0915 20:39:12.917416 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHHostname
I0915 20:39:12.921439 245575 node_ready.go:35] waiting up to 6m0s for node "pause-20210915203607-209669" to be "Ready" ...
I0915 20:39:12.924816 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | domain pause-20210915203607-209669 has defined MAC address 52:54:00:69:a4:51 in network mk-pause-20210915203607-209669
I0915 20:39:12.925949 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:a4:51", ip: ""} in network mk-pause-20210915203607-209669: {Iface:virbr1 ExpiryTime:2021-09-15 21:36:22 +0000 UTC Type:0 Mac:52:54:00:69:a4:51 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:pause-20210915203607-209669 Clientid:01:52:54:00:69:a4:51}
I0915 20:39:12.925992 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | domain pause-20210915203607-209669 has defined IP address 192.168.39.238 and MAC address 52:54:00:69:a4:51 in network mk-pause-20210915203607-209669
I0915 20:39:12.926459 245575 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37001
I0915 20:39:12.926940 245575 main.go:130] libmachine: () Calling .GetVersion
I0915 20:39:12.927517 245575 main.go:130] libmachine: Using API Version 1
I0915 20:39:12.927535 245575 main.go:130] libmachine: () Calling .SetConfigRaw
I0915 20:39:12.927957 245575 main.go:130] libmachine: () Calling .GetMachineName
I0915 20:39:12.928592 245575 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0915 20:39:12.928641 245575 main.go:130] libmachine: Launching plugin server for driver kvm2
I0915 20:39:12.940843 245575 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34415
I0915 20:39:12.941308 245575 main.go:130] libmachine: () Calling .GetVersion
I0915 20:39:12.941810 245575 main.go:130] libmachine: Using API Version 1
I0915 20:39:12.941840 245575 main.go:130] libmachine: () Calling .SetConfigRaw
I0915 20:39:12.942162 245575 main.go:130] libmachine: () Calling .GetMachineName
I0915 20:39:12.942382 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetState
I0915 20:39:12.986559 245575 node_ready.go:49] node "pause-20210915203607-209669" has status "Ready":"True"
I0915 20:39:12.986614 245575 node_ready.go:38] duration metric: took 65.147288ms waiting for node "pause-20210915203607-209669" to be "Ready" ...
I0915 20:39:12.986627 245575 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0915 20:39:13.038672 245575 start.go:709] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0915 20:39:13.130251 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHPort
I0915 20:39:13.130534 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHKeyPath
I0915 20:39:13.130744 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHUsername
I0915 20:39:13.130939 245575 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/machines/pause-20210915203607-209669/id_rsa Username:docker}
I0915 20:39:13.133269 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .DriverName
I0915 20:39:13.133633 245575 addons.go:337] installing /etc/kubernetes/addons/storageclass.yaml
I0915 20:39:13.133653 245575 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0915 20:39:13.133672 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHHostname
I0915 20:39:13.139707 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | domain pause-20210915203607-209669 has defined MAC address 52:54:00:69:a4:51 in network mk-pause-20210915203607-209669
I0915 20:39:13.140169 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:a4:51", ip: ""} in network mk-pause-20210915203607-209669: {Iface:virbr1 ExpiryTime:2021-09-15 21:36:22 +0000 UTC Type:0 Mac:52:54:00:69:a4:51 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:pause-20210915203607-209669 Clientid:01:52:54:00:69:a4:51}
I0915 20:39:13.140200 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | domain pause-20210915203607-209669 has defined IP address 192.168.39.238 and MAC address 52:54:00:69:a4:51 in network mk-pause-20210915203607-209669
I0915 20:39:13.140387 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHPort
I0915 20:39:13.140560 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHKeyPath
I0915 20:39:13.140721 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .GetSSHUsername
I0915 20:39:13.140903 245575 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/machines/pause-20210915203607-209669/id_rsa Username:docker}
I0915 20:39:13.190280 245575 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-stp22" in "kube-system" namespace to be "Ready" ...
I0915 20:39:13.267877 245575 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0915 20:39:13.270793 245575 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0915 20:39:13.588666 245575 pod_ready.go:92] pod "coredns-78fcd69978-stp22" in "kube-system" namespace has status "Ready":"True"
I0915 20:39:13.588692 245575 pod_ready.go:81] duration metric: took 398.376975ms waiting for pod "coredns-78fcd69978-stp22" in "kube-system" namespace to be "Ready" ...
I0915 20:39:13.588705 245575 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210915203607-209669" in "kube-system" namespace to be "Ready" ...
I0915 20:39:13.872217 245575 main.go:130] libmachine: Making call to close driver server
I0915 20:39:13.872248 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .Close
I0915 20:39:13.872306 245575 main.go:130] libmachine: Making call to close driver server
I0915 20:39:13.872327 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .Close
I0915 20:39:13.872524 245575 main.go:130] libmachine: Successfully made call to close driver server
I0915 20:39:13.872537 245575 main.go:130] libmachine: Making call to close connection to plugin binary
I0915 20:39:13.872554 245575 main.go:130] libmachine: Making call to close driver server
I0915 20:39:13.872563 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .Close
I0915 20:39:13.872704 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | Closing plugin on server side
I0915 20:39:13.872721 245575 main.go:130] libmachine: Successfully made call to close driver server
I0915 20:39:13.872735 245575 main.go:130] libmachine: Making call to close connection to plugin binary
I0915 20:39:13.872745 245575 main.go:130] libmachine: Making call to close driver server
I0915 20:39:13.872756 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .Close
I0915 20:39:13.874217 245575 main.go:130] libmachine: Successfully made call to close driver server
I0915 20:39:13.874223 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | Closing plugin on server side
I0915 20:39:13.874236 245575 main.go:130] libmachine: Making call to close connection to plugin binary
I0915 20:39:13.874215 245575 main.go:130] libmachine: (pause-20210915203607-209669) DBG | Closing plugin on server side
I0915 20:39:13.874251 245575 main.go:130] libmachine: Successfully made call to close driver server
I0915 20:39:13.874260 245575 main.go:130] libmachine: Making call to close connection to plugin binary
I0915 20:39:13.874272 245575 main.go:130] libmachine: Making call to close driver server
I0915 20:39:13.874281 245575 main.go:130] libmachine: (pause-20210915203607-209669) Calling .Close
I0915 20:39:13.874624 245575 main.go:130] libmachine: Successfully made call to close driver server
I0915 20:39:13.874648 245575 main.go:130] libmachine: Making call to close connection to plugin binary
I0915 20:39:13.876656 245575 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0915 20:39:13.876680 245575 addons.go:406] enableAddons completed in 1.002133551s
I0915 20:39:13.986739 245575 pod_ready.go:92] pod "etcd-pause-20210915203607-209669" in "kube-system" namespace has status "Ready":"True"
I0915 20:39:13.986756 245575 pod_ready.go:81] duration metric: took 398.045034ms waiting for pod "etcd-pause-20210915203607-209669" in "kube-system" namespace to be "Ready" ...
I0915 20:39:13.986765 245575 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210915203607-209669" in "kube-system" namespace to be "Ready" ...
I0915 20:39:14.388481 245575 pod_ready.go:92] pod "kube-apiserver-pause-20210915203607-209669" in "kube-system" namespace has status "Ready":"True"
I0915 20:39:14.388508 245575 pod_ready.go:81] duration metric: took 401.735512ms waiting for pod "kube-apiserver-pause-20210915203607-209669" in "kube-system" namespace to be "Ready" ...
I0915 20:39:14.388526 245575 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210915203607-209669" in "kube-system" namespace to be "Ready" ...
I0915 20:39:14.789659 245575 pod_ready.go:92] pod "kube-controller-manager-pause-20210915203607-209669" in "kube-system" namespace has status "Ready":"True"
I0915 20:39:14.789685 245575 pod_ready.go:81] duration metric: took 401.149421ms waiting for pod "kube-controller-manager-pause-20210915203607-209669" in "kube-system" namespace to be "Ready" ...
I0915 20:39:14.789701 245575 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-knsd4" in "kube-system" namespace to be "Ready" ...
I0915 20:39:15.187668 245575 pod_ready.go:92] pod "kube-proxy-knsd4" in "kube-system" namespace has status "Ready":"True"
I0915 20:39:15.187690 245575 pod_ready.go:81] duration metric: took 397.98111ms waiting for pod "kube-proxy-knsd4" in "kube-system" namespace to be "Ready" ...
I0915 20:39:15.187704 245575 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210915203607-209669" in "kube-system" namespace to be "Ready" ...
I0915 20:39:15.589496 245575 pod_ready.go:92] pod "kube-scheduler-pause-20210915203607-209669" in "kube-system" namespace has status "Ready":"True"
I0915 20:39:15.589522 245575 pod_ready.go:81] duration metric: took 401.810283ms waiting for pod "kube-scheduler-pause-20210915203607-209669" in "kube-system" namespace to be "Ready" ...
I0915 20:39:15.589533 245575 pod_ready.go:38] duration metric: took 2.60289376s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0915 20:39:15.589552 245575 api_server.go:50] waiting for apiserver process to appear ...
I0915 20:39:15.589606 245575 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:39:15.607433 245575 api_server.go:70] duration metric: took 2.732939273s to wait for apiserver process to appear ...
I0915 20:39:15.607465 245575 api_server.go:86] waiting for apiserver healthz status ...
I0915 20:39:15.607479 245575 api_server.go:239] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
I0915 20:39:15.617761 245575 api_server.go:265] https://192.168.39.238:8443/healthz returned 200:
ok
I0915 20:39:15.620763 245575 api_server.go:139] control plane version: v1.22.1
I0915 20:39:15.620789 245575 api_server.go:129] duration metric: took 13.316097ms to wait for apiserver health ...
I0915 20:39:15.620807 245575 system_pods.go:43] waiting for kube-system pods to appear ...
I0915 20:39:15.792076 245575 system_pods.go:59] 7 kube-system pods found
I0915 20:39:15.792125 245575 system_pods.go:61] "coredns-78fcd69978-stp22" [3ad260f2-128d-46b8-9f0c-33929b1c2e24] Running
I0915 20:39:15.792133 245575 system_pods.go:61] "etcd-pause-20210915203607-209669" [ce205553-77a4-4249-9b12-a0dacdf44990] Running
I0915 20:39:15.792140 245575 system_pods.go:61] "kube-apiserver-pause-20210915203607-209669" [908c5f60-402f-4dd2-93b8-ba8a66e765e2] Running
I0915 20:39:15.792148 245575 system_pods.go:61] "kube-controller-manager-pause-20210915203607-209669" [31f973e9-2c44-4ac0-8559-ead277ece9ef] Running
I0915 20:39:15.792154 245575 system_pods.go:61] "kube-proxy-knsd4" [cd8c788e-8ca0-46be-be18-92e8ff747405] Running
I0915 20:39:15.792160 245575 system_pods.go:61] "kube-scheduler-pause-20210915203607-209669" [e607232d-37b4-470c-a390-e1e9139b5f13] Running
I0915 20:39:15.792170 245575 system_pods.go:61] "storage-provisioner" [42831a1f-205d-4116-b49c-dbc188c15aa2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0915 20:39:15.792182 245575 system_pods.go:74] duration metric: took 171.36896ms to wait for pod list to return data ...
I0915 20:39:15.792196 245575 default_sa.go:34] waiting for default service account to be created ...
I0915 20:39:15.990061 245575 default_sa.go:45] found service account: "default"
I0915 20:39:15.990089 245575 default_sa.go:55] duration metric: took 197.884088ms for default service account to be created ...
I0915 20:39:15.990103 245575 system_pods.go:116] waiting for k8s-apps to be running ...
I0915 20:39:16.191388 245575 system_pods.go:86] 7 kube-system pods found
I0915 20:39:16.191418 245575 system_pods.go:89] "coredns-78fcd69978-stp22" [3ad260f2-128d-46b8-9f0c-33929b1c2e24] Running
I0915 20:39:16.191425 245575 system_pods.go:89] "etcd-pause-20210915203607-209669" [ce205553-77a4-4249-9b12-a0dacdf44990] Running
I0915 20:39:16.191430 245575 system_pods.go:89] "kube-apiserver-pause-20210915203607-209669" [908c5f60-402f-4dd2-93b8-ba8a66e765e2] Running
I0915 20:39:16.191436 245575 system_pods.go:89] "kube-controller-manager-pause-20210915203607-209669" [31f973e9-2c44-4ac0-8559-ead277ece9ef] Running
I0915 20:39:16.191441 245575 system_pods.go:89] "kube-proxy-knsd4" [cd8c788e-8ca0-46be-be18-92e8ff747405] Running
I0915 20:39:16.191447 245575 system_pods.go:89] "kube-scheduler-pause-20210915203607-209669" [e607232d-37b4-470c-a390-e1e9139b5f13] Running
I0915 20:39:16.191452 245575 system_pods.go:89] "storage-provisioner" [42831a1f-205d-4116-b49c-dbc188c15aa2] Running
I0915 20:39:16.191463 245575 system_pods.go:126] duration metric: took 201.35161ms to wait for k8s-apps to be running ...
I0915 20:39:16.191487 245575 system_svc.go:44] waiting for kubelet service to be running ....
I0915 20:39:16.191550 245575 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
I0915 20:39:16.211671 245575 system_svc.go:56] duration metric: took 20.17233ms WaitForService to wait for kubelet.
I0915 20:39:16.211704 245575 kubeadm.go:547] duration metric: took 3.337218306s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0915 20:39:16.211753 245575 node_conditions.go:102] verifying NodePressure condition ...
I0915 20:39:16.386024 245575 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0915 20:39:16.386054 245575 node_conditions.go:123] node cpu capacity is 2
I0915 20:39:16.386068 245575 node_conditions.go:105] duration metric: took 174.305222ms to run NodePressure ...
I0915 20:39:16.386082 245575 start.go:231] waiting for startup goroutines ...
I0915 20:39:16.455483 245575 start.go:462] kubectl: 1.20.5, cluster: 1.22.1 (minor skew: 2)
I0915 20:39:16.457515 245575 out.go:177]
W0915 20:39:16.457720 245575 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilites with Kubernetes 1.22.1.
! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilites with Kubernetes 1.22.1.
I0915 20:39:16.459296 245575 out.go:177] - Want kubectl v1.22.1? Try 'minikube kubectl -- get pods -A'
I0915 20:39:16.460903 245575 out.go:177] * Done! kubectl is now configured to use "pause-20210915203607-209669" cluster and "default" namespace by default
** /stderr **
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210915203607-209669 -n pause-20210915203607-209669
helpers_test.go:245: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p pause-20210915203607-209669 logs -n 25
=== CONT TestPause/serial/SecondStartNoReconfiguration
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p pause-20210915203607-209669 logs -n 25: (2.063279894s)
helpers_test.go:253: TestPause/serial/SecondStartNoReconfiguration logs:
-- stdout --
*
* ==> Audit <==
* |---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
| stop | -p | multinode-20210915200256-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:07:58 UTC | Wed, 15 Sep 2021 20:11:04 UTC |
| | multinode-20210915200256-209669 | | | | | |
| start | -p | multinode-20210915200256-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:11:04 UTC | Wed, 15 Sep 2021 20:16:39 UTC |
| | multinode-20210915200256-209669 | | | | | |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| -p | multinode-20210915200256-209669 | multinode-20210915200256-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:16:39 UTC | Wed, 15 Sep 2021 20:16:40 UTC |
| | node delete m03 | | | | | |
| -p | multinode-20210915200256-209669 | multinode-20210915200256-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:16:41 UTC | Wed, 15 Sep 2021 20:19:45 UTC |
| | stop | | | | | |
| start | -p | multinode-20210915200256-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:19:46 UTC | Wed, 15 Sep 2021 20:23:09 UTC |
| | multinode-20210915200256-209669 | | | | | |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p | multinode-20210915200256-209669-m03 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:23:10 UTC | Wed, 15 Sep 2021 20:24:16 UTC |
| | multinode-20210915200256-209669-m03 | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | multinode-20210915200256-209669-m03 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:24:16 UTC | Wed, 15 Sep 2021 20:24:18 UTC |
| | multinode-20210915200256-209669-m03 | | | | | |
| delete | -p | multinode-20210915200256-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:24:18 UTC | Wed, 15 Sep 2021 20:24:20 UTC |
| | multinode-20210915200256-209669 | | | | | |
| start | -p | test-preload-20210915203106-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:31:06 UTC | Wed, 15 Sep 2021 20:33:20 UTC |
| | test-preload-20210915203106-209669 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.17.0 | | | | | |
| ssh | -p | test-preload-20210915203106-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:33:20 UTC | Wed, 15 Sep 2021 20:33:24 UTC |
| | test-preload-20210915203106-209669 | | | | | |
| | -- sudo crictl pull busybox | | | | | |
| start | -p | test-preload-20210915203106-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:33:24 UTC | Wed, 15 Sep 2021 20:34:20 UTC |
| | test-preload-20210915203106-209669 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | -v=1 --wait=true --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.17.3 | | | | | |
| ssh | -p | test-preload-20210915203106-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:34:20 UTC | Wed, 15 Sep 2021 20:34:20 UTC |
| | test-preload-20210915203106-209669 | | | | | |
| | -- sudo crictl image ls | | | | | |
| delete | -p | test-preload-20210915203106-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:34:20 UTC | Wed, 15 Sep 2021 20:34:21 UTC |
| | test-preload-20210915203106-209669 | | | | | |
| start | -p | scheduled-stop-20210915203421-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:34:21 UTC | Wed, 15 Sep 2021 20:35:28 UTC |
| | scheduled-stop-20210915203421-209669 | | | | | |
| | --memory=2048 --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| stop | -p | scheduled-stop-20210915203421-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:35:29 UTC | Wed, 15 Sep 2021 20:35:29 UTC |
| | scheduled-stop-20210915203421-209669 | | | | | |
| | --cancel-scheduled | | | | | |
| stop | -p | scheduled-stop-20210915203421-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:35:41 UTC | Wed, 15 Sep 2021 20:35:48 UTC |
| | scheduled-stop-20210915203421-209669 | | | | | |
| | --schedule 5s | | | | | |
| delete | -p | scheduled-stop-20210915203421-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:36:06 UTC | Wed, 15 Sep 2021 20:36:07 UTC |
| | scheduled-stop-20210915203421-209669 | | | | | |
| start | -p | kubernetes-upgrade-20210915203607-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:36:07 UTC | Wed, 15 Sep 2021 20:37:52 UTC |
| | kubernetes-upgrade-20210915203607-209669 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.14.0 | | | | | |
| | --alsologtostderr -v=1 --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| stop | -p | kubernetes-upgrade-20210915203607-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:37:52 UTC | Wed, 15 Sep 2021 20:37:55 UTC |
| | kubernetes-upgrade-20210915203607-209669 | | | | | |
| start | -p pause-20210915203607-209669 | pause-20210915203607-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:36:07 UTC | Wed, 15 Sep 2021 20:38:07 UTC |
| | --memory=2048 | | | | | |
| | --install-addons=false | | | | | |
| | --wait=all --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p | offline-containerd-20210915203607-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:36:07 UTC | Wed, 15 Sep 2021 20:39:12 UTC |
| | offline-containerd-20210915203607-209669 | | | | | |
| | --alsologtostderr -v=1 --memory=2048 | | | | | |
| | --wait=true --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | offline-containerd-20210915203607-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:39:12 UTC | Wed, 15 Sep 2021 20:39:13 UTC |
| | offline-containerd-20210915203607-209669 | | | | | |
| delete | -p | kubenet-20210915203913-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:39:13 UTC | Wed, 15 Sep 2021 20:39:13 UTC |
| | kubenet-20210915203913-209669 | | | | | |
| delete | -p false-20210915203913-209669 | false-20210915203913-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:39:14 UTC | Wed, 15 Sep 2021 20:39:14 UTC |
| start | -p pause-20210915203607-209669 | pause-20210915203607-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:38:07 UTC | Wed, 15 Sep 2021 20:39:16 UTC |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
*
* ==> Last Start <==
* Log file created at: 2021/09/15 20:39:13
Running on machine: debian-jenkins-agent-8
Binary: Built with gc go1.17 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0915 20:39:13.939255 246034 out.go:298] Setting OutFile to fd 1 ...
I0915 20:39:13.939368 246034 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0915 20:39:13.939382 246034 out.go:311] Setting ErrFile to fd 2...
I0915 20:39:13.939386 246034 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0915 20:39:13.939496 246034 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/bin
I0915 20:39:13.939746 246034 out.go:305] Setting JSON to false
I0915 20:39:13.984915 246034 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":19316,"bootTime":1631719038,"procs":186,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
I0915 20:39:13.985096 246034 start.go:121] virtualization: kvm guest
I0915 20:39:13.987469 246034 out.go:177] * [false-20210915203913-209669] minikube v1.23.0 on Debian 9.13 (kvm/amd64)
I0915 20:39:13.988913 246034 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/kubeconfig
I0915 20:39:13.987620 246034 notify.go:169] Checking for updates...
I0915 20:39:13.990397 246034 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0915 20:39:13.991905 246034 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube
I0915 20:39:13.993401 246034 out.go:177] - MINIKUBE_LOCATION=12425
I0915 20:39:13.993959 246034 config.go:177] Loaded profile config "kubernetes-upgrade-20210915203607-209669": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.2-rc.0
I0915 20:39:13.994074 246034 config.go:177] Loaded profile config "pause-20210915203607-209669": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.1
I0915 20:39:13.994165 246034 config.go:177] Loaded profile config "stopped-upgrade-20210915203607-209669": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0915 20:39:13.994212 246034 driver.go:343] Setting default libvirt URI to qemu:///system
I0915 20:39:14.029976 246034 out.go:177] * Using the kvm2 driver based on user configuration
I0915 20:39:14.030007 246034 start.go:278] selected driver: kvm2
I0915 20:39:14.030013 246034 start.go:751] validating driver "kvm2" against <nil>
I0915 20:39:14.030034 246034 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0915 20:39:14.032368 246034 out.go:177]
W0915 20:39:14.032530 246034 out.go:242] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
I0915 20:39:10.262232 245389 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:39:10.762679 245389 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:39:11.262528 245389 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:39:11.762883 245389 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:39:12.262505 245389 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:39:12.762228 245389 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:39:13.262103 245389 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:39:13.762942 245389 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:39:14.262104 245389 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:39:14.761976 245389 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:39:12.139498 245528 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:39:12.639536 245528 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:39:13.139325 245528 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:39:13.638455 245528 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:39:14.138635 245528 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:39:14.639095 245528 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:39:15.138665 245528 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:39:15.639369 245528 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0915 20:39:15.652915 245528 api_server.go:70] duration metric: took 8.527427519s to wait for apiserver process to appear ...
I0915 20:39:15.652937 245528 api_server.go:86] waiting for apiserver healthz status ...
I0915 20:39:15.652946 245528 api_server.go:239] Checking apiserver healthz at https://192.168.61.101:8443/healthz ...
I0915 20:39:15.653609 245528 api_server.go:255] stopped: https://192.168.61.101:8443/healthz: Get "https://192.168.61.101:8443/healthz": dial tcp 192.168.61.101:8443: connect: connection refused
I0915 20:39:16.154776 245528 api_server.go:239] Checking apiserver healthz at https://192.168.61.101:8443/healthz ...
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
962ec633e8582 6e38f40d628db 2 seconds ago Running storage-provisioner 0 e320243000777
bb38e8ada4c4b 8d147537fb7d1 15 seconds ago Running coredns 1 70a1f8909af5f
b3d34c6906044 36c4ebbc9d979 17 seconds ago Running kube-proxy 2 a2f5a80436bd4
a28b014002fbe 0048118155842 25 seconds ago Running etcd 2 8784e2e213dd5
737d465cce63a 6e002eb89a881 25 seconds ago Running kube-controller-manager 2 cb4614124ceb5
54db2874191b5 f30469a2491a5 25 seconds ago Running kube-apiserver 2 c44810c9c64fb
7abb377ea5dc0 aca5ededae9c8 25 seconds ago Running kube-scheduler 2 afc461e14ebcd
70722936a0bd3 0048118155842 33 seconds ago Exited etcd 1 8784e2e213dd5
d1b803c24aa9a 6e002eb89a881 33 seconds ago Exited kube-controller-manager 1 cb4614124ceb5
d0a53108ac9c3 36c4ebbc9d979 34 seconds ago Exited kube-proxy 1 a2f5a80436bd4
85f3e86ba3483 f30469a2491a5 34 seconds ago Exited kube-apiserver 1 c44810c9c64fb
92765dba236c7 aca5ededae9c8 35 seconds ago Exited kube-scheduler 1 afc461e14ebcd
c90a36c8826cb 8d147537fb7d1 About a minute ago Exited coredns 0 006e8603867df
*
* ==> containerd <==
* -- Journal begins at Wed 2021-09-15 20:36:18 UTC, ends at Wed 2021-09-15 20:39:17 UTC. --
Sep 15 20:38:53 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:38:53.441386014Z" level=info msg="StartContainer for \"54db2874191b56ed57e53ad7df7c9d0aaa7745f86734b46c3406cfc34268c41c\" returns successfully"
Sep 15 20:38:53 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:38:53.828040772Z" level=info msg="StartContainer for \"a28b014002fbef2b2af4bfbeacb7e2322131f134f2269f5442818f0c28fa967c\" returns successfully"
Sep 15 20:38:59 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:38:59.414225522Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 15 20:39:00 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:00.030614384Z" level=info msg="StopPodSandbox for \"006e8603867dffe63e26a166dae172791c6ea3d0404a8b21f75ad7ff42f43eb9\""
Sep 15 20:39:00 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:00.030907631Z" level=info msg="Container to stop \"c90a36c8826cb3a93ce5571908717dbce4d6e5ad1fa18ad1b8831160513af998\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 15 20:39:00 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:00.092109097Z" level=info msg="TearDown network for sandbox \"006e8603867dffe63e26a166dae172791c6ea3d0404a8b21f75ad7ff42f43eb9\" successfully"
Sep 15 20:39:00 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:00.092326347Z" level=info msg="StopPodSandbox for \"006e8603867dffe63e26a166dae172791c6ea3d0404a8b21f75ad7ff42f43eb9\" returns successfully"
Sep 15 20:39:00 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:00.094099305Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-78fcd69978-stp22,Uid:3ad260f2-128d-46b8-9f0c-33929b1c2e24,Namespace:kube-system,Attempt:1,}"
Sep 15 20:39:00 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:00.334270717Z" level=info msg="CreateContainer within sandbox \"a2f5a80436bd4f2fa6397e200731cf807db5a71da8ca62c2c8de36e4b2acdbc9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:2,}"
Sep 15 20:39:00 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:00.536867132Z" level=info msg="CreateContainer within sandbox \"a2f5a80436bd4f2fa6397e200731cf807db5a71da8ca62c2c8de36e4b2acdbc9\" for &ContainerMetadata{Name:kube-proxy,Attempt:2,} returns container id \"b3d34c6906044e0bb2166190ddfa4f3ddd704379114a54aa1c3cc3ea0a72fe3e\""
Sep 15 20:39:00 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:00.537996578Z" level=info msg="StartContainer for \"b3d34c6906044e0bb2166190ddfa4f3ddd704379114a54aa1c3cc3ea0a72fe3e\""
Sep 15 20:39:00 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:00.586740004Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/70a1f8909af5f96588cd43d03815a63c3a8520eb16678f0322f13c2220d542b6 pid=4919
Sep 15 20:39:01 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:01.084090964Z" level=info msg="StartContainer for \"b3d34c6906044e0bb2166190ddfa4f3ddd704379114a54aa1c3cc3ea0a72fe3e\" returns successfully"
Sep 15 20:39:01 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:01.703268355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-78fcd69978-stp22,Uid:3ad260f2-128d-46b8-9f0c-33929b1c2e24,Namespace:kube-system,Attempt:1,} returns sandbox id \"70a1f8909af5f96588cd43d03815a63c3a8520eb16678f0322f13c2220d542b6\""
Sep 15 20:39:01 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:01.726782333Z" level=info msg="CreateContainer within sandbox \"70a1f8909af5f96588cd43d03815a63c3a8520eb16678f0322f13c2220d542b6\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
Sep 15 20:39:01 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:01.873187400Z" level=info msg="CreateContainer within sandbox \"70a1f8909af5f96588cd43d03815a63c3a8520eb16678f0322f13c2220d542b6\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"bb38e8ada4c4bc7e6ca55174a34039e53e45e5ecbb18e24cfb536ec6417d1983\""
Sep 15 20:39:01 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:01.876409632Z" level=info msg="StartContainer for \"bb38e8ada4c4bc7e6ca55174a34039e53e45e5ecbb18e24cfb536ec6417d1983\""
Sep 15 20:39:02 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:02.120561099Z" level=info msg="StartContainer for \"bb38e8ada4c4bc7e6ca55174a34039e53e45e5ecbb18e24cfb536ec6417d1983\" returns successfully"
Sep 15 20:39:14 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:14.195248887Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:42831a1f-205d-4116-b49c-dbc188c15aa2,Namespace:kube-system,Attempt:0,}"
Sep 15 20:39:14 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:14.251342761Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e320243000777a0a9f4d42a90ab32c4a77c76d42db64617eab427e00ea5da776 pid=5155
Sep 15 20:39:14 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:14.783726396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:42831a1f-205d-4116-b49c-dbc188c15aa2,Namespace:kube-system,Attempt:0,} returns sandbox id \"e320243000777a0a9f4d42a90ab32c4a77c76d42db64617eab427e00ea5da776\""
Sep 15 20:39:14 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:14.803034065Z" level=info msg="CreateContainer within sandbox \"e320243000777a0a9f4d42a90ab32c4a77c76d42db64617eab427e00ea5da776\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
Sep 15 20:39:14 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:14.877736451Z" level=info msg="CreateContainer within sandbox \"e320243000777a0a9f4d42a90ab32c4a77c76d42db64617eab427e00ea5da776\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"962ec633e8582d72ec61c1683b04913be4c850a3e5327e410d6eb9c97293b4d5\""
Sep 15 20:39:14 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:14.881240855Z" level=info msg="StartContainer for \"962ec633e8582d72ec61c1683b04913be4c850a3e5327e410d6eb9c97293b4d5\""
Sep 15 20:39:15 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:15.059422195Z" level=info msg="StartContainer for \"962ec633e8582d72ec61c1683b04913be4c850a3e5327e410d6eb9c97293b4d5\" returns successfully"
*
* ==> coredns [bb38e8ada4c4bc7e6ca55174a34039e53e45e5ecbb18e24cfb536ec6417d1983] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
CoreDNS-1.8.4
linux/amd64, go1.16.4, 053c4d5
*
* ==> coredns [c90a36c8826cb3a93ce5571908717dbce4d6e5ad1fa18ad1b8831160513af998] <==
* [INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.8.4
linux/amd64, go1.16.4, 053c4d5
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] Reloading
[INFO] plugin/health: Going into lameduck mode for 5s
[INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
[INFO] Reloading complete
*
* ==> describe nodes <==
* Name: pause-20210915203607-209669
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-20210915203607-209669
kubernetes.io/os=linux
minikube.k8s.io/commit=66748304c4ca78061b718f95ac626a53ac360876
minikube.k8s.io/name=pause-20210915203607-209669
minikube.k8s.io/updated_at=2021_09_15T20_37_15_0700
minikube.k8s.io/version=v1.23.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 15 Sep 2021 20:37:04 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-20210915203607-209669
AcquireTime: <unset>
RenewTime: Wed, 15 Sep 2021 20:39:09 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 15 Sep 2021 20:38:59 +0000 Wed, 15 Sep 2021 20:37:01 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 15 Sep 2021 20:38:59 +0000 Wed, 15 Sep 2021 20:37:01 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 15 Sep 2021 20:38:59 +0000 Wed, 15 Sep 2021 20:37:01 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 15 Sep 2021 20:38:59 +0000 Wed, 15 Sep 2021 20:37:25 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.238
Hostname: pause-20210915203607-209669
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2033056Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2033056Ki
pods: 110
System Info:
Machine ID: 3f6c3755f32145e496c3a3e709a32d14
System UUID: 3f6c3755-f321-45e4-96c3-a3e709a32d14
Boot ID: bb92b492-736c-4be6-bdbb-7e6dec0890e9
Kernel Version: 4.19.202
OS Image: Buildroot 2021.02.4
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.4.9
Kubelet Version: v1.22.1
Kube-Proxy Version: v1.22.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
  kube-system                 coredns-78fcd69978-stp22                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     111s
  kube-system                 etcd-pause-20210915203607-209669                        100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m8s
  kube-system                 kube-apiserver-pause-20210915203607-209669              250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
  kube-system                 kube-controller-manager-pause-20210915203607-209669     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m8s
  kube-system                 kube-proxy-knsd4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
  kube-system                 kube-scheduler-pause-20210915203607-209669              100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
  cpu                750m (37%)   0 (0%)
  memory             170Mi (8%)   170Mi (8%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 2m23s (x7 over 2m23s) kubelet Node pause-20210915203607-209669 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m23s (x7 over 2m23s) kubelet Node pause-20210915203607-209669 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m23s (x6 over 2m23s) kubelet Node pause-20210915203607-209669 status is now: NodeHasSufficientPID
Normal Starting 118s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 118s kubelet Node pause-20210915203607-209669 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 118s kubelet Node pause-20210915203607-209669 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 118s kubelet Node pause-20210915203607-209669 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 118s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 113s kubelet Node pause-20210915203607-209669 status is now: NodeReady
Normal Starting 27s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 27s (x8 over 27s) kubelet Node pause-20210915203607-209669 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 27s (x8 over 27s) kubelet Node pause-20210915203607-209669 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 27s (x7 over 27s) kubelet Node pause-20210915203607-209669 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 27s kubelet Updated Node Allocatable limit across pods
*
* ==> dmesg <==
* on the kernel command line
[ +0.000019] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.594958] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
[ +0.033987] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +1.159076] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1728 comm=systemd-network
[ +1.448665] vboxguest: loading out-of-tree module taints kernel.
[ +0.006696] vboxguest: PCI device not found, probably running on physical hardware.
[ +1.231177] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +19.133293] systemd-fstab-generator[2089]: Ignoring "noauto" for root device
[ +2.505137] systemd-fstab-generator[2120]: Ignoring "noauto" for root device
[ +0.131567] systemd-fstab-generator[2131]: Ignoring "noauto" for root device
[ +0.236080] systemd-fstab-generator[2159]: Ignoring "noauto" for root device
[ +5.570871] systemd-fstab-generator[2354]: Ignoring "noauto" for root device
[Sep15 20:37] systemd-fstab-generator[2762]: Ignoring "noauto" for root device
[ +14.436374] kauditd_printk_skb: 38 callbacks suppressed
[Sep15 20:38] kauditd_printk_skb: 128 callbacks suppressed
[ +18.554276] NFSD: Unable to end grace period: -110
[ +10.370085] systemd-fstab-generator[3652]: Ignoring "noauto" for root device
[ +0.282932] systemd-fstab-generator[3663]: Ignoring "noauto" for root device
[ +0.355879] systemd-fstab-generator[3686]: Ignoring "noauto" for root device
[ +5.795281] kauditd_printk_skb: 2 callbacks suppressed
[ +4.670441] systemd-fstab-generator[4592]: Ignoring "noauto" for root device
[ +14.086557] kauditd_printk_skb: 53 callbacks suppressed
[Sep15 20:39] kauditd_printk_skb: 23 callbacks suppressed
*
* ==> etcd [70722936a0bd3af13f288c55b295cc5bfa4175c7d7deffe8665d23628b4c56f4] <==
*
*
* ==> etcd [a28b014002fbef2b2af4bfbeacb7e2322131f134f2269f5442818f0c28fa967c] <==
* {"level":"info","ts":"2021-09-15T20:38:54.148Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"fff3906243738b90","initial-advertise-peer-urls":["https://192.168.39.238:2380"],"listen-peer-urls":["https://192.168.39.238:2380"],"advertise-client-urls":["https://192.168.39.238:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.238:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2021-09-15T20:38:54.149Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"fff3906243738b90","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2021-09-15T20:38:54.150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 switched to configuration voters=(18443243650725153680)"}
{"level":"info","ts":"2021-09-15T20:38:54.150Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.39.238:2380"}
{"level":"info","ts":"2021-09-15T20:38:54.150Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.39.238:2380"}
{"level":"info","ts":"2021-09-15T20:38:54.150Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"3658928c14b8a733","local-member-id":"fff3906243738b90","added-peer-id":"fff3906243738b90","added-peer-peer-urls":["https://192.168.39.238:2380"]}
{"level":"info","ts":"2021-09-15T20:38:54.151Z","caller":"membership/cluster.go:523","msg":"updated cluster version","cluster-id":"3658928c14b8a733","local-member-id":"fff3906243738b90","from":"3.5","to":"3.5"}
{"level":"info","ts":"2021-09-15T20:38:54.151Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2021-09-15T20:38:54.604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 is starting a new election at term 2"}
{"level":"info","ts":"2021-09-15T20:38:54.605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 became pre-candidate at term 2"}
{"level":"info","ts":"2021-09-15T20:38:54.605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 received MsgPreVoteResp from fff3906243738b90 at term 2"}
{"level":"info","ts":"2021-09-15T20:38:54.605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 became candidate at term 3"}
{"level":"info","ts":"2021-09-15T20:38:54.606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 received MsgVoteResp from fff3906243738b90 at term 3"}
{"level":"info","ts":"2021-09-15T20:38:54.606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 became leader at term 3"}
{"level":"info","ts":"2021-09-15T20:38:54.606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fff3906243738b90 elected leader fff3906243738b90 at term 3"}
{"level":"info","ts":"2021-09-15T20:38:54.607Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"fff3906243738b90","local-member-attributes":"{Name:pause-20210915203607-209669 ClientURLs:[https://192.168.39.238:2379]}","request-path":"/0/members/fff3906243738b90/attributes","cluster-id":"3658928c14b8a733","publish-timeout":"7s"}
{"level":"info","ts":"2021-09-15T20:38:54.608Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2021-09-15T20:38:54.615Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2021-09-15T20:38:54.625Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.238:2379"}
{"level":"info","ts":"2021-09-15T20:38:54.645Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2021-09-15T20:38:54.654Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
{"level":"info","ts":"2021-09-15T20:38:54.655Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
{"level":"info","ts":"2021-09-15T20:39:04.719Z","caller":"traceutil/trace.go:171","msg":"trace[1217229549] linearizableReadLoop","detail":"{readStateIndex:571; appliedIndex:571; }","duration":"218.600769ms","start":"2021-09-15T20:39:04.500Z","end":"2021-09-15T20:39:04.719Z","steps":["trace[1217229549] 'read index received' (duration: 218.577922ms)","trace[1217229549] 'applied index is now lower than readState.Index' (duration: 12.395µs)"],"step_count":2}
{"level":"warn","ts":"2021-09-15T20:39:04.721Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"221.139755ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/pause-20210915203607-209669.16a5195e005d87f1\" ","response":"range_response_count:1 size:703"}
{"level":"info","ts":"2021-09-15T20:39:04.722Z","caller":"traceutil/trace.go:171","msg":"trace[705427005] range","detail":"{range_begin:/registry/events/default/pause-20210915203607-209669.16a5195e005d87f1; range_end:; response_count:1; response_revision:533; }","duration":"221.439548ms","start":"2021-09-15T20:39:04.500Z","end":"2021-09-15T20:39:04.721Z","steps":["trace[705427005] 'agreement among raft nodes before linearized reading' (duration: 218.951244ms)"],"step_count":1}
*
* ==> kernel <==
* 20:39:18 up 3 min, 0 users, load average: 2.36, 1.10, 0.43
Linux pause-20210915203607-209669 4.19.202 #1 SMP Wed Sep 15 00:20:18 UTC 2021 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.4"
*
* ==> kube-apiserver [54db2874191b56ed57e53ad7df7c9d0aaa7745f86734b46c3406cfc34268c41c] <==
* I0915 20:38:59.235200 1 naming_controller.go:291] Starting NamingConditionController
I0915 20:38:59.235282 1 establishing_controller.go:76] Starting EstablishingController
I0915 20:38:59.235342 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0915 20:38:59.235402 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0915 20:38:59.235541 1 crd_finalizer.go:266] Starting CRDFinalizer
I0915 20:38:59.389953 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I0915 20:38:59.409416 1 shared_informer.go:247] Caches are synced for crd-autoregister
I0915 20:38:59.418701 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0915 20:38:59.414335 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0915 20:38:59.438774 1 cache.go:39] Caches are synced for autoregister controller
I0915 20:38:59.439677 1 shared_informer.go:247] Caches are synced for node_authorizer
I0915 20:38:59.440829 1 apf_controller.go:304] Running API Priority and Fairness config worker
I0915 20:38:59.443246 1 cache.go:39] Caches are synced for AvailableConditionController controller
E0915 20:38:59.501509 1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I0915 20:39:00.137877 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0915 20:39:00.138036 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0915 20:39:00.170581 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0915 20:39:01.892330 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0915 20:39:01.956210 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0915 20:39:02.147767 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0915 20:39:02.197046 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0915 20:39:02.211586 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0915 20:39:12.226834 1 controller.go:611] quota admission added evaluator for: endpoints
I0915 20:39:12.356356 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0915 20:39:13.900414 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
*
* ==> kube-apiserver [85f3e86ba3483821d43d8a9db6620b5b51676ceafad4f1b3d2647491f21f21b4] <==
*
*
* ==> kube-controller-manager [737d465cce63a89f9d78d41f4c6796d8153d72a3e002c66f3ebc1a39e10d1a6c] <==
* I0915 20:39:12.201767 1 shared_informer.go:247] Caches are synced for PVC protection
I0915 20:39:12.181167 1 shared_informer.go:247] Caches are synced for crt configmap
I0915 20:39:12.180203 1 shared_informer.go:247] Caches are synced for persistent volume
I0915 20:39:12.197949 1 shared_informer.go:247] Caches are synced for taint
I0915 20:39:12.204418 1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone:
W0915 20:39:12.205182 1 node_lifecycle_controller.go:1013] Missing timestamp for Node pause-20210915203607-209669. Assuming now as a timestamp.
I0915 20:39:12.205788 1 node_lifecycle_controller.go:1214] Controller detected that zone is now in state Normal.
I0915 20:39:12.198621 1 shared_informer.go:247] Caches are synced for job
I0915 20:39:12.208533 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
I0915 20:39:12.209231 1 event.go:291] "Event occurred" object="pause-20210915203607-209669" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20210915203607-209669 event: Registered Node pause-20210915203607-209669 in Controller"
I0915 20:39:12.210628 1 shared_informer.go:247] Caches are synced for TTL
I0915 20:39:12.213199 1 shared_informer.go:247] Caches are synced for daemon sets
I0915 20:39:12.213394 1 shared_informer.go:247] Caches are synced for stateful set
I0915 20:39:12.246424 1 shared_informer.go:247] Caches are synced for disruption
I0915 20:39:12.246761 1 disruption.go:371] Sending events to api server.
I0915 20:39:12.249390 1 shared_informer.go:247] Caches are synced for namespace
I0915 20:39:12.262679 1 shared_informer.go:247] Caches are synced for ReplicationController
I0915 20:39:12.266825 1 shared_informer.go:247] Caches are synced for service account
I0915 20:39:12.317274 1 shared_informer.go:247] Caches are synced for endpoint_slice
I0915 20:39:12.328413 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I0915 20:39:12.360351 1 shared_informer.go:247] Caches are synced for resource quota
I0915 20:39:12.361936 1 shared_informer.go:247] Caches are synced for resource quota
I0915 20:39:12.774275 1 shared_informer.go:247] Caches are synced for garbage collector
I0915 20:39:12.790316 1 shared_informer.go:247] Caches are synced for garbage collector
I0915 20:39:12.790909 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-controller-manager [d1b803c24aa9a7625e950de523ac9acb1542a7ade553e3342ff89baeddea2b9a] <==
*
*
* ==> kube-proxy [b3d34c6906044e0bb2166190ddfa4f3ddd704379114a54aa1c3cc3ea0a72fe3e] <==
* I0915 20:39:01.023705 1 node.go:172] Successfully retrieved node IP: 192.168.39.238
I0915 20:39:01.023880 1 server_others.go:140] Detected node IP 192.168.39.238
W0915 20:39:01.023911 1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
W0915 20:39:01.124679 1 server_others.go:197] No iptables support for IPv6: exit status 3
I0915 20:39:01.124710 1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
I0915 20:39:01.124737 1 server_others.go:212] Using iptables Proxier.
I0915 20:39:01.125163 1 server.go:649] Version: v1.22.1
I0915 20:39:01.127379 1 config.go:224] Starting endpoint slice config controller
I0915 20:39:01.127403 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0915 20:39:01.127663 1 config.go:315] Starting service config controller
I0915 20:39:01.127675 1 shared_informer.go:240] Waiting for caches to sync for service config
E0915 20:39:01.143719 1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pause-20210915203607-209669.16a519603d324931", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc048b45947817993, ext:272199124, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-pause-20210915203607-209669", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"pause
-20210915203607-209669", UID:"pause-20210915203607-209669", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "pause-20210915203607-209669.16a519603d324931" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
I0915 20:39:01.228746 1 shared_informer.go:247] Caches are synced for service config
I0915 20:39:01.229398 1 shared_informer.go:247] Caches are synced for endpoint slice config
*
* ==> kube-proxy [d0a53108ac9c335a7f9f23fe4fc0cacebb805ce2e8e234e45d65384428b0269c] <==
*
*
* ==> kube-scheduler [7abb377ea5dc05445a5bfd37455c97259a989e3a6eed46985b60a8a749e085d4] <==
* I0915 20:38:54.721417 1 serving.go:347] Generated self-signed cert in-memory
W0915 20:38:59.267308 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0915 20:38:59.267801 1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0915 20:38:59.270845 1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
W0915 20:38:59.271186 1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0915 20:38:59.397213 1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
I0915 20:38:59.397821 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0915 20:38:59.406066 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0915 20:38:59.397889 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
E0915 20:38:59.503593 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E0915 20:38:59.507979 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E0915 20:38:59.508615 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E0915 20:38:59.509061 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E0915 20:38:59.509275 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E0915 20:38:59.512010 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
I0915 20:38:59.611620 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [92765dba236c71d174f12db9fb303c5e8e28954e4784730673a1ba475c61e593] <==
*
*
* ==> kubelet <==
* -- Journal begins at Wed 2021-09-15 20:36:18 UTC, ends at Wed 2021-09-15 20:39:18 UTC. --
Sep 15 20:38:58 pause-20210915203607-209669 kubelet[4598]: E0915 20:38:58.802559 4598 kubelet.go:2407] "Error getting node" err="node \"pause-20210915203607-209669\" not found"
Sep 15 20:38:58 pause-20210915203607-209669 kubelet[4598]: E0915 20:38:58.903413 4598 kubelet.go:2407] "Error getting node" err="node \"pause-20210915203607-209669\" not found"
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: E0915 20:38:59.004855 4598 kubelet.go:2407] "Error getting node" err="node \"pause-20210915203607-209669\" not found"
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: E0915 20:38:59.105070 4598 kubelet.go:2407] "Error getting node" err="node \"pause-20210915203607-209669\" not found"
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: E0915 20:38:59.208031 4598 kubelet.go:2407] "Error getting node" err="node \"pause-20210915203607-209669\" not found"
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: E0915 20:38:59.309041 4598 kubelet.go:2407] "Error getting node" err="node \"pause-20210915203607-209669\" not found"
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: I0915 20:38:59.376006 4598 apiserver.go:52] "Watching apiserver"
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: I0915 20:38:59.410567 4598 kuberuntime_manager.go:1075] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: I0915 20:38:59.414953 4598 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: I0915 20:38:59.417339 4598 topology_manager.go:200] "Topology Admit Handler"
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: I0915 20:38:59.417852 4598 topology_manager.go:200] "Topology Admit Handler"
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: I0915 20:38:59.537710 4598 kubelet_node_status.go:109] "Node was previously registered" node="pause-20210915203607-209669"
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: I0915 20:38:59.540044 4598 kubelet_node_status.go:74] "Successfully registered node" node="pause-20210915203607-209669"
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: I0915 20:38:59.607697 4598 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ad260f2-128d-46b8-9f0c-33929b1c2e24-config-volume\") pod \"coredns-78fcd69978-stp22\" (UID: \"3ad260f2-128d-46b8-9f0c-33929b1c2e24\") "
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: I0915 20:38:59.608076 4598 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hrwv\" (UniqueName: \"kubernetes.io/projected/cd8c788e-8ca0-46be-be18-92e8ff747405-kube-api-access-7hrwv\") pod \"kube-proxy-knsd4\" (UID: \"cd8c788e-8ca0-46be-be18-92e8ff747405\") "
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: I0915 20:38:59.608276 4598 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cd8c788e-8ca0-46be-be18-92e8ff747405-kube-proxy\") pod \"kube-proxy-knsd4\" (UID: \"cd8c788e-8ca0-46be-be18-92e8ff747405\") "
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: I0915 20:38:59.608425 4598 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd8c788e-8ca0-46be-be18-92e8ff747405-xtables-lock\") pod \"kube-proxy-knsd4\" (UID: \"cd8c788e-8ca0-46be-be18-92e8ff747405\") "
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: I0915 20:38:59.608678 4598 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd8c788e-8ca0-46be-be18-92e8ff747405-lib-modules\") pod \"kube-proxy-knsd4\" (UID: \"cd8c788e-8ca0-46be-be18-92e8ff747405\") "
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: I0915 20:38:59.608826 4598 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbv97\" (UniqueName: \"kubernetes.io/projected/3ad260f2-128d-46b8-9f0c-33929b1c2e24-kube-api-access-sbv97\") pod \"coredns-78fcd69978-stp22\" (UID: \"3ad260f2-128d-46b8-9f0c-33929b1c2e24\") "
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: I0915 20:38:59.608957 4598 reconciler.go:157] "Reconciler: start to sync state"
Sep 15 20:39:00 pause-20210915203607-209669 kubelet[4598]: I0915 20:39:00.325089 4598 scope.go:110] "RemoveContainer" containerID="d0a53108ac9c335a7f9f23fe4fc0cacebb805ce2e8e234e45d65384428b0269c"
Sep 15 20:39:03 pause-20210915203607-209669 kubelet[4598]: I0915 20:39:03.821570 4598 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
Sep 15 20:39:13 pause-20210915203607-209669 kubelet[4598]: I0915 20:39:13.885835 4598 topology_manager.go:200] "Topology Admit Handler"
Sep 15 20:39:14 pause-20210915203607-209669 kubelet[4598]: I0915 20:39:14.050059 4598 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/42831a1f-205d-4116-b49c-dbc188c15aa2-tmp\") pod \"storage-provisioner\" (UID: \"42831a1f-205d-4116-b49c-dbc188c15aa2\") "
Sep 15 20:39:14 pause-20210915203607-209669 kubelet[4598]: I0915 20:39:14.050401 4598 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl7c6\" (UniqueName: \"kubernetes.io/projected/42831a1f-205d-4116-b49c-dbc188c15aa2-kube-api-access-gl7c6\") pod \"storage-provisioner\" (UID: \"42831a1f-205d-4116-b49c-dbc188c15aa2\") "
*
* ==> storage-provisioner [962ec633e8582d72ec61c1683b04913be4c850a3e5327e410d6eb9c97293b4d5] <==
* I0915 20:39:15.079902 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0915 20:39:15.099727 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0915 20:39:15.099977 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0915 20:39:15.124859 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0915 20:39:15.125647 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20210915203607-209669_cd11252b-f232-4fcc-9951-7beddc5db04d!
I0915 20:39:15.126945 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"45f02105-eeeb-47da-97e2-fd20e8dca1a2", APIVersion:"v1", ResourceVersion:"588", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20210915203607-209669_cd11252b-f232-4fcc-9951-7beddc5db04d became leader
I0915 20:39:15.242575 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20210915203607-209669_cd11252b-f232-4fcc-9951-7beddc5db04d!
-- /stdout --
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20210915203607-209669 -n pause-20210915203607-209669
helpers_test.go:262: (dbg) Run: kubectl --context pause-20210915203607-209669 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods:
helpers_test.go:273: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: describe non-running pods <======
helpers_test.go:276: (dbg) Run: kubectl --context pause-20210915203607-209669 describe pod
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context pause-20210915203607-209669 describe pod : exit status 1 (74.702807ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:278: kubectl --context pause-20210915203607-209669 describe pod : exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210915203607-209669 -n pause-20210915203607-209669
helpers_test.go:245: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p pause-20210915203607-209669 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p pause-20210915203607-209669 logs -n 25: (2.077960726s)
helpers_test.go:253: TestPause/serial/SecondStartNoReconfiguration logs:
-- stdout --
*
* ==> Audit <==
* |---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
| start | -p | multinode-20210915200256-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:11:04 UTC | Wed, 15 Sep 2021 20:16:39 UTC |
| | multinode-20210915200256-209669 | | | | | |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| -p | multinode-20210915200256-209669 | multinode-20210915200256-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:16:39 UTC | Wed, 15 Sep 2021 20:16:40 UTC |
| | node delete m03 | | | | | |
| -p | multinode-20210915200256-209669 | multinode-20210915200256-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:16:41 UTC | Wed, 15 Sep 2021 20:19:45 UTC |
| | stop | | | | | |
| start | -p | multinode-20210915200256-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:19:46 UTC | Wed, 15 Sep 2021 20:23:09 UTC |
| | multinode-20210915200256-209669 | | | | | |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p | multinode-20210915200256-209669-m03 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:23:10 UTC | Wed, 15 Sep 2021 20:24:16 UTC |
| | multinode-20210915200256-209669-m03 | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | multinode-20210915200256-209669-m03 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:24:16 UTC | Wed, 15 Sep 2021 20:24:18 UTC |
| | multinode-20210915200256-209669-m03 | | | | | |
| delete | -p | multinode-20210915200256-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:24:18 UTC | Wed, 15 Sep 2021 20:24:20 UTC |
| | multinode-20210915200256-209669 | | | | | |
| start | -p | test-preload-20210915203106-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:31:06 UTC | Wed, 15 Sep 2021 20:33:20 UTC |
| | test-preload-20210915203106-209669 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.17.0 | | | | | |
| ssh | -p | test-preload-20210915203106-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:33:20 UTC | Wed, 15 Sep 2021 20:33:24 UTC |
| | test-preload-20210915203106-209669 | | | | | |
| | -- sudo crictl pull busybox | | | | | |
| start | -p | test-preload-20210915203106-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:33:24 UTC | Wed, 15 Sep 2021 20:34:20 UTC |
| | test-preload-20210915203106-209669 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | -v=1 --wait=true --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.17.3 | | | | | |
| ssh | -p | test-preload-20210915203106-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:34:20 UTC | Wed, 15 Sep 2021 20:34:20 UTC |
| | test-preload-20210915203106-209669 | | | | | |
| | -- sudo crictl image ls | | | | | |
| delete | -p | test-preload-20210915203106-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:34:20 UTC | Wed, 15 Sep 2021 20:34:21 UTC |
| | test-preload-20210915203106-209669 | | | | | |
| start | -p | scheduled-stop-20210915203421-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:34:21 UTC | Wed, 15 Sep 2021 20:35:28 UTC |
| | scheduled-stop-20210915203421-209669 | | | | | |
| | --memory=2048 --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| stop | -p | scheduled-stop-20210915203421-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:35:29 UTC | Wed, 15 Sep 2021 20:35:29 UTC |
| | scheduled-stop-20210915203421-209669 | | | | | |
| | --cancel-scheduled | | | | | |
| stop | -p | scheduled-stop-20210915203421-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:35:41 UTC | Wed, 15 Sep 2021 20:35:48 UTC |
| | scheduled-stop-20210915203421-209669 | | | | | |
| | --schedule 5s | | | | | |
| delete | -p | scheduled-stop-20210915203421-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:36:06 UTC | Wed, 15 Sep 2021 20:36:07 UTC |
| | scheduled-stop-20210915203421-209669 | | | | | |
| start | -p | kubernetes-upgrade-20210915203607-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:36:07 UTC | Wed, 15 Sep 2021 20:37:52 UTC |
| | kubernetes-upgrade-20210915203607-209669 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.14.0 | | | | | |
| | --alsologtostderr -v=1 --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| stop | -p | kubernetes-upgrade-20210915203607-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:37:52 UTC | Wed, 15 Sep 2021 20:37:55 UTC |
| | kubernetes-upgrade-20210915203607-209669 | | | | | |
| start | -p pause-20210915203607-209669 | pause-20210915203607-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:36:07 UTC | Wed, 15 Sep 2021 20:38:07 UTC |
| | --memory=2048 | | | | | |
| | --install-addons=false | | | | | |
| | --wait=all --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p | offline-containerd-20210915203607-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:36:07 UTC | Wed, 15 Sep 2021 20:39:12 UTC |
| | offline-containerd-20210915203607-209669 | | | | | |
| | --alsologtostderr -v=1 --memory=2048 | | | | | |
| | --wait=true --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | offline-containerd-20210915203607-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:39:12 UTC | Wed, 15 Sep 2021 20:39:13 UTC |
| | offline-containerd-20210915203607-209669 | | | | | |
| delete | -p | kubenet-20210915203913-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:39:13 UTC | Wed, 15 Sep 2021 20:39:13 UTC |
| | kubenet-20210915203913-209669 | | | | | |
| delete | -p false-20210915203913-209669 | false-20210915203913-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:39:14 UTC | Wed, 15 Sep 2021 20:39:14 UTC |
| start | -p pause-20210915203607-209669 | pause-20210915203607-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:38:07 UTC | Wed, 15 Sep 2021 20:39:16 UTC |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| -p | pause-20210915203607-209669 | pause-20210915203607-209669 | jenkins | v1.23.0 | Wed, 15 Sep 2021 20:39:16 UTC | Wed, 15 Sep 2021 20:39:18 UTC |
| | logs -n 25 | | | | | |
|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
*
* ==> Last Start <==
* Log file created at: 2021/09/15 20:39:18
Running on machine: debian-jenkins-agent-8
Binary: Built with gc go1.17 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0915 20:39:18.100544 246243 out.go:298] Setting OutFile to fd 1 ...
I0915 20:39:18.100788 246243 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0915 20:39:18.100802 246243 out.go:311] Setting ErrFile to fd 2...
I0915 20:39:18.100809 246243 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0915 20:39:18.100956 246243 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/bin
I0915 20:39:18.101334 246243 out.go:305] Setting JSON to false
I0915 20:39:18.157941 246243 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-8","uptime":19321,"bootTime":1631719038,"procs":191,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
I0915 20:39:18.158066 246243 start.go:121] virtualization: kvm guest
I0915 20:39:18.160296 246243 out.go:177] * [force-systemd-env-20210915203918-209669] minikube v1.23.0 on Debian 9.13 (kvm/amd64)
I0915 20:39:18.162228 246243 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/kubeconfig
I0915 20:39:18.160451 246243 notify.go:169] Checking for updates...
I0915 20:39:18.164124 246243 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0915 20:39:18.165645 246243 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube
I0915 20:39:18.167028 246243 out.go:177] - MINIKUBE_LOCATION=12425
I0915 20:39:18.168602 246243 out.go:177] - MINIKUBE_FORCE_SYSTEMD=true
I0915 20:39:18.169165 246243 config.go:177] Loaded profile config "kubernetes-upgrade-20210915203607-209669": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.2-rc.0
I0915 20:39:18.169274 246243 config.go:177] Loaded profile config "pause-20210915203607-209669": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.1
I0915 20:39:18.169342 246243 config.go:177] Loaded profile config "stopped-upgrade-20210915203607-209669": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0915 20:39:18.169387 246243 driver.go:343] Setting default libvirt URI to qemu:///system
I0915 20:39:18.207702 246243 out.go:177] * Using the kvm2 driver based on user configuration
I0915 20:39:18.207737 246243 start.go:278] selected driver: kvm2
I0915 20:39:18.207744 246243 start.go:751] validating driver "kvm2" against <nil>
I0915 20:39:18.207768 246243 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0915 20:39:18.209113 246243 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0915 20:39:18.209329 246243 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0915 20:39:18.225129 246243 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.23.0
I0915 20:39:18.225209 246243 start_flags.go:264] no existing cluster config was found, will generate one from the flags
I0915 20:39:18.225387 246243 start_flags.go:719] Wait components to verify : map[apiserver:true system_pods:true]
I0915 20:39:18.225428 246243 cni.go:93] Creating CNI manager for ""
I0915 20:39:18.225446 246243 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
I0915 20:39:18.225453 246243 start_flags.go:273] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0915 20:39:18.225468 246243 start_flags.go:278] config:
{Name:force-systemd-env-20210915203918-209669 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.1 ClusterName:force-systemd-env-20210915203918-209669 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clust
er.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
I0915 20:39:18.225640 246243 iso.go:123] acquiring lock: {Name:mk297a0af7a5c0740af600c0c91a5b7e9ddafd38 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0915 20:39:18.228445 246243 out.go:177] * Starting control plane node force-systemd-env-20210915203918-209669 in cluster force-systemd-env-20210915203918-209669
I0915 20:39:18.228471 246243 preload.go:131] Checking if preload exists for k8s version v1.22.1 and runtime containerd
I0915 20:39:18.228539 246243 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v12-v1.22.1-containerd-overlay2-amd64.tar.lz4
I0915 20:39:18.228570 246243 cache.go:57] Caching tarball of preloaded images
I0915 20:39:18.228748 246243 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v12-v1.22.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0915 20:39:18.228770 246243 cache.go:60] Finished verifying existence of preloaded tar for v1.22.1 on containerd
I0915 20:39:18.228893 246243 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/profiles/force-systemd-env-20210915203918-209669/config.json ...
I0915 20:39:18.228922 246243 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12425-205797-07c6aa0a52dfb95e89e99689c8f3f45bf5722157/.minikube/profiles/force-systemd-env-20210915203918-209669/config.json: {Name:mka1cfede79c984fe63492576db5c27cd739d3d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0915 20:39:18.229149 246243 cache.go:206] Successfully downloaded all kic artifacts
I0915 20:39:18.229183 246243 start.go:313] acquiring machines lock for force-systemd-env-20210915203918-209669: {Name:mk02ff60ae5e10e39476a23d3a5c6dd42c42335e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0915 20:39:18.229241 246243 start.go:317] acquired machines lock for "force-systemd-env-20210915203918-209669" in 41.024µs
I0915 20:39:18.229271 246243 start.go:89] Provisioning new machine with config: &{Name:force-systemd-env-20210915203918-209669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12425/minikube-v1.23.0-1631662909-12425.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.26-1631295795-12425@sha256:7d61c0b6cf6832c8015ada78640635c5ab74b72f12f51bcc4c7660b0be01af56 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.22.1 ClusterName:force-systemd-env-20210915203918-209669 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.22.1 ControlPlane:true Worker:true}
I0915 20:39:18.229340 246243 start.go:126] createHost starting for "" (driver="kvm2")
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
962ec633e8582 6e38f40d628db 5 seconds ago Running storage-provisioner 0 e320243000777
bb38e8ada4c4b 8d147537fb7d1 18 seconds ago Running coredns 1 70a1f8909af5f
b3d34c6906044 36c4ebbc9d979 20 seconds ago Running kube-proxy 2 a2f5a80436bd4
a28b014002fbe 0048118155842 28 seconds ago Running etcd 2 8784e2e213dd5
737d465cce63a 6e002eb89a881 28 seconds ago Running kube-controller-manager 2 cb4614124ceb5
54db2874191b5 f30469a2491a5 28 seconds ago Running kube-apiserver 2 c44810c9c64fb
7abb377ea5dc0 aca5ededae9c8 28 seconds ago Running kube-scheduler 2 afc461e14ebcd
70722936a0bd3 0048118155842 36 seconds ago Exited etcd 1 8784e2e213dd5
d1b803c24aa9a 6e002eb89a881 36 seconds ago Exited kube-controller-manager 1 cb4614124ceb5
d0a53108ac9c3 36c4ebbc9d979 37 seconds ago Exited kube-proxy 1 a2f5a80436bd4
85f3e86ba3483 f30469a2491a5 37 seconds ago Exited kube-apiserver 1 c44810c9c64fb
92765dba236c7 aca5ededae9c8 38 seconds ago Exited kube-scheduler 1 afc461e14ebcd
c90a36c8826cb 8d147537fb7d1 About a minute ago Exited coredns 0 006e8603867df
*
* ==> containerd <==
* -- Journal begins at Wed 2021-09-15 20:36:18 UTC, ends at Wed 2021-09-15 20:39:20 UTC. --
Sep 15 20:38:53 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:38:53.441386014Z" level=info msg="StartContainer for \"54db2874191b56ed57e53ad7df7c9d0aaa7745f86734b46c3406cfc34268c41c\" returns successfully"
Sep 15 20:38:53 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:38:53.828040772Z" level=info msg="StartContainer for \"a28b014002fbef2b2af4bfbeacb7e2322131f134f2269f5442818f0c28fa967c\" returns successfully"
Sep 15 20:38:59 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:38:59.414225522Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 15 20:39:00 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:00.030614384Z" level=info msg="StopPodSandbox for \"006e8603867dffe63e26a166dae172791c6ea3d0404a8b21f75ad7ff42f43eb9\""
Sep 15 20:39:00 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:00.030907631Z" level=info msg="Container to stop \"c90a36c8826cb3a93ce5571908717dbce4d6e5ad1fa18ad1b8831160513af998\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 15 20:39:00 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:00.092109097Z" level=info msg="TearDown network for sandbox \"006e8603867dffe63e26a166dae172791c6ea3d0404a8b21f75ad7ff42f43eb9\" successfully"
Sep 15 20:39:00 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:00.092326347Z" level=info msg="StopPodSandbox for \"006e8603867dffe63e26a166dae172791c6ea3d0404a8b21f75ad7ff42f43eb9\" returns successfully"
Sep 15 20:39:00 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:00.094099305Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-78fcd69978-stp22,Uid:3ad260f2-128d-46b8-9f0c-33929b1c2e24,Namespace:kube-system,Attempt:1,}"
Sep 15 20:39:00 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:00.334270717Z" level=info msg="CreateContainer within sandbox \"a2f5a80436bd4f2fa6397e200731cf807db5a71da8ca62c2c8de36e4b2acdbc9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:2,}"
Sep 15 20:39:00 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:00.536867132Z" level=info msg="CreateContainer within sandbox \"a2f5a80436bd4f2fa6397e200731cf807db5a71da8ca62c2c8de36e4b2acdbc9\" for &ContainerMetadata{Name:kube-proxy,Attempt:2,} returns container id \"b3d34c6906044e0bb2166190ddfa4f3ddd704379114a54aa1c3cc3ea0a72fe3e\""
Sep 15 20:39:00 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:00.537996578Z" level=info msg="StartContainer for \"b3d34c6906044e0bb2166190ddfa4f3ddd704379114a54aa1c3cc3ea0a72fe3e\""
Sep 15 20:39:00 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:00.586740004Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/70a1f8909af5f96588cd43d03815a63c3a8520eb16678f0322f13c2220d542b6 pid=4919
Sep 15 20:39:01 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:01.084090964Z" level=info msg="StartContainer for \"b3d34c6906044e0bb2166190ddfa4f3ddd704379114a54aa1c3cc3ea0a72fe3e\" returns successfully"
Sep 15 20:39:01 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:01.703268355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-78fcd69978-stp22,Uid:3ad260f2-128d-46b8-9f0c-33929b1c2e24,Namespace:kube-system,Attempt:1,} returns sandbox id \"70a1f8909af5f96588cd43d03815a63c3a8520eb16678f0322f13c2220d542b6\""
Sep 15 20:39:01 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:01.726782333Z" level=info msg="CreateContainer within sandbox \"70a1f8909af5f96588cd43d03815a63c3a8520eb16678f0322f13c2220d542b6\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
Sep 15 20:39:01 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:01.873187400Z" level=info msg="CreateContainer within sandbox \"70a1f8909af5f96588cd43d03815a63c3a8520eb16678f0322f13c2220d542b6\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"bb38e8ada4c4bc7e6ca55174a34039e53e45e5ecbb18e24cfb536ec6417d1983\""
Sep 15 20:39:01 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:01.876409632Z" level=info msg="StartContainer for \"bb38e8ada4c4bc7e6ca55174a34039e53e45e5ecbb18e24cfb536ec6417d1983\""
Sep 15 20:39:02 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:02.120561099Z" level=info msg="StartContainer for \"bb38e8ada4c4bc7e6ca55174a34039e53e45e5ecbb18e24cfb536ec6417d1983\" returns successfully"
Sep 15 20:39:14 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:14.195248887Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:42831a1f-205d-4116-b49c-dbc188c15aa2,Namespace:kube-system,Attempt:0,}"
Sep 15 20:39:14 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:14.251342761Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e320243000777a0a9f4d42a90ab32c4a77c76d42db64617eab427e00ea5da776 pid=5155
Sep 15 20:39:14 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:14.783726396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:42831a1f-205d-4116-b49c-dbc188c15aa2,Namespace:kube-system,Attempt:0,} returns sandbox id \"e320243000777a0a9f4d42a90ab32c4a77c76d42db64617eab427e00ea5da776\""
Sep 15 20:39:14 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:14.803034065Z" level=info msg="CreateContainer within sandbox \"e320243000777a0a9f4d42a90ab32c4a77c76d42db64617eab427e00ea5da776\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
Sep 15 20:39:14 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:14.877736451Z" level=info msg="CreateContainer within sandbox \"e320243000777a0a9f4d42a90ab32c4a77c76d42db64617eab427e00ea5da776\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"962ec633e8582d72ec61c1683b04913be4c850a3e5327e410d6eb9c97293b4d5\""
Sep 15 20:39:14 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:14.881240855Z" level=info msg="StartContainer for \"962ec633e8582d72ec61c1683b04913be4c850a3e5327e410d6eb9c97293b4d5\""
Sep 15 20:39:15 pause-20210915203607-209669 containerd[3694]: time="2021-09-15T20:39:15.059422195Z" level=info msg="StartContainer for \"962ec633e8582d72ec61c1683b04913be4c850a3e5327e410d6eb9c97293b4d5\" returns successfully"
*
* ==> coredns [bb38e8ada4c4bc7e6ca55174a34039e53e45e5ecbb18e24cfb536ec6417d1983] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
CoreDNS-1.8.4
linux/amd64, go1.16.4, 053c4d5
*
* ==> coredns [c90a36c8826cb3a93ce5571908717dbce4d6e5ad1fa18ad1b8831160513af998] <==
* [INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.8.4
linux/amd64, go1.16.4, 053c4d5
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] Reloading
[INFO] plugin/health: Going into lameduck mode for 5s
[INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
[INFO] Reloading complete
*
* ==> describe nodes <==
* Name: pause-20210915203607-209669
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-20210915203607-209669
kubernetes.io/os=linux
minikube.k8s.io/commit=66748304c4ca78061b718f95ac626a53ac360876
minikube.k8s.io/name=pause-20210915203607-209669
minikube.k8s.io/updated_at=2021_09_15T20_37_15_0700
minikube.k8s.io/version=v1.23.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 15 Sep 2021 20:37:04 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-20210915203607-209669
AcquireTime: <unset>
RenewTime: Wed, 15 Sep 2021 20:39:19 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 15 Sep 2021 20:38:59 +0000 Wed, 15 Sep 2021 20:37:01 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 15 Sep 2021 20:38:59 +0000 Wed, 15 Sep 2021 20:37:01 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 15 Sep 2021 20:38:59 +0000 Wed, 15 Sep 2021 20:37:01 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 15 Sep 2021 20:38:59 +0000 Wed, 15 Sep 2021 20:37:25 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.238
Hostname: pause-20210915203607-209669
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2033056Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2033056Ki
pods: 110
System Info:
Machine ID: 3f6c3755f32145e496c3a3e709a32d14
System UUID: 3f6c3755-f321-45e4-96c3-a3e709a32d14
Boot ID: bb92b492-736c-4be6-bdbb-7e6dec0890e9
Kernel Version: 4.19.202
OS Image: Buildroot 2021.02.4
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.4.9
Kubelet Version: v1.22.1
Kube-Proxy Version: v1.22.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-78fcd69978-stp22 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 114s
kube-system etcd-pause-20210915203607-209669 100m (5%) 0 (0%) 100Mi (5%) 0 (0%) 2m11s
kube-system kube-apiserver-pause-20210915203607-209669 250m (12%) 0 (0%) 0 (0%) 0 (0%) 2m1s
kube-system kube-controller-manager-pause-20210915203607-209669 200m (10%) 0 (0%) 0 (0%) 0 (0%) 2m11s
kube-system kube-proxy-knsd4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 114s
kube-system kube-scheduler-pause-20210915203607-209669 100m (5%) 0 (0%) 0 (0%) 0 (0%) 2m1s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 170Mi (8%) 170Mi (8%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 2m26s (x7 over 2m26s) kubelet Node pause-20210915203607-209669 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m26s (x7 over 2m26s) kubelet Node pause-20210915203607-209669 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m26s (x6 over 2m26s) kubelet Node pause-20210915203607-209669 status is now: NodeHasSufficientPID
Normal Starting 2m1s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 2m1s kubelet Node pause-20210915203607-209669 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m1s kubelet Node pause-20210915203607-209669 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m1s kubelet Node pause-20210915203607-209669 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m1s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 116s kubelet Node pause-20210915203607-209669 status is now: NodeReady
Normal Starting 30s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 30s (x8 over 30s) kubelet Node pause-20210915203607-209669 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 30s (x8 over 30s) kubelet Node pause-20210915203607-209669 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 30s (x7 over 30s) kubelet Node pause-20210915203607-209669 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 30s kubelet Updated Node Allocatable limit across pods
*
* ==> dmesg <==
* on the kernel command line
[ +0.000019] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.594958] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
[ +0.033987] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +1.159076] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1728 comm=systemd-network
[ +1.448665] vboxguest: loading out-of-tree module taints kernel.
[ +0.006696] vboxguest: PCI device not found, probably running on physical hardware.
[ +1.231177] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +19.133293] systemd-fstab-generator[2089]: Ignoring "noauto" for root device
[ +2.505137] systemd-fstab-generator[2120]: Ignoring "noauto" for root device
[ +0.131567] systemd-fstab-generator[2131]: Ignoring "noauto" for root device
[ +0.236080] systemd-fstab-generator[2159]: Ignoring "noauto" for root device
[ +5.570871] systemd-fstab-generator[2354]: Ignoring "noauto" for root device
[Sep15 20:37] systemd-fstab-generator[2762]: Ignoring "noauto" for root device
[ +14.436374] kauditd_printk_skb: 38 callbacks suppressed
[Sep15 20:38] kauditd_printk_skb: 128 callbacks suppressed
[ +18.554276] NFSD: Unable to end grace period: -110
[ +10.370085] systemd-fstab-generator[3652]: Ignoring "noauto" for root device
[ +0.282932] systemd-fstab-generator[3663]: Ignoring "noauto" for root device
[ +0.355879] systemd-fstab-generator[3686]: Ignoring "noauto" for root device
[ +5.795281] kauditd_printk_skb: 2 callbacks suppressed
[ +4.670441] systemd-fstab-generator[4592]: Ignoring "noauto" for root device
[ +14.086557] kauditd_printk_skb: 53 callbacks suppressed
[Sep15 20:39] kauditd_printk_skb: 23 callbacks suppressed
*
* ==> etcd [70722936a0bd3af13f288c55b295cc5bfa4175c7d7deffe8665d23628b4c56f4] <==
*
*
* ==> etcd [a28b014002fbef2b2af4bfbeacb7e2322131f134f2269f5442818f0c28fa967c] <==
* {"level":"info","ts":"2021-09-15T20:38:54.148Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"fff3906243738b90","initial-advertise-peer-urls":["https://192.168.39.238:2380"],"listen-peer-urls":["https://192.168.39.238:2380"],"advertise-client-urls":["https://192.168.39.238:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.238:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2021-09-15T20:38:54.149Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"fff3906243738b90","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2021-09-15T20:38:54.150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 switched to configuration voters=(18443243650725153680)"}
{"level":"info","ts":"2021-09-15T20:38:54.150Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.39.238:2380"}
{"level":"info","ts":"2021-09-15T20:38:54.150Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.39.238:2380"}
{"level":"info","ts":"2021-09-15T20:38:54.150Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"3658928c14b8a733","local-member-id":"fff3906243738b90","added-peer-id":"fff3906243738b90","added-peer-peer-urls":["https://192.168.39.238:2380"]}
{"level":"info","ts":"2021-09-15T20:38:54.151Z","caller":"membership/cluster.go:523","msg":"updated cluster version","cluster-id":"3658928c14b8a733","local-member-id":"fff3906243738b90","from":"3.5","to":"3.5"}
{"level":"info","ts":"2021-09-15T20:38:54.151Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2021-09-15T20:38:54.604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 is starting a new election at term 2"}
{"level":"info","ts":"2021-09-15T20:38:54.605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 became pre-candidate at term 2"}
{"level":"info","ts":"2021-09-15T20:38:54.605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 received MsgPreVoteResp from fff3906243738b90 at term 2"}
{"level":"info","ts":"2021-09-15T20:38:54.605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 became candidate at term 3"}
{"level":"info","ts":"2021-09-15T20:38:54.606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 received MsgVoteResp from fff3906243738b90 at term 3"}
{"level":"info","ts":"2021-09-15T20:38:54.606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 became leader at term 3"}
{"level":"info","ts":"2021-09-15T20:38:54.606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fff3906243738b90 elected leader fff3906243738b90 at term 3"}
{"level":"info","ts":"2021-09-15T20:38:54.607Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"fff3906243738b90","local-member-attributes":"{Name:pause-20210915203607-209669 ClientURLs:[https://192.168.39.238:2379]}","request-path":"/0/members/fff3906243738b90/attributes","cluster-id":"3658928c14b8a733","publish-timeout":"7s"}
{"level":"info","ts":"2021-09-15T20:38:54.608Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2021-09-15T20:38:54.615Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2021-09-15T20:38:54.625Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.238:2379"}
{"level":"info","ts":"2021-09-15T20:38:54.645Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2021-09-15T20:38:54.654Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
{"level":"info","ts":"2021-09-15T20:38:54.655Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
{"level":"info","ts":"2021-09-15T20:39:04.719Z","caller":"traceutil/trace.go:171","msg":"trace[1217229549] linearizableReadLoop","detail":"{readStateIndex:571; appliedIndex:571; }","duration":"218.600769ms","start":"2021-09-15T20:39:04.500Z","end":"2021-09-15T20:39:04.719Z","steps":["trace[1217229549] 'read index received' (duration: 218.577922ms)","trace[1217229549] 'applied index is now lower than readState.Index' (duration: 12.395µs)"],"step_count":2}
{"level":"warn","ts":"2021-09-15T20:39:04.721Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"221.139755ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/pause-20210915203607-209669.16a5195e005d87f1\" ","response":"range_response_count:1 size:703"}
{"level":"info","ts":"2021-09-15T20:39:04.722Z","caller":"traceutil/trace.go:171","msg":"trace[705427005] range","detail":"{range_begin:/registry/events/default/pause-20210915203607-209669.16a5195e005d87f1; range_end:; response_count:1; response_revision:533; }","duration":"221.439548ms","start":"2021-09-15T20:39:04.500Z","end":"2021-09-15T20:39:04.721Z","steps":["trace[705427005] 'agreement among raft nodes before linearized reading' (duration: 218.951244ms)"],"step_count":1}
*
* ==> kernel <==
* 20:39:21 up 3 min, 0 users, load average: 2.41, 1.13, 0.44
Linux pause-20210915203607-209669 4.19.202 #1 SMP Wed Sep 15 00:20:18 UTC 2021 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.4"
*
* ==> kube-apiserver [54db2874191b56ed57e53ad7df7c9d0aaa7745f86734b46c3406cfc34268c41c] <==
* I0915 20:38:59.235200 1 naming_controller.go:291] Starting NamingConditionController
I0915 20:38:59.235282 1 establishing_controller.go:76] Starting EstablishingController
I0915 20:38:59.235342 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0915 20:38:59.235402 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0915 20:38:59.235541 1 crd_finalizer.go:266] Starting CRDFinalizer
I0915 20:38:59.389953 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I0915 20:38:59.409416 1 shared_informer.go:247] Caches are synced for crd-autoregister
I0915 20:38:59.418701 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0915 20:38:59.414335 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0915 20:38:59.438774 1 cache.go:39] Caches are synced for autoregister controller
I0915 20:38:59.439677 1 shared_informer.go:247] Caches are synced for node_authorizer
I0915 20:38:59.440829 1 apf_controller.go:304] Running API Priority and Fairness config worker
I0915 20:38:59.443246 1 cache.go:39] Caches are synced for AvailableConditionController controller
E0915 20:38:59.501509 1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I0915 20:39:00.137877 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0915 20:39:00.138036 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0915 20:39:00.170581 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0915 20:39:01.892330 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0915 20:39:01.956210 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0915 20:39:02.147767 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0915 20:39:02.197046 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0915 20:39:02.211586 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0915 20:39:12.226834 1 controller.go:611] quota admission added evaluator for: endpoints
I0915 20:39:12.356356 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0915 20:39:13.900414 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
*
* ==> kube-apiserver [85f3e86ba3483821d43d8a9db6620b5b51676ceafad4f1b3d2647491f21f21b4] <==
*
*
* ==> kube-controller-manager [737d465cce63a89f9d78d41f4c6796d8153d72a3e002c66f3ebc1a39e10d1a6c] <==
* I0915 20:39:12.201767 1 shared_informer.go:247] Caches are synced for PVC protection
I0915 20:39:12.181167 1 shared_informer.go:247] Caches are synced for crt configmap
I0915 20:39:12.180203 1 shared_informer.go:247] Caches are synced for persistent volume
I0915 20:39:12.197949 1 shared_informer.go:247] Caches are synced for taint
I0915 20:39:12.204418 1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone:
W0915 20:39:12.205182 1 node_lifecycle_controller.go:1013] Missing timestamp for Node pause-20210915203607-209669. Assuming now as a timestamp.
I0915 20:39:12.205788 1 node_lifecycle_controller.go:1214] Controller detected that zone is now in state Normal.
I0915 20:39:12.198621 1 shared_informer.go:247] Caches are synced for job
I0915 20:39:12.208533 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
I0915 20:39:12.209231 1 event.go:291] "Event occurred" object="pause-20210915203607-209669" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20210915203607-209669 event: Registered Node pause-20210915203607-209669 in Controller"
I0915 20:39:12.210628 1 shared_informer.go:247] Caches are synced for TTL
I0915 20:39:12.213199 1 shared_informer.go:247] Caches are synced for daemon sets
I0915 20:39:12.213394 1 shared_informer.go:247] Caches are synced for stateful set
I0915 20:39:12.246424 1 shared_informer.go:247] Caches are synced for disruption
I0915 20:39:12.246761 1 disruption.go:371] Sending events to api server.
I0915 20:39:12.249390 1 shared_informer.go:247] Caches are synced for namespace
I0915 20:39:12.262679 1 shared_informer.go:247] Caches are synced for ReplicationController
I0915 20:39:12.266825 1 shared_informer.go:247] Caches are synced for service account
I0915 20:39:12.317274 1 shared_informer.go:247] Caches are synced for endpoint_slice
I0915 20:39:12.328413 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I0915 20:39:12.360351 1 shared_informer.go:247] Caches are synced for resource quota
I0915 20:39:12.361936 1 shared_informer.go:247] Caches are synced for resource quota
I0915 20:39:12.774275 1 shared_informer.go:247] Caches are synced for garbage collector
I0915 20:39:12.790316 1 shared_informer.go:247] Caches are synced for garbage collector
I0915 20:39:12.790909 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-controller-manager [d1b803c24aa9a7625e950de523ac9acb1542a7ade553e3342ff89baeddea2b9a] <==
*
*
* ==> kube-proxy [b3d34c6906044e0bb2166190ddfa4f3ddd704379114a54aa1c3cc3ea0a72fe3e] <==
* I0915 20:39:01.023705 1 node.go:172] Successfully retrieved node IP: 192.168.39.238
I0915 20:39:01.023880 1 server_others.go:140] Detected node IP 192.168.39.238
W0915 20:39:01.023911 1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
W0915 20:39:01.124679 1 server_others.go:197] No iptables support for IPv6: exit status 3
I0915 20:39:01.124710 1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
I0915 20:39:01.124737 1 server_others.go:212] Using iptables Proxier.
I0915 20:39:01.125163 1 server.go:649] Version: v1.22.1
I0915 20:39:01.127379 1 config.go:224] Starting endpoint slice config controller
I0915 20:39:01.127403 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0915 20:39:01.127663 1 config.go:315] Starting service config controller
I0915 20:39:01.127675 1 shared_informer.go:240] Waiting for caches to sync for service config
E0915 20:39:01.143719 1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pause-20210915203607-209669.16a519603d324931", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc048b45947817993, ext:272199124, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-pause-20210915203607-209669", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"pause-20210915203607-209669", UID:"pause-20210915203607-209669", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "pause-20210915203607-209669.16a519603d324931" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
I0915 20:39:01.228746 1 shared_informer.go:247] Caches are synced for service config
I0915 20:39:01.229398 1 shared_informer.go:247] Caches are synced for endpoint slice config
*
* ==> kube-proxy [d0a53108ac9c335a7f9f23fe4fc0cacebb805ce2e8e234e45d65384428b0269c] <==
*
*
* ==> kube-scheduler [7abb377ea5dc05445a5bfd37455c97259a989e3a6eed46985b60a8a749e085d4] <==
* I0915 20:38:54.721417 1 serving.go:347] Generated self-signed cert in-memory
W0915 20:38:59.267308 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0915 20:38:59.267801 1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0915 20:38:59.270845 1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
W0915 20:38:59.271186 1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0915 20:38:59.397213 1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
I0915 20:38:59.397821 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0915 20:38:59.406066 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0915 20:38:59.397889 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
E0915 20:38:59.503593 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E0915 20:38:59.507979 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E0915 20:38:59.508615 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E0915 20:38:59.509061 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E0915 20:38:59.509275 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E0915 20:38:59.512010 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
I0915 20:38:59.611620 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [92765dba236c71d174f12db9fb303c5e8e28954e4784730673a1ba475c61e593] <==
*
*
* ==> kubelet <==
* -- Journal begins at Wed 2021-09-15 20:36:18 UTC, ends at Wed 2021-09-15 20:39:22 UTC. --
Sep 15 20:38:58 pause-20210915203607-209669 kubelet[4598]: E0915 20:38:58.802559 4598 kubelet.go:2407] "Error getting node" err="node \"pause-20210915203607-209669\" not found"
Sep 15 20:38:58 pause-20210915203607-209669 kubelet[4598]: E0915 20:38:58.903413 4598 kubelet.go:2407] "Error getting node" err="node \"pause-20210915203607-209669\" not found"
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: E0915 20:38:59.004855 4598 kubelet.go:2407] "Error getting node" err="node \"pause-20210915203607-209669\" not found"
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: E0915 20:38:59.105070 4598 kubelet.go:2407] "Error getting node" err="node \"pause-20210915203607-209669\" not found"
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: E0915 20:38:59.208031 4598 kubelet.go:2407] "Error getting node" err="node \"pause-20210915203607-209669\" not found"
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: E0915 20:38:59.309041 4598 kubelet.go:2407] "Error getting node" err="node \"pause-20210915203607-209669\" not found"
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: I0915 20:38:59.376006 4598 apiserver.go:52] "Watching apiserver"
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: I0915 20:38:59.410567 4598 kuberuntime_manager.go:1075] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: I0915 20:38:59.414953 4598 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: I0915 20:38:59.417339 4598 topology_manager.go:200] "Topology Admit Handler"
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: I0915 20:38:59.417852 4598 topology_manager.go:200] "Topology Admit Handler"
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: I0915 20:38:59.537710 4598 kubelet_node_status.go:109] "Node was previously registered" node="pause-20210915203607-209669"
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: I0915 20:38:59.540044 4598 kubelet_node_status.go:74] "Successfully registered node" node="pause-20210915203607-209669"
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: I0915 20:38:59.607697 4598 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ad260f2-128d-46b8-9f0c-33929b1c2e24-config-volume\") pod \"coredns-78fcd69978-stp22\" (UID: \"3ad260f2-128d-46b8-9f0c-33929b1c2e24\") "
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: I0915 20:38:59.608076 4598 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hrwv\" (UniqueName: \"kubernetes.io/projected/cd8c788e-8ca0-46be-be18-92e8ff747405-kube-api-access-7hrwv\") pod \"kube-proxy-knsd4\" (UID: \"cd8c788e-8ca0-46be-be18-92e8ff747405\") "
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: I0915 20:38:59.608276 4598 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cd8c788e-8ca0-46be-be18-92e8ff747405-kube-proxy\") pod \"kube-proxy-knsd4\" (UID: \"cd8c788e-8ca0-46be-be18-92e8ff747405\") "
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: I0915 20:38:59.608425 4598 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd8c788e-8ca0-46be-be18-92e8ff747405-xtables-lock\") pod \"kube-proxy-knsd4\" (UID: \"cd8c788e-8ca0-46be-be18-92e8ff747405\") "
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: I0915 20:38:59.608678 4598 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd8c788e-8ca0-46be-be18-92e8ff747405-lib-modules\") pod \"kube-proxy-knsd4\" (UID: \"cd8c788e-8ca0-46be-be18-92e8ff747405\") "
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: I0915 20:38:59.608826 4598 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbv97\" (UniqueName: \"kubernetes.io/projected/3ad260f2-128d-46b8-9f0c-33929b1c2e24-kube-api-access-sbv97\") pod \"coredns-78fcd69978-stp22\" (UID: \"3ad260f2-128d-46b8-9f0c-33929b1c2e24\") "
Sep 15 20:38:59 pause-20210915203607-209669 kubelet[4598]: I0915 20:38:59.608957 4598 reconciler.go:157] "Reconciler: start to sync state"
Sep 15 20:39:00 pause-20210915203607-209669 kubelet[4598]: I0915 20:39:00.325089 4598 scope.go:110] "RemoveContainer" containerID="d0a53108ac9c335a7f9f23fe4fc0cacebb805ce2e8e234e45d65384428b0269c"
Sep 15 20:39:03 pause-20210915203607-209669 kubelet[4598]: I0915 20:39:03.821570 4598 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
Sep 15 20:39:13 pause-20210915203607-209669 kubelet[4598]: I0915 20:39:13.885835 4598 topology_manager.go:200] "Topology Admit Handler"
Sep 15 20:39:14 pause-20210915203607-209669 kubelet[4598]: I0915 20:39:14.050059 4598 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/42831a1f-205d-4116-b49c-dbc188c15aa2-tmp\") pod \"storage-provisioner\" (UID: \"42831a1f-205d-4116-b49c-dbc188c15aa2\") "
Sep 15 20:39:14 pause-20210915203607-209669 kubelet[4598]: I0915 20:39:14.050401 4598 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl7c6\" (UniqueName: \"kubernetes.io/projected/42831a1f-205d-4116-b49c-dbc188c15aa2-kube-api-access-gl7c6\") pod \"storage-provisioner\" (UID: \"42831a1f-205d-4116-b49c-dbc188c15aa2\") "
*
* ==> storage-provisioner [962ec633e8582d72ec61c1683b04913be4c850a3e5327e410d6eb9c97293b4d5] <==
* I0915 20:39:15.079902 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0915 20:39:15.099727 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0915 20:39:15.099977 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0915 20:39:15.124859 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0915 20:39:15.125647 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20210915203607-209669_cd11252b-f232-4fcc-9951-7beddc5db04d!
I0915 20:39:15.126945 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"45f02105-eeeb-47da-97e2-fd20e8dca1a2", APIVersion:"v1", ResourceVersion:"588", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20210915203607-209669_cd11252b-f232-4fcc-9951-7beddc5db04d became leader
I0915 20:39:15.242575 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20210915203607-209669_cd11252b-f232-4fcc-9951-7beddc5db04d!
-- /stdout --
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20210915203607-209669 -n pause-20210915203607-209669
helpers_test.go:262: (dbg) Run: kubectl --context pause-20210915203607-209669 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods:
helpers_test.go:273: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: describe non-running pods <======
helpers_test.go:276: (dbg) Run: kubectl --context pause-20210915203607-209669 describe pod
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context pause-20210915203607-209669 describe pod : exit status 1 (76.854898ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:278: kubectl --context pause-20210915203607-209669 describe pod : exit status 1
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (75.63s)