=== RUN TestPause/serial/SecondStartNoReconfiguration
pause_test.go:89: (dbg) Run: out/minikube-linux-amd64 start -p pause-20210816222224-6986 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=containerd
E0816 22:24:31.879151 6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
=== CONT TestPause/serial/SecondStartNoReconfiguration
pause_test.go:89: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210816222224-6986 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=containerd: (45.31906007s)
pause_test.go:97: expected the second start log output to include "The running cluster does not require reconfiguration" but got:
-- stdout --
* [pause-20210816222224-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
- MINIKUBE_LOCATION=12230
* Using the kvm2 driver based on existing profile
* Starting control plane node pause-20210816222224-6986 in cluster pause-20210816222224-6986
* Updating the running kvm2 "pause-20210816222224-6986" VM ...
* Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "pause-20210816222224-6986" cluster and "default" namespace by default
-- /stdout --
** stderr **
I0816 22:24:28.350080 10732 out.go:298] Setting OutFile to fd 1 ...
I0816 22:24:28.350178 10732 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0816 22:24:28.350184 10732 out.go:311] Setting ErrFile to fd 2...
I0816 22:24:28.350188 10732 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0816 22:24:28.350318 10732 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
I0816 22:24:28.350597 10732 out.go:305] Setting JSON to false
I0816 22:24:28.397522 10732 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":4030,"bootTime":1629148638,"procs":186,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
I0816 22:24:28.397643 10732 start.go:121] virtualization: kvm guest
I0816 22:24:28.400445 10732 out.go:177] * [pause-20210816222224-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
I0816 22:24:28.402081 10732 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
I0816 22:24:28.400587 10732 notify.go:169] Checking for updates...
I0816 22:24:28.403507 10732 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0816 22:24:28.405334 10732 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
I0816 22:24:28.406829 10732 out.go:177] - MINIKUBE_LOCATION=12230
I0816 22:24:28.407374 10732 config.go:177] Loaded profile config "pause-20210816222224-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
I0816 22:24:28.408014 10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0816 22:24:28.408073 10732 main.go:130] libmachine: Launching plugin server for driver kvm2
I0816 22:24:28.422487 10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33721
I0816 22:24:28.423359 10732 main.go:130] libmachine: () Calling .GetVersion
I0816 22:24:28.423998 10732 main.go:130] libmachine: Using API Version 1
I0816 22:24:28.424017 10732 main.go:130] libmachine: () Calling .SetConfigRaw
I0816 22:24:28.424398 10732 main.go:130] libmachine: () Calling .GetMachineName
I0816 22:24:28.424556 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
I0816 22:24:28.424720 10732 driver.go:335] Setting default libvirt URI to qemu:///system
I0816 22:24:28.425044 10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0816 22:24:28.425081 10732 main.go:130] libmachine: Launching plugin server for driver kvm2
I0816 22:24:28.437710 10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43927
I0816 22:24:28.438223 10732 main.go:130] libmachine: () Calling .GetVersion
I0816 22:24:28.438755 10732 main.go:130] libmachine: Using API Version 1
I0816 22:24:28.438782 10732 main.go:130] libmachine: () Calling .SetConfigRaw
I0816 22:24:28.439150 10732 main.go:130] libmachine: () Calling .GetMachineName
I0816 22:24:28.439314 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
I0816 22:24:28.472803 10732 out.go:177] * Using the kvm2 driver based on existing profile
I0816 22:24:28.472832 10732 start.go:278] selected driver: kvm2
I0816 22:24:28.472838 10732 start.go:751] validating driver "kvm2" against &{Name:pause-20210816222224-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210816222224-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.226 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
I0816 22:24:28.472976 10732 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0816 22:24:28.473768 10732 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0816 22:24:28.473947 10732 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0816 22:24:28.485649 10732 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.22.0
I0816 22:24:28.486587 10732 cni.go:93] Creating CNI manager for ""
I0816 22:24:28.486610 10732 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
I0816 22:24:28.486620 10732 start_flags.go:277] config:
{Name:pause-20210816222224-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210816222224-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.226 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
I0816 22:24:28.486751 10732 iso.go:123] acquiring lock: {Name:mk4d96b7e9f76537548b4828641f235ae6b81a3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0816 22:24:28.488623 10732 out.go:177] * Starting control plane node pause-20210816222224-6986 in cluster pause-20210816222224-6986
I0816 22:24:28.488645 10732 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
I0816 22:24:28.488668 10732 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
I0816 22:24:28.488682 10732 cache.go:56] Caching tarball of preloaded images
I0816 22:24:28.488785 10732 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0816 22:24:28.488809 10732 cache.go:59] Finished verifying existence of preloaded tar for v1.21.3 on containerd
I0816 22:24:28.488949 10732 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/config.json ...
I0816 22:24:28.489143 10732 cache.go:205] Successfully downloaded all kic artifacts
I0816 22:24:28.489168 10732 start.go:313] acquiring machines lock for pause-20210816222224-6986: {Name:mk808edd60d1305a42bb85791729eff4573dbb15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0816 22:24:32.433403 10732 start.go:317] acquired machines lock for "pause-20210816222224-6986" in 3.944203848s
I0816 22:24:32.433444 10732 start.go:93] Skipping create...Using existing machine configuration
I0816 22:24:32.433452 10732 fix.go:55] fixHost starting:
I0816 22:24:32.433902 10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0816 22:24:32.433953 10732 main.go:130] libmachine: Launching plugin server for driver kvm2
I0816 22:24:32.448295 10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43587
I0816 22:24:32.448711 10732 main.go:130] libmachine: () Calling .GetVersion
I0816 22:24:32.449167 10732 main.go:130] libmachine: Using API Version 1
I0816 22:24:32.449191 10732 main.go:130] libmachine: () Calling .SetConfigRaw
I0816 22:24:32.449587 10732 main.go:130] libmachine: () Calling .GetMachineName
I0816 22:24:32.449791 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
I0816 22:24:32.449957 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetState
I0816 22:24:32.453151 10732 fix.go:108] recreateIfNeeded on pause-20210816222224-6986: state=Running err=<nil>
W0816 22:24:32.453194 10732 fix.go:134] unexpected machine state, will restart: <nil>
I0816 22:24:32.498470 10732 out.go:177] * Updating the running kvm2 "pause-20210816222224-6986" VM ...
I0816 22:24:32.498528 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
I0816 22:24:32.498781 10732 machine.go:88] provisioning docker machine ...
I0816 22:24:32.498811 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
I0816 22:24:32.499018 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetMachineName
I0816 22:24:32.499197 10732 buildroot.go:166] provisioning hostname "pause-20210816222224-6986"
I0816 22:24:32.499216 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetMachineName
I0816 22:24:32.499398 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
I0816 22:24:32.504997 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:24:32.505377 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
I0816 22:24:32.505411 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:24:32.505506 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
I0816 22:24:32.505645 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
I0816 22:24:32.505780 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
I0816 22:24:32.505938 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
I0816 22:24:32.506090 10732 main.go:130] libmachine: Using SSH client type: native
I0816 22:24:32.506237 10732 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil> [] 0s} 192.168.50.226 22 <nil> <nil>}
I0816 22:24:32.506251 10732 main.go:130] libmachine: About to run SSH command:
sudo hostname pause-20210816222224-6986 && echo "pause-20210816222224-6986" | sudo tee /etc/hostname
I0816 22:24:32.654222 10732 main.go:130] libmachine: SSH cmd err, output: <nil>: pause-20210816222224-6986
I0816 22:24:32.654253 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
I0816 22:24:32.659650 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:24:32.659996 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
I0816 22:24:32.660024 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:24:32.660223 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
I0816 22:24:32.660420 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
I0816 22:24:32.660598 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
I0816 22:24:32.660744 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
I0816 22:24:32.660930 10732 main.go:130] libmachine: Using SSH client type: native
I0816 22:24:32.661075 10732 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil> [] 0s} 192.168.50.226 22 <nil> <nil>}
I0816 22:24:32.661094 10732 main.go:130] libmachine: About to run SSH command:
if ! grep -xq '.*\spause-20210816222224-6986' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20210816222224-6986/g' /etc/hosts;
  else
    echo '127.0.1.1 pause-20210816222224-6986' | sudo tee -a /etc/hosts;
  fi
fi
I0816 22:24:32.775636 10732 main.go:130] libmachine: SSH cmd err, output: <nil>:
I0816 22:24:32.775672 10732 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
I0816 22:24:32.775709 10732 buildroot.go:174] setting up certificates
I0816 22:24:32.775724 10732 provision.go:83] configureAuth start
I0816 22:24:32.775739 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetMachineName
I0816 22:24:32.776029 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetIP
I0816 22:24:32.781138 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:24:32.781410 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
I0816 22:24:32.781440 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:24:32.781566 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
I0816 22:24:32.785841 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:24:32.786169 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
I0816 22:24:32.786197 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:24:32.786258 10732 provision.go:138] copyHostCerts
I0816 22:24:32.786331 10732 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
I0816 22:24:32.786340 10732 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
I0816 22:24:32.786389 10732 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
I0816 22:24:32.786478 10732 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
I0816 22:24:32.786489 10732 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
I0816 22:24:32.786511 10732 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1675 bytes)
I0816 22:24:32.786585 10732 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
I0816 22:24:32.786596 10732 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
I0816 22:24:32.786615 10732 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
I0816 22:24:32.786689 10732 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.pause-20210816222224-6986 san=[192.168.50.226 192.168.50.226 localhost 127.0.0.1 minikube pause-20210816222224-6986]
I0816 22:24:32.861088 10732 provision.go:172] copyRemoteCerts
I0816 22:24:32.861140 10732 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0816 22:24:32.861160 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
I0816 22:24:32.866155 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:24:32.866454 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
I0816 22:24:32.866484 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:24:32.866640 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
I0816 22:24:32.866811 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
I0816 22:24:32.866962 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
I0816 22:24:32.867131 10732 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210816222224-6986/id_rsa Username:docker}
I0816 22:24:32.959873 10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0816 22:24:32.978489 10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
I0816 22:24:32.998817 10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0816 22:24:33.017278 10732 provision.go:86] duration metric: configureAuth took 241.540897ms
I0816 22:24:33.017298 10732 buildroot.go:189] setting minikube options for container-runtime
I0816 22:24:33.017455 10732 config.go:177] Loaded profile config "pause-20210816222224-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
I0816 22:24:33.017470 10732 machine.go:91] provisioned docker machine in 518.670607ms
I0816 22:24:33.017479 10732 start.go:267] post-start starting for "pause-20210816222224-6986" (driver="kvm2")
I0816 22:24:33.017487 10732 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0816 22:24:33.017515 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
I0816 22:24:33.017803 10732 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0816 22:24:33.017835 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
I0816 22:24:33.023174 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:24:33.023519 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
I0816 22:24:33.023540 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:24:33.023716 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
I0816 22:24:33.023865 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
I0816 22:24:33.023996 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
I0816 22:24:33.024116 10732 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210816222224-6986/id_rsa Username:docker}
I0816 22:24:33.112122 10732 ssh_runner.go:149] Run: cat /etc/os-release
I0816 22:24:33.117676 10732 info.go:137] Remote host: Buildroot 2020.02.12
I0816 22:24:33.117704 10732 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
I0816 22:24:33.117767 10732 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
I0816 22:24:33.117906 10732 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem -> 69862.pem in /etc/ssl/certs
I0816 22:24:33.118027 10732 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
I0816 22:24:33.126187 10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /etc/ssl/certs/69862.pem (1708 bytes)
I0816 22:24:33.144173 10732 start.go:270] post-start completed in 126.681216ms
I0816 22:24:33.144218 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
I0816 22:24:33.144463 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
I0816 22:24:33.150202 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:24:33.150521 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
I0816 22:24:33.150577 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:24:33.150673 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
I0816 22:24:33.150828 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
I0816 22:24:33.151004 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
I0816 22:24:33.151164 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
I0816 22:24:33.151325 10732 main.go:130] libmachine: Using SSH client type: native
I0816 22:24:33.151506 10732 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil> [] 0s} 192.168.50.226 22 <nil> <nil>}
I0816 22:24:33.151523 10732 main.go:130] libmachine: About to run SSH command:
date +%s.%N
I0816 22:24:33.272329 10732 main.go:130] libmachine: SSH cmd err, output: <nil>: 1629152673.272261761
I0816 22:24:33.272351 10732 fix.go:212] guest clock: 1629152673.272261761
I0816 22:24:33.272366 10732 fix.go:225] Guest: 2021-08-16 22:24:33.272261761 +0000 UTC Remote: 2021-08-16 22:24:33.144446757 +0000 UTC m=+4.850018396 (delta=127.815004ms)
I0816 22:24:33.272386 10732 fix.go:196] guest clock delta is within tolerance: 127.815004ms
I0816 22:24:33.272393 10732 fix.go:57] fixHost completed within 838.941925ms
I0816 22:24:33.272399 10732 start.go:80] releasing machines lock for "pause-20210816222224-6986", held for 838.968464ms
I0816 22:24:33.272434 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
I0816 22:24:33.272656 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetIP
I0816 22:24:33.277736 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:24:33.278030 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
I0816 22:24:33.278065 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:24:33.278165 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
I0816 22:24:33.278332 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
I0816 22:24:33.278753 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
I0816 22:24:33.278995 10732 ssh_runner.go:149] Run: systemctl --version
I0816 22:24:33.279010 10732 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0816 22:24:33.279024 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
I0816 22:24:33.279040 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
I0816 22:24:33.285116 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:24:33.285541 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
I0816 22:24:33.285593 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:24:33.285633 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
I0816 22:24:33.285788 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
I0816 22:24:33.285924 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
I0816 22:24:33.286058 10732 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210816222224-6986/id_rsa Username:docker}
I0816 22:24:33.286534 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:24:33.286892 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
I0816 22:24:33.286927 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:24:33.287075 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
I0816 22:24:33.287218 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
I0816 22:24:33.287346 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
I0816 22:24:33.287452 10732 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210816222224-6986/id_rsa Username:docker}
I0816 22:24:33.391442 10732 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
I0816 22:24:33.391556 10732 ssh_runner.go:149] Run: sudo crictl images --output json
I0816 22:24:33.445030 10732 containerd.go:613] all images are preloaded for containerd runtime.
I0816 22:24:33.445056 10732 containerd.go:517] Images already preloaded, skipping extraction
I0816 22:24:33.445118 10732 ssh_runner.go:149] Run: sudo systemctl stop -f crio
I0816 22:24:33.458484 10732 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0816 22:24:33.469602 10732 docker.go:153] disabling docker service ...
I0816 22:24:33.469659 10732 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
I0816 22:24:33.482924 10732 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
I0816 22:24:33.494359 10732 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
I0816 22:24:33.656831 10732 ssh_runner.go:149] Run: sudo systemctl mask docker.service
I0816 22:24:33.841381 10732 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
I0816 22:24:33.852674 10732 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0816 22:24:33.865658 10732 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
I0816 22:24:33.879242 10732 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0816 22:24:33.885420 10732 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0816 22:24:33.892178 10732 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0816 22:24:34.088915 10732 ssh_runner.go:149] Run: sudo systemctl restart containerd
I0816 22:24:34.158425 10732 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
I0816 22:24:34.158486 10732 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
I0816 22:24:34.164509 10732 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
I0816 22:24:35.269953 10732 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
I0816 22:24:35.276600 10732 retry.go:31] will retry after 2.160763633s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
I0816 22:24:37.438899 10732 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
I0816 22:24:37.445942 10732 start.go:413] Will wait 60s for crictl version
I0816 22:24:37.446003 10732 ssh_runner.go:149] Run: sudo crictl version
I0816 22:24:37.491525 10732 start.go:422] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.4.9
RuntimeApiVersion: v1alpha2
I0816 22:24:37.491588 10732 ssh_runner.go:149] Run: containerd --version
I0816 22:24:37.542241 10732 ssh_runner.go:149] Run: containerd --version
I0816 22:24:37.734298 10732 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
I0816 22:24:37.734354 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetIP
I0816 22:24:37.741113 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:24:37.741570 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
I0816 22:24:37.741607 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:24:37.741923 10732 ssh_runner.go:149] Run: grep 192.168.50.1 host.minikube.internal$ /etc/hosts
I0816 22:24:37.748812 10732 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
I0816 22:24:37.748875 10732 ssh_runner.go:149] Run: sudo crictl images --output json
I0816 22:24:37.796309 10732 containerd.go:613] all images are preloaded for containerd runtime.
I0816 22:24:37.796331 10732 containerd.go:517] Images already preloaded, skipping extraction
I0816 22:24:37.796387 10732 ssh_runner.go:149] Run: sudo crictl images --output json
I0816 22:24:37.894057 10732 containerd.go:613] all images are preloaded for containerd runtime.
I0816 22:24:37.894088 10732 cache_images.go:74] Images are preloaded, skipping loading
I0816 22:24:37.894144 10732 ssh_runner.go:149] Run: sudo crictl info
I0816 22:24:37.958935 10732 cni.go:93] Creating CNI manager for ""
I0816 22:24:37.958981 10732 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
I0816 22:24:37.958992 10732 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0816 22:24:37.959007 10732 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.226 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20210816222224-6986 NodeName:pause-20210816222224-6986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.226"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.50.226 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0816 22:24:37.959160 10732 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.50.226
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "pause-20210816222224-6986"
  kubeletExtraArgs:
    node-ip: 192.168.50.226
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.50.226"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.21.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0816 22:24:37.959268 10732 kubeadm.go:909] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=pause-20210816222224-6986 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.226 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.21.3 ClusterName:pause-20210816222224-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0816 22:24:37.959328 10732 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
I0816 22:24:37.985980 10732 binaries.go:44] Found k8s binaries, skipping transfer
I0816 22:24:37.986061 10732 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0816 22:24:37.996940 10732 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (541 bytes)
I0816 22:24:38.017786 10732 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0816 22:24:38.048096 10732 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2081 bytes)
I0816 22:24:38.071876 10732 ssh_runner.go:149] Run: grep 192.168.50.226 control-plane.minikube.internal$ /etc/hosts
I0816 22:24:38.079162 10732 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986 for IP: 192.168.50.226
I0816 22:24:38.079218 10732 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
I0816 22:24:38.079238 10732 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
I0816 22:24:38.079308 10732 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.key
I0816 22:24:38.079331 10732 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/apiserver.key.5cb43a24
I0816 22:24:38.079351 10732 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/proxy-client.key
I0816 22:24:38.079516 10732 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem (1338 bytes)
W0816 22:24:38.079572 10732 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986_empty.pem, impossibly tiny 0 bytes
I0816 22:24:38.079585 10732 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1675 bytes)
I0816 22:24:38.079623 10732 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
I0816 22:24:38.079673 10732 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
I0816 22:24:38.079714 10732 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1675 bytes)
I0816 22:24:38.079784 10732 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem (1708 bytes)
I0816 22:24:38.081082 10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0816 22:24:38.125511 10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0816 22:24:38.161327 10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0816 22:24:38.200447 10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0816 22:24:38.228844 10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0816 22:24:38.316480 10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0816 22:24:38.353700 10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0816 22:24:38.400715 10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0816 22:24:38.453300 10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem --> /usr/share/ca-certificates/6986.pem (1338 bytes)
I0816 22:24:38.483284 10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /usr/share/ca-certificates/69862.pem (1708 bytes)
I0816 22:24:38.518787 10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0816 22:24:38.549501 10732 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0816 22:24:38.577939 10732 ssh_runner.go:149] Run: openssl version
I0816 22:24:38.593224 10732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6986.pem && ln -fs /usr/share/ca-certificates/6986.pem /etc/ssl/certs/6986.pem"
I0816 22:24:38.657361 10732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6986.pem
I0816 22:24:38.673607 10732 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:49 /usr/share/ca-certificates/6986.pem
I0816 22:24:38.673678 10732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6986.pem
I0816 22:24:38.702412 10732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6986.pem /etc/ssl/certs/51391683.0"
I0816 22:24:38.726832 10732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69862.pem && ln -fs /usr/share/ca-certificates/69862.pem /etc/ssl/certs/69862.pem"
I0816 22:24:38.759604 10732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/69862.pem
I0816 22:24:38.776280 10732 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:49 /usr/share/ca-certificates/69862.pem
I0816 22:24:38.776363 10732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69862.pem
I0816 22:24:38.809703 10732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69862.pem /etc/ssl/certs/3ec20f2e.0"
I0816 22:24:38.843479 10732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0816 22:24:38.879490 10732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0816 22:24:38.907339 10732 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:42 /usr/share/ca-certificates/minikubeCA.pem
I0816 22:24:38.907406 10732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0816 22:24:38.932662 10732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
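The test-and-link sequence above is how each CA becomes discoverable to OpenSSL-based clients inside the VM: minikube computes the certificate's subject hash with openssl x509 -hash and symlinks the PEM into /etc/ssl/certs as <hash>.0, which is why minikubeCA.pem lands at b5213941.0. A minimal local Go sketch of that step, assuming passwordless sudo and an openssl binary on PATH (installCA is an illustrative name, not minikube's actual helper):

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// installCA links certPath into /etc/ssl/certs under its OpenSSL
// subject hash, mirroring the "openssl x509 -hash" + "ln -fs"
// sequence in the log above.
func installCA(certPath string) error {
    // Ask openssl for the subject-name hash it uses for CA lookup.
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    if err != nil {
        return fmt.Errorf("hashing %s: %w", certPath, err)
    }
    link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
    // Create the <hash>.0 symlink unless one is already in place.
    cmd := fmt.Sprintf("test -L %s || ln -fs %s %s", link, certPath, link)
    return exec.Command("sudo", "/bin/bash", "-c", cmd).Run()
}

func main() {
    if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
        fmt.Println(err)
    }
}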
I0816 22:24:38.972753 10732 kubeadm.go:390] StartCluster: {Name:pause-20210816222224-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210816222224-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.226 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
I0816 22:24:38.972868 10732 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0816 22:24:38.972930 10732 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0816 22:24:39.209329 10732 cri.go:76] found id: "28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0"
I0816 22:24:39.209356 10732 cri.go:76] found id: "a8503bd796d5d979a6e1b8b5154986e8b77de391b4f091211451ea5f52808e52"
I0816 22:24:39.209363 10732 cri.go:76] found id: "124fa393359f758ea47161b345d2cab4ce486d4473a4caad483449464d44315f"
I0816 22:24:39.209369 10732 cri.go:76] found id: "8710cefecdbe5d31cd44e9ae3378bc08cbc56001326a1cb38026755196cac7d1"
I0816 22:24:39.209376 10732 cri.go:76] found id: "7dbd1fc92c3753c4757164cea54536cf62394c731fffd8fb94124eea6f32138f"
I0816 22:24:39.209383 10732 cri.go:76] found id: "38dc61b214a9cbd019de4ca9ab52fb6baf728336de6d715df22b027522ad8b20"
I0816 22:24:39.209388 10732 cri.go:76] found id: ""
I0816 22:24:39.209439 10732 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I0816 22:24:39.312395 10732 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d","pid":4260,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d/rootfs","created":"2021-08-16T22:24:38.416064875Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb","pid":4355,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb/rootfs","created":"2021-08-16T22:24:38.822262087Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03","pid":4283,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03/rootfs","created":"2021-08-16T22:24:38.475165162Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c"},"owner":"root"}]
I0816 22:24:39.312540 10732 cri.go:113] list returned 3 containers
I0816 22:24:39.312557 10732 cri.go:116] container: {ID:1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d Status:running}
I0816 22:24:39.312575 10732 cri.go:118] skipping 1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d - not in ps
I0816 22:24:39.312586 10732 cri.go:116] container: {ID:3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb Status:created}
I0816 22:24:39.312595 10732 cri.go:118] skipping 3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb - not in ps
I0816 22:24:39.312600 10732 cri.go:116] container: {ID:feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03 Status:created}
I0816 22:24:39.312608 10732 cri.go:118] skipping feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03 - not in ps
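cri.go is reconciling two views of the runtime here: the container IDs crictl reports for the kube-system namespace, and the tasks that runc list -f json returns; runc tasks missing from the crictl set (the three sandboxes above) are skipped. A sketch of that reconciliation, modeling only the two JSON fields the log actually uses (struct and function names are illustrative):

package main

import (
    "encoding/json"
    "fmt"
)

// task models the fields of `runc list -f json` output used above.
type task struct {
    ID     string `json:"id"`
    Status string `json:"status"`
}

// reconcile keeps only the runc task IDs that crictl also reported,
// printing the same "skipping <id> - not in ps" message as the log.
func reconcile(runcJSON []byte, crictlIDs []string) ([]string, error) {
    var tasks []task
    if err := json.Unmarshal(runcJSON, &tasks); err != nil {
        return nil, err
    }
    known := make(map[string]bool, len(crictlIDs))
    for _, id := range crictlIDs {
        known[id] = true
    }
    var keep []string
    for _, t := range tasks {
        if !known[t.ID] {
            fmt.Printf("skipping %s - not in ps\n", t.ID)
            continue
        }
        keep = append(keep, t.ID)
    }
    return keep, nil
}

func main() {
    j := []byte(`[{"id":"abc","status":"running"},{"id":"def","status":"created"}]`)
    ids, _ := reconcile(j, []string{"abc"})
    fmt.Println(ids)
}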
I0816 22:24:39.312654 10732 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0816 22:24:39.330613 10732 kubeadm.go:401] found existing configuration files, will attempt cluster restart
I0816 22:24:39.330640 10732 kubeadm.go:600] restartCluster start
I0816 22:24:39.330714 10732 ssh_runner.go:149] Run: sudo test -d /data/minikube
I0816 22:24:39.355170 10732 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0816 22:24:39.356620 10732 kubeconfig.go:93] found "pause-20210816222224-6986" server: "https://192.168.50.226:8443"
I0816 22:24:39.357673 10732 kapi.go:59] client config for pause-20210816222224-6986: &rest.Config{Host:"https://192.168.50.226:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0816 22:24:39.360056 10732 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0816 22:24:39.378189 10732 api_server.go:164] Checking apiserver status ...
I0816 22:24:39.378254 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0816 22:24:39.397531 10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0816 22:24:39.597857 10732 api_server.go:164] Checking apiserver status ...
I0816 22:24:39.597937 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0816 22:24:39.612958 10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0816 22:24:39.798158 10732 api_server.go:164] Checking apiserver status ...
I0816 22:24:39.798265 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0816 22:24:39.813430 10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0816 22:24:39.997695 10732 api_server.go:164] Checking apiserver status ...
I0816 22:24:39.997763 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0816 22:24:40.010358 10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0816 22:24:40.198643 10732 api_server.go:164] Checking apiserver status ...
I0816 22:24:40.198727 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0816 22:24:40.214548 10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0816 22:24:40.397717 10732 api_server.go:164] Checking apiserver status ...
I0816 22:24:40.397785 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0816 22:24:40.411349 10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0816 22:24:40.598597 10732 api_server.go:164] Checking apiserver status ...
I0816 22:24:40.598668 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0816 22:24:40.615618 10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0816 22:24:40.797999 10732 api_server.go:164] Checking apiserver status ...
I0816 22:24:40.798110 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0816 22:24:40.810917 10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0816 22:24:40.998241 10732 api_server.go:164] Checking apiserver status ...
I0816 22:24:40.998309 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0816 22:24:41.008263 10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0816 22:24:41.198400 10732 api_server.go:164] Checking apiserver status ...
I0816 22:24:41.198478 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0816 22:24:41.209207 10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0816 22:24:41.398411 10732 api_server.go:164] Checking apiserver status ...
I0816 22:24:41.398495 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0816 22:24:41.409500 10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0816 22:24:41.597769 10732 api_server.go:164] Checking apiserver status ...
I0816 22:24:41.597829 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0816 22:24:41.611018 10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0816 22:24:41.798345 10732 api_server.go:164] Checking apiserver status ...
I0816 22:24:41.798446 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0816 22:24:41.811463 10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0816 22:24:41.997819 10732 api_server.go:164] Checking apiserver status ...
I0816 22:24:41.997895 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0816 22:24:42.012473 10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0816 22:24:42.197709 10732 api_server.go:164] Checking apiserver status ...
I0816 22:24:42.197799 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0816 22:24:42.210959 10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0816 22:24:42.398113 10732 api_server.go:164] Checking apiserver status ...
I0816 22:24:42.398196 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0816 22:24:42.408454 10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0816 22:24:42.408483 10732 api_server.go:164] Checking apiserver status ...
I0816 22:24:42.408544 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0816 22:24:42.417898 10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0816 22:24:42.417958 10732 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
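The block above is a fixed-interval poll: every ~200ms minikube re-runs pgrep for a kube-apiserver process until one appears or the deadline passes, and because the old control plane never came back on its own, every probe exits 1 and the loop gives up with "needs reconfigure". A sketch of that polling pattern, run locally rather than over SSH as minikube does (interval and timeout values are illustrative):

package main

import (
    "errors"
    "fmt"
    "os/exec"
    "time"
)

// waitForAPIServerPID polls pgrep until kube-apiserver shows up or
// the timeout expires, mirroring the retry loop in the log above.
func waitForAPIServerPID(timeout time.Duration) (string, error) {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err == nil {
            return string(out), nil // found a PID
        }
        time.Sleep(200 * time.Millisecond) // matches the ~200ms cadence above
    }
    return "", errors.New("timed out waiting for the condition")
}

func main() {
    pid, err := waitForAPIServerPID(3 * time.Second)
    if err != nil {
        fmt.Println("needs reconfigure: apiserver error:", err)
        return
    }
    fmt.Println("apiserver pid:", pid)
}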
I0816 22:24:42.417969 10732 kubeadm.go:1032] stopping kube-system containers ...
I0816 22:24:42.417982 10732 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0816 22:24:42.418043 10732 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0816 22:24:42.458578 10732 cri.go:76] found id: "28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0"
I0816 22:24:42.458609 10732 cri.go:76] found id: "a8503bd796d5d979a6e1b8b5154986e8b77de391b4f091211451ea5f52808e52"
I0816 22:24:42.458616 10732 cri.go:76] found id: "124fa393359f758ea47161b345d2cab4ce486d4473a4caad483449464d44315f"
I0816 22:24:42.458622 10732 cri.go:76] found id: "8710cefecdbe5d31cd44e9ae3378bc08cbc56001326a1cb38026755196cac7d1"
I0816 22:24:42.458628 10732 cri.go:76] found id: "7dbd1fc92c3753c4757164cea54536cf62394c731fffd8fb94124eea6f32138f"
I0816 22:24:42.458634 10732 cri.go:76] found id: "38dc61b214a9cbd019de4ca9ab52fb6baf728336de6d715df22b027522ad8b20"
I0816 22:24:42.458639 10732 cri.go:76] found id: ""
I0816 22:24:42.458646 10732 cri.go:221] Stopping containers: [28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0 a8503bd796d5d979a6e1b8b5154986e8b77de391b4f091211451ea5f52808e52 124fa393359f758ea47161b345d2cab4ce486d4473a4caad483449464d44315f 8710cefecdbe5d31cd44e9ae3378bc08cbc56001326a1cb38026755196cac7d1 7dbd1fc92c3753c4757164cea54536cf62394c731fffd8fb94124eea6f32138f 38dc61b214a9cbd019de4ca9ab52fb6baf728336de6d715df22b027522ad8b20]
I0816 22:24:42.458716 10732 ssh_runner.go:149] Run: which crictl
I0816 22:24:42.464088 10732 ssh_runner.go:149] Run: sudo /bin/crictl stop 28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0 a8503bd796d5d979a6e1b8b5154986e8b77de391b4f091211451ea5f52808e52 124fa393359f758ea47161b345d2cab4ce486d4473a4caad483449464d44315f 8710cefecdbe5d31cd44e9ae3378bc08cbc56001326a1cb38026755196cac7d1 7dbd1fc92c3753c4757164cea54536cf62394c731fffd8fb94124eea6f32138f 38dc61b214a9cbd019de4ca9ab52fb6baf728336de6d715df22b027522ad8b20
I0816 22:24:42.528840 10732 ssh_runner.go:149] Run: sudo systemctl stop kubelet
I0816 22:24:42.578695 10732 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0816 22:24:42.590731 10732 kubeadm.go:154] found existing configuration files:
-rw------- 1 root root 5643 Aug 16 22:23 /etc/kubernetes/admin.conf
-rw------- 1 root root 5658 Aug 16 22:23 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2039 Aug 16 22:23 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5602 Aug 16 22:23 /etc/kubernetes/scheduler.conf
I0816 22:24:42.590803 10732 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0816 22:24:42.598201 10732 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0816 22:24:42.605364 10732 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0816 22:24:42.611530 10732 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0816 22:24:42.611583 10732 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0816 22:24:42.618324 10732 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0816 22:24:42.626604 10732 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0816 22:24:42.626656 10732 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
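The grep checks above decide which control-plane kubeconfigs survive the restart: any file under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is treated as stale and deleted so kubeadm can regenerate it in the next phase. A local sketch of that pruning, assuming direct filesystem access instead of ssh_runner:

package main

import (
    "fmt"
    "os"
    "strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// pruneStale removes any config that no longer points at the expected
// control-plane endpoint, as the grep + rm sequence above does.
func pruneStale(paths []string) error {
    for _, p := range paths {
        data, err := os.ReadFile(p)
        if err != nil {
            return err
        }
        if !strings.Contains(string(data), endpoint) {
            fmt.Printf("%q may not be in %s - will remove\n", endpoint, p)
            if err := os.Remove(p); err != nil {
                return err
            }
        }
    }
    return nil
}

func main() {
    if err := pruneStale([]string{
        "/etc/kubernetes/admin.conf",
        "/etc/kubernetes/kubelet.conf",
        "/etc/kubernetes/controller-manager.conf",
        "/etc/kubernetes/scheduler.conf",
    }); err != nil {
        fmt.Println(err)
    }
}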
I0816 22:24:42.633541 10732 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0816 22:24:42.642260 10732 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0816 22:24:42.642284 10732 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0816 22:24:42.836959 10732 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0816 22:24:43.629478 10732 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0816 22:24:46.327994 10732 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0816 22:24:46.444335 10732 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
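Rather than a full kubeadm init, the restart path replays individual init phases against the regenerated kubeadm.yaml: certs, kubeconfig, kubelet-start, control-plane, and local etcd, with the addon phase deferred until the apiserver reports healthy. A sketch of driving those phases in order, assuming passwordless sudo and the binary path shown in the log:

package main

import (
    "fmt"
    "os/exec"
)

// runPhases replays the kubeadm init phases the log shows, in order.
// Error handling is simplified; minikube runs these over SSH.
func runPhases() error {
    phases := []string{
        "certs all",
        "kubeconfig all",
        "kubelet-start",
        "control-plane all",
        "etcd local",
    }
    for _, phase := range phases {
        cmd := fmt.Sprintf(
            "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH "+
                "kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml", phase)
        if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
            return fmt.Errorf("phase %q: %w", phase, err)
        }
    }
    return nil
}

func main() {
    if err := runPhases(); err != nil {
        fmt.Println(err)
    }
}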
I0816 22:24:46.599493 10732 api_server.go:50] waiting for apiserver process to appear ...
I0816 22:24:46.599562 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0816 22:24:47.112726 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0816 22:24:47.612988 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0816 22:24:48.112162 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0816 22:24:48.612444 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0816 22:24:49.757387 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0816 22:24:50.113160 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0816 22:24:50.612450 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0816 22:24:51.112276 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0816 22:24:51.613162 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0816 22:24:52.112539 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0816 22:24:52.612410 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0816 22:24:53.112758 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0816 22:24:53.612575 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0816 22:24:53.632520 10732 api_server.go:70] duration metric: took 7.033030474s to wait for apiserver process to appear ...
I0816 22:24:53.632561 10732 api_server.go:86] waiting for apiserver healthz status ...
I0816 22:24:53.632570 10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
I0816 22:24:53.633109 10732 api_server.go:255] stopped: https://192.168.50.226:8443/healthz: Get "https://192.168.50.226:8443/healthz": dial tcp 192.168.50.226:8443: connect: connection refused
I0816 22:24:54.133848 10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
I0816 22:24:59.090396 10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0816 22:24:59.090431 10732 api_server.go:101] status: https://192.168.50.226:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0816 22:24:59.133677 10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
I0816 22:24:59.161347 10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0816 22:24:59.161378 10732 api_server.go:101] status: https://192.168.50.226:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0816 22:24:59.633911 10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
I0816 22:24:59.639524 10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0816 22:24:59.639548 10732 api_server.go:101] status: https://192.168.50.226:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0816 22:25:00.133775 10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
I0816 22:25:00.151749 10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0816 22:25:00.151784 10732 api_server.go:101] status: https://192.168.50.226:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0816 22:25:00.633968 10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
I0816 22:25:00.646578 10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 200:
ok
I0816 22:25:00.661937 10732 api_server.go:139] control plane version: v1.21.3
I0816 22:25:00.661961 10732 api_server.go:129] duration metric: took 7.029396002s to wait for apiserver health ...
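The healthz probe above walks through the states a restarting apiserver typically passes: connection refused while the process binds, 403 while the rbac/bootstrap-roles poststarthook has not yet run (so the anonymous user cannot read /healthz), 500 while individual poststarthooks still report failures, then 200 with body "ok". A sketch of such a probe; TLS verification is skipped here purely for brevity, whereas minikube verifies against its own CA:

package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
    "time"
)

// waitHealthz polls /healthz until the apiserver answers 200 "ok".
// InsecureSkipVerify is an assumption for this sketch only.
func waitHealthz(url string, timeout time.Duration) error {
    client := &http.Client{
        Timeout:   2 * time.Second,
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        resp, err := client.Get(url)
        if err == nil {
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                return nil // healthy
            }
            fmt.Printf("healthz returned %d\n", resp.StatusCode)
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
    if err := waitHealthz("https://192.168.50.226:8443/healthz", time.Minute); err != nil {
        fmt.Println(err)
    }
}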
I0816 22:25:00.661972 10732 cni.go:93] Creating CNI manager for ""
I0816 22:25:00.661979 10732 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
I0816 22:25:00.663954 10732 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0816 22:25:00.664005 10732 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
I0816 22:25:00.674379 10732 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
I0816 22:25:00.699896 10732 system_pods.go:43] waiting for kube-system pods to appear ...
I0816 22:25:00.718704 10732 system_pods.go:59] 6 kube-system pods found
I0816 22:25:00.718763 10732 system_pods.go:61] "coredns-558bd4d5db-gkxhz" [5aa76749-775e-423d-bbf9-680a20a27051] Running
I0816 22:25:00.718780 10732 system_pods.go:61] "etcd-pause-20210816222224-6986" [f621b99e-0604-4bed-8c4e-4f5741e52f7b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0816 22:25:00.718802 10732 system_pods.go:61] "kube-apiserver-pause-20210816222224-6986" [b1c46709-4b0b-4c9c-a701-d595a58214ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0816 22:25:00.718811 10732 system_pods.go:61] "kube-controller-manager-pause-20210816222224-6986" [777c035e-5f34-469a-afb5-4f8ef90ccbfb] Running
I0816 22:25:00.718819 10732 system_pods.go:61] "kube-proxy-7l59t" [3c0e0899-31c1-477a-a6d4-2844091deea2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0816 22:25:00.718830 10732 system_pods.go:61] "kube-scheduler-pause-20210816222224-6986" [6b32acf9-8108-45a6-901e-70cd125190f8] Running
I0816 22:25:00.718838 10732 system_pods.go:74] duration metric: took 18.921493ms to wait for pod list to return data ...
I0816 22:25:00.718847 10732 node_conditions.go:102] verifying NodePressure condition ...
I0816 22:25:00.723789 10732 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0816 22:25:00.723820 10732 node_conditions.go:123] node cpu capacity is 2
I0816 22:25:00.723836 10732 node_conditions.go:105] duration metric: took 4.978152ms to run NodePressure ...
I0816 22:25:00.723854 10732 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0816 22:25:01.396623 10732 kubeadm.go:731] waiting for restarted kubelet to initialise ...
I0816 22:25:01.403109 10732 kubeadm.go:746] kubelet initialised
I0816 22:25:01.403139 10732 kubeadm.go:747] duration metric: took 6.492031ms waiting for restarted kubelet to initialise ...
I0816 22:25:01.403151 10732 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0816 22:25:01.409386 10732 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace to be "Ready" ...
I0816 22:25:03.432924 10732 pod_ready.go:102] pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace has status "Ready":"False"
I0816 22:25:05.435685 10732 pod_ready.go:102] pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace has status "Ready":"False"
I0816 22:25:05.951433 10732 pod_ready.go:92] pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:05.951457 10732 pod_ready.go:81] duration metric: took 4.542029801s waiting for pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace to be "Ready" ...
I0816 22:25:05.951470 10732 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:06.969870 10732 pod_ready.go:92] pod "etcd-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:06.969903 10732 pod_ready.go:81] duration metric: took 1.018424787s waiting for pod "etcd-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:06.969918 10732 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:06.978963 10732 pod_ready.go:92] pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:06.978984 10732 pod_ready.go:81] duration metric: took 9.058114ms waiting for pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:06.978997 10732 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:09.000201 10732 pod_ready.go:102] pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"False"
I0816 22:25:10.499577 10732 pod_ready.go:92] pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:10.499613 10732 pod_ready.go:81] duration metric: took 3.520603411s waiting for pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:10.499631 10732 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7l59t" in "kube-system" namespace to be "Ready" ...
I0816 22:25:10.508715 10732 pod_ready.go:92] pod "kube-proxy-7l59t" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:10.508738 10732 pod_ready.go:81] duration metric: took 9.098529ms waiting for pod "kube-proxy-7l59t" in "kube-system" namespace to be "Ready" ...
I0816 22:25:10.508749 10732 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:10.514516 10732 pod_ready.go:92] pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:10.514536 10732 pod_ready.go:81] duration metric: took 5.779042ms waiting for pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:10.514546 10732 pod_ready.go:38] duration metric: took 9.111379533s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
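Each pod_ready line above reflects one evaluation of the pod's PodReady condition; "Ready":"False" repeats until the kubelet flips the condition to True. A client-go sketch of the same check for a single pod (the kubeconfig path is a placeholder; the pod name is taken from the log):

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True, the
// test behind the "Ready":"True" lines above.
func isPodReady(pod *corev1.Pod) bool {
    for _, c := range pod.Status.Conditions {
        if c.Type == corev1.PodReady {
            return c.Status == corev1.ConditionTrue
        }
    }
    return false
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    for {
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-558bd4d5db-gkxhz", metav1.GetOptions{})
        if err == nil && isPodReady(pod) {
            fmt.Println("pod is Ready")
            return
        }
        time.Sleep(2 * time.Second)
    }
}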
I0816 22:25:10.514567 10732 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0816 22:25:10.530219 10732 ops.go:34] apiserver oom_adj: -16
I0816 22:25:10.530242 10732 kubeadm.go:604] restartCluster took 31.19958524s
I0816 22:25:10.530251 10732 kubeadm.go:392] StartCluster complete in 31.557512009s
I0816 22:25:10.530271 10732 settings.go:142] acquiring lock: {Name:mk1500b3775cb0c129f78af92eabf0aeaaa54b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0816 22:25:10.530404 10732 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
I0816 22:25:10.531238 10732 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0816 22:25:10.532000 10732 kapi.go:59] client config for pause-20210816222224-6986: &rest.Config{Host:"https://192.168.50.226:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0816 22:25:10.647656 10732 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210816222224-6986" rescaled to 1
I0816 22:25:10.647728 10732 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.50.226 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
I0816 22:25:10.647757 10732 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0816 22:25:10.647794 10732 addons.go:342] enableAddons start: toEnable=map[], additional=[]
I0816 22:25:10.649327 10732 out.go:177] * Verifying Kubernetes components...
I0816 22:25:10.649398 10732 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0816 22:25:10.647852 10732 addons.go:59] Setting storage-provisioner=true in profile "pause-20210816222224-6986"
I0816 22:25:10.647862 10732 addons.go:59] Setting default-storageclass=true in profile "pause-20210816222224-6986"
I0816 22:25:10.647991 10732 config.go:177] Loaded profile config "pause-20210816222224-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
I0816 22:25:10.649480 10732 addons.go:135] Setting addon storage-provisioner=true in "pause-20210816222224-6986"
W0816 22:25:10.649500 10732 addons.go:147] addon storage-provisioner should already be in state true
I0816 22:25:10.649516 10732 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210816222224-6986"
I0816 22:25:10.649532 10732 host.go:66] Checking if "pause-20210816222224-6986" exists ...
I0816 22:25:10.650748 10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0816 22:25:10.650827 10732 main.go:130] libmachine: Launching plugin server for driver kvm2
I0816 22:25:10.653189 10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0816 22:25:10.653249 10732 main.go:130] libmachine: Launching plugin server for driver kvm2
I0816 22:25:10.664888 10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45461
I0816 22:25:10.665365 10732 main.go:130] libmachine: () Calling .GetVersion
I0816 22:25:10.665893 10732 main.go:130] libmachine: Using API Version 1
I0816 22:25:10.665915 10732 main.go:130] libmachine: () Calling .SetConfigRaw
I0816 22:25:10.666315 10732 main.go:130] libmachine: () Calling .GetMachineName
I0816 22:25:10.666493 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetState
I0816 22:25:10.667827 10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34733
I0816 22:25:10.668293 10732 main.go:130] libmachine: () Calling .GetVersion
I0816 22:25:10.668762 10732 main.go:130] libmachine: Using API Version 1
I0816 22:25:10.668782 10732 main.go:130] libmachine: () Calling .SetConfigRaw
I0816 22:25:10.669202 10732 main.go:130] libmachine: () Calling .GetMachineName
I0816 22:25:10.669761 10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0816 22:25:10.669802 10732 main.go:130] libmachine: Launching plugin server for driver kvm2
I0816 22:25:10.670861 10732 kapi.go:59] client config for pause-20210816222224-6986: &rest.Config{Host:"https://192.168.50.226:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0816 22:25:10.676486 10732 addons.go:135] Setting addon default-storageclass=true in "pause-20210816222224-6986"
W0816 22:25:10.676510 10732 addons.go:147] addon default-storageclass should already be in state true
I0816 22:25:10.676539 10732 host.go:66] Checking if "pause-20210816222224-6986" exists ...
I0816 22:25:10.676985 10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0816 22:25:10.677031 10732 main.go:130] libmachine: Launching plugin server for driver kvm2
I0816 22:25:10.682317 10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39313
I0816 22:25:10.682805 10732 main.go:130] libmachine: () Calling .GetVersion
I0816 22:25:10.683360 10732 main.go:130] libmachine: Using API Version 1
I0816 22:25:10.683382 10732 main.go:130] libmachine: () Calling .SetConfigRaw
I0816 22:25:10.683737 10732 main.go:130] libmachine: () Calling .GetMachineName
I0816 22:25:10.683924 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetState
I0816 22:25:10.687519 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
I0816 22:25:10.693597 10732 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0816 22:25:10.693708 10732 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0816 22:25:10.693722 10732 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0816 22:25:10.693742 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
I0816 22:25:10.692712 10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45043
I0816 22:25:10.694563 10732 main.go:130] libmachine: () Calling .GetVersion
I0816 22:25:10.695082 10732 main.go:130] libmachine: Using API Version 1
I0816 22:25:10.695103 10732 main.go:130] libmachine: () Calling .SetConfigRaw
I0816 22:25:10.695455 10732 main.go:130] libmachine: () Calling .GetMachineName
I0816 22:25:10.696063 10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0816 22:25:10.696115 10732 main.go:130] libmachine: Launching plugin server for driver kvm2
I0816 22:25:10.700367 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:25:10.700792 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
I0816 22:25:10.700813 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:25:10.701111 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
I0816 22:25:10.701350 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
I0816 22:25:10.701537 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
I0816 22:25:10.701730 10732 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210816222224-6986/id_rsa Username:docker}
I0816 22:25:10.709887 10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33339
I0816 22:25:10.710304 10732 main.go:130] libmachine: () Calling .GetVersion
I0816 22:25:10.710912 10732 main.go:130] libmachine: Using API Version 1
I0816 22:25:10.710938 10732 main.go:130] libmachine: () Calling .SetConfigRaw
I0816 22:25:10.711336 10732 main.go:130] libmachine: () Calling .GetMachineName
I0816 22:25:10.711547 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetState
I0816 22:25:10.714430 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
I0816 22:25:10.714683 10732 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
I0816 22:25:10.714702 10732 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0816 22:25:10.714720 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
I0816 22:25:10.720808 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:25:10.721319 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
I0816 22:25:10.721342 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:25:10.721485 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
I0816 22:25:10.721643 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
I0816 22:25:10.721769 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
I0816 22:25:10.721919 10732 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210816222224-6986/id_rsa Username:docker}
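Both addon manifests are shipped over a fresh SSH session: libmachine resolves the VM's DHCP lease to 192.168.50.226, then sshutil dials port 22 as user "docker" with the per-machine id_rsa key shown above. A sketch of building such a client with golang.org/x/crypto/ssh; host-key verification is skipped here as a simplifying assumption, and production code should pin the host key:

package main

import (
    "fmt"
    "os"

    "golang.org/x/crypto/ssh"
)

// newSSHClient dials host:22 with a private key, mirroring the
// "new ssh client" lines above.
func newSSHClient(host, user, keyPath string) (*ssh.Client, error) {
    key, err := os.ReadFile(keyPath)
    if err != nil {
        return nil, err
    }
    signer, err := ssh.ParsePrivateKey(key)
    if err != nil {
        return nil, err
    }
    cfg := &ssh.ClientConfig{
        User:            user,
        Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // assumption: no host-key pinning in this sketch
    }
    return ssh.Dial("tcp", host+":22", cfg)
}

func main() {
    client, err := newSSHClient("192.168.50.226", "docker", "/path/to/id_rsa") // placeholder key path
    if err != nil {
        fmt.Println(err)
        return
    }
    defer client.Close()
    fmt.Println("connected")
}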
I0816 22:25:10.832212 10732 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0816 22:25:10.862755 10732 node_ready.go:35] waiting up to 6m0s for node "pause-20210816222224-6986" to be "Ready" ...
I0816 22:25:10.863120 10732 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0816 22:25:10.867110 10732 node_ready.go:49] node "pause-20210816222224-6986" has status "Ready":"True"
I0816 22:25:10.867130 10732 node_ready.go:38] duration metric: took 4.344058ms waiting for node "pause-20210816222224-6986" to be "Ready" ...
I0816 22:25:10.867143 10732 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0816 22:25:10.883113 10732 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace to be "Ready" ...
I0816 22:25:10.892065 10732 pod_ready.go:92] pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:10.892084 10732 pod_ready.go:81] duration metric: took 8.944517ms waiting for pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace to be "Ready" ...
I0816 22:25:10.892096 10732 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:10.895462 10732 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0816 22:25:11.127716 10732 pod_ready.go:92] pod "etcd-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:11.127749 10732 pod_ready.go:81] duration metric: took 235.644563ms waiting for pod "etcd-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:11.127765 10732 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:11.536655 10732 pod_ready.go:92] pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:11.536676 10732 pod_ready.go:81] duration metric: took 408.901449ms waiting for pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:11.536690 10732 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:11.539596 10732 main.go:130] libmachine: Making call to close driver server
I0816 22:25:11.539618 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
I0816 22:25:11.539697 10732 main.go:130] libmachine: Making call to close driver server
I0816 22:25:11.539725 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
I0816 22:25:11.540009 10732 main.go:130] libmachine: Successfully made call to close driver server
I0816 22:25:11.540024 10732 main.go:130] libmachine: Making call to close connection to plugin binary
I0816 22:25:11.540041 10732 main.go:130] libmachine: Making call to close driver server
I0816 22:25:11.540041 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | Closing plugin on server side
I0816 22:25:11.540051 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
I0816 22:25:11.540067 10732 main.go:130] libmachine: Successfully made call to close driver server
I0816 22:25:11.540075 10732 main.go:130] libmachine: Making call to close connection to plugin binary
I0816 22:25:11.540083 10732 main.go:130] libmachine: Making call to close driver server
I0816 22:25:11.540092 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
I0816 22:25:11.540126 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | Closing plugin on server side
I0816 22:25:11.540298 10732 main.go:130] libmachine: Successfully made call to close driver server
I0816 22:25:11.540310 10732 main.go:130] libmachine: Making call to close connection to plugin binary
I0816 22:25:11.540320 10732 main.go:130] libmachine: Making call to close driver server
I0816 22:25:11.540329 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
I0816 22:25:11.540417 10732 main.go:130] libmachine: Successfully made call to close driver server
I0816 22:25:11.540429 10732 main.go:130] libmachine: Making call to close connection to plugin binary
I0816 22:25:11.540490 10732 main.go:130] libmachine: Successfully made call to close driver server
I0816 22:25:11.540502 10732 main.go:130] libmachine: Making call to close connection to plugin binary
I0816 22:25:11.542638 10732 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0816 22:25:11.542662 10732 addons.go:344] enableAddons completed in 894.875902ms
I0816 22:25:11.931820 10732 pod_ready.go:92] pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:11.931845 10732 pod_ready.go:81] duration metric: took 395.147421ms waiting for pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:11.931860 10732 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7l59t" in "kube-system" namespace to be "Ready" ...
I0816 22:25:12.329464 10732 pod_ready.go:92] pod "kube-proxy-7l59t" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:12.329493 10732 pod_ready.go:81] duration metric: took 397.623774ms waiting for pod "kube-proxy-7l59t" in "kube-system" namespace to be "Ready" ...
I0816 22:25:12.329507 10732 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:12.734335 10732 pod_ready.go:92] pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:12.734360 10732 pod_ready.go:81] duration metric: took 404.844565ms waiting for pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:12.734374 10732 pod_ready.go:38] duration metric: took 1.867218741s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
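(For context, every pod_ready.go wait above is polling the same thing: the PodReady condition on each system pod. A minimal client-go sketch of that check follows; the kubeconfig path is a placeholder and this is not minikube's actual implementation.)

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod's PodReady condition is True, which is
// the status the waits above are polling for.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls a named pod until it is Ready or the timeout elapses.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s never became Ready", ns, name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "kube-system", "kube-scheduler-pause-20210816222224-6986", 6*time.Minute))
}
```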
I0816 22:25:12.734394 10732 api_server.go:50] waiting for apiserver process to appear ...
I0816 22:25:12.734439 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0816 22:25:12.754510 10732 api_server.go:70] duration metric: took 2.106745047s to wait for apiserver process to appear ...
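(The process wait is just the logged `pgrep -xnf kube-apiserver.*minikube.*` retried until it exits zero; minikube runs it over its ssh_runner, but a local standalone sketch looks like this.)

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess retries `pgrep -xnf <pattern>` until it exits zero (a
// matching process exists) or the timeout elapses.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("pgrep", "-xnf", pattern).Run() == nil {
			return nil // pgrep found a matching process
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for process %q", pattern)
}

func main() {
	fmt.Println(waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute))
}
```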
I0816 22:25:12.754540 10732 api_server.go:86] waiting for apiserver healthz status ...
I0816 22:25:12.754553 10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
I0816 22:25:12.792067 10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 200:
ok
I0816 22:25:12.794542 10732 api_server.go:139] control plane version: v1.21.3
I0816 22:25:12.794565 10732 api_server.go:129] duration metric: took 40.01886ms to wait for apiserver health ...
I0816 22:25:12.794577 10732 system_pods.go:43] waiting for kube-system pods to appear ...
I0816 22:25:12.941013 10732 system_pods.go:59] 7 kube-system pods found
I0816 22:25:12.941048 10732 system_pods.go:61] "coredns-558bd4d5db-gkxhz" [5aa76749-775e-423d-bbf9-680a20a27051] Running
I0816 22:25:12.941053 10732 system_pods.go:61] "etcd-pause-20210816222224-6986" [f621b99e-0604-4bed-8c4e-4f5741e52f7b] Running
I0816 22:25:12.941057 10732 system_pods.go:61] "kube-apiserver-pause-20210816222224-6986" [b1c46709-4b0b-4c9c-a701-d595a58214ba] Running
I0816 22:25:12.941102 10732 system_pods.go:61] "kube-controller-manager-pause-20210816222224-6986" [777c035e-5f34-469a-afb5-4f8ef90ccbfb] Running
I0816 22:25:12.941116 10732 system_pods.go:61] "kube-proxy-7l59t" [3c0e0899-31c1-477a-a6d4-2844091deea2] Running
I0816 22:25:12.941122 10732 system_pods.go:61] "kube-scheduler-pause-20210816222224-6986" [6b32acf9-8108-45a6-901e-70cd125190f8] Running
I0816 22:25:12.941136 10732 system_pods.go:61] "storage-provisioner" [4f138dc7-da0e-4775-b4de-b0f7d616b212] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0816 22:25:12.941158 10732 system_pods.go:74] duration metric: took 146.575596ms to wait for pod list to return data ...
I0816 22:25:12.941176 10732 default_sa.go:34] waiting for default service account to be created ...
I0816 22:25:13.132349 10732 default_sa.go:45] found service account: "default"
I0816 22:25:13.132381 10732 default_sa.go:55] duration metric: took 191.195172ms for default service account to be created ...
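(The default service account is created asynchronously by the token controller shortly after the control plane comes up, so the check above is a poll. A hedged sketch with client-go; the kubeconfig path is a placeholder.)

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll until the token controller has created the "default" ServiceAccount.
	err = wait.PollImmediate(500*time.Millisecond, time.Minute, func() (bool, error) {
		_, getErr := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		return getErr == nil, nil
	})
	fmt.Println("default service account ready:", err == nil)
}
```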
I0816 22:25:13.132394 10732 system_pods.go:116] waiting for k8s-apps to be running ...
I0816 22:25:13.340094 10732 system_pods.go:86] 7 kube-system pods found
I0816 22:25:13.340135 10732 system_pods.go:89] "coredns-558bd4d5db-gkxhz" [5aa76749-775e-423d-bbf9-680a20a27051] Running
I0816 22:25:13.340146 10732 system_pods.go:89] "etcd-pause-20210816222224-6986" [f621b99e-0604-4bed-8c4e-4f5741e52f7b] Running
I0816 22:25:13.340155 10732 system_pods.go:89] "kube-apiserver-pause-20210816222224-6986" [b1c46709-4b0b-4c9c-a701-d595a58214ba] Running
I0816 22:25:13.340163 10732 system_pods.go:89] "kube-controller-manager-pause-20210816222224-6986" [777c035e-5f34-469a-afb5-4f8ef90ccbfb] Running
I0816 22:25:13.340172 10732 system_pods.go:89] "kube-proxy-7l59t" [3c0e0899-31c1-477a-a6d4-2844091deea2] Running
I0816 22:25:13.340184 10732 system_pods.go:89] "kube-scheduler-pause-20210816222224-6986" [6b32acf9-8108-45a6-901e-70cd125190f8] Running
I0816 22:25:13.340196 10732 system_pods.go:89] "storage-provisioner" [4f138dc7-da0e-4775-b4de-b0f7d616b212] Running
I0816 22:25:13.340210 10732 system_pods.go:126] duration metric: took 207.809217ms to wait for k8s-apps to be running ...
I0816 22:25:13.340225 10732 system_svc.go:44] waiting for kubelet service to be running ....
I0816 22:25:13.340279 10732 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0816 22:25:13.358716 10732 system_svc.go:56] duration metric: took 18.47804ms WaitForService to wait for kubelet.
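(`systemctl is-active --quiet` exits 0 only when the unit is active, so the entire kubelet check reduces to an exit-code test. A local sketch; minikube runs the command with sudo over SSH.)

```go
package main

import (
	"fmt"
	"os/exec"
)

// isServiceActive runs `systemctl is-active --quiet <unit>`, which exits 0
// only when the unit is active, so the error value is the whole answer.
func isServiceActive(unit string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	fmt.Println("kubelet active:", isServiceActive("kubelet"))
}
```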
I0816 22:25:13.358752 10732 kubeadm.go:547] duration metric: took 2.710991068s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0816 22:25:13.358785 10732 node_conditions.go:102] verifying NodePressure condition ...
I0816 22:25:13.536797 10732 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0816 22:25:13.536830 10732 node_conditions.go:123] node cpu capacity is 2
I0816 22:25:13.536848 10732 node_conditions.go:105] duration metric: took 178.056493ms to run NodePressure ...
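(The NodePressure step reads the ephemeral-storage and cpu figures above straight from the Node status. A client-go sketch of that read; the kubeconfig path is a placeholder.)

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Capacity is a map of resource name to quantity on each node's status.
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}
```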
I0816 22:25:13.536863 10732 start.go:231] waiting for startup goroutines ...
I0816 22:25:13.602415 10732 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
I0816 22:25:13.604425 10732 out.go:177] * Done! kubectl is now configured to use "pause-20210816222224-6986" cluster and "default" namespace by default
** /stderr **
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210816222224-6986 -n pause-20210816222224-6986
helpers_test.go:245: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p pause-20210816222224-6986 logs -n 25
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210816222224-6986 logs -n 25: exit status 110 (2.653220078s)
-- stdout --
*
* ==> Audit <==
* |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
| -p | multinode-20210816215441-6986 | multinode-20210816215441-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:08:07 UTC | Mon, 16 Aug 2021 22:11:11 UTC |
| | stop | | | | | |
| start | -p | multinode-20210816215441-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:11:11 UTC | Mon, 16 Aug 2021 22:15:19 UTC |
| | multinode-20210816215441-6986 | | | | | |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p | multinode-20210816215441-6986-m03 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:19 UTC | Mon, 16 Aug 2021 22:16:20 UTC |
| | multinode-20210816215441-6986-m03 | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | multinode-20210816215441-6986-m03 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:16:20 UTC | Mon, 16 Aug 2021 22:16:21 UTC |
| | multinode-20210816215441-6986-m03 | | | | | |
| delete | -p | multinode-20210816215441-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:16:21 UTC | Mon, 16 Aug 2021 22:16:23 UTC |
| | multinode-20210816215441-6986 | | | | | |
| start | -p | test-preload-20210816221807-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:18:07 UTC | Mon, 16 Aug 2021 22:19:45 UTC |
| | test-preload-20210816221807-6986 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.17.0 | | | | | |
| ssh | -p | test-preload-20210816221807-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:45 UTC | Mon, 16 Aug 2021 22:19:47 UTC |
| | test-preload-20210816221807-6986 | | | | | |
| | -- sudo crictl pull busybox | | | | | |
| start | -p | test-preload-20210816221807-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:48 UTC | Mon, 16 Aug 2021 22:20:39 UTC |
| | test-preload-20210816221807-6986 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | -v=1 --wait=true --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.17.3 | | | | | |
| ssh | -p | test-preload-20210816221807-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:39 UTC | Mon, 16 Aug 2021 22:20:39 UTC |
| | test-preload-20210816221807-6986 | | | | | |
| | -- sudo crictl image ls | | | | | |
| delete | -p | test-preload-20210816221807-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:39 UTC | Mon, 16 Aug 2021 22:20:40 UTC |
| | test-preload-20210816221807-6986 | | | | | |
| start | -p | scheduled-stop-20210816222040-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:40 UTC | Mon, 16 Aug 2021 22:21:45 UTC |
| | scheduled-stop-20210816222040-6986 | | | | | |
| | --memory=2048 --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| stop | -p | scheduled-stop-20210816222040-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:46 UTC | Mon, 16 Aug 2021 22:21:46 UTC |
| | scheduled-stop-20210816222040-6986 | | | | | |
| | --cancel-scheduled | | | | | |
| stop | -p | scheduled-stop-20210816222040-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:58 UTC | Mon, 16 Aug 2021 22:22:05 UTC |
| | scheduled-stop-20210816222040-6986 | | | | | |
| | --schedule 5s | | | | | |
| delete | -p | scheduled-stop-20210816222040-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:23 UTC | Mon, 16 Aug 2021 22:22:24 UTC |
| | scheduled-stop-20210816222040-6986 | | | | | |
| delete | -p kubenet-20210816222224-6986 | kubenet-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:22:25 UTC |
| delete | -p false-20210816222225-6986 | false-20210816222225-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:25 UTC | Mon, 16 Aug 2021 22:22:25 UTC |
| start | -p | force-systemd-env-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:24:04 UTC |
| | force-systemd-env-20210816222224-6986 | | | | | |
| | --memory=2048 --alsologtostderr | | | | | |
| | -v=5 --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| -p | force-systemd-env-20210816222224-6986 | force-systemd-env-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:04 UTC | Mon, 16 Aug 2021 22:24:04 UTC |
| | ssh cat /etc/containerd/config.toml | | | | | |
| delete | -p | force-systemd-env-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:04 UTC | Mon, 16 Aug 2021 22:24:05 UTC |
| | force-systemd-env-20210816222224-6986 | | | | | |
| start | -p pause-20210816222224-6986 | pause-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:24:28 UTC |
| | --memory=2048 | | | | | |
| | --install-addons=false | | | | | |
| | --wait=all --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p | kubernetes-upgrade-20210816222225-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:25 UTC | Mon, 16 Aug 2021 22:24:48 UTC |
| | kubernetes-upgrade-20210816222225-6986 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.14.0 | | | | | |
| | --alsologtostderr -v=1 --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| stop | -p | kubernetes-upgrade-20210816222225-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:49 UTC | Mon, 16 Aug 2021 22:24:53 UTC |
| | kubernetes-upgrade-20210816222225-6986 | | | | | |
| start | -p | offline-containerd-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:25:02 UTC |
| | offline-containerd-20210816222224-6986 | | | | | |
| | --alsologtostderr -v=1 --memory=2048 | | | | | |
| | --wait=true --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | offline-containerd-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:02 UTC | Mon, 16 Aug 2021 22:25:03 UTC |
| | offline-containerd-20210816222224-6986 | | | | | |
| start | -p pause-20210816222224-6986 | pause-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:28 UTC | Mon, 16 Aug 2021 22:25:13 UTC |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
*
* ==> Last Start <==
* Log file created at: 2021/08/16 22:24:54
Running on machine: debian-jenkins-agent-3
Binary: Built with gc go1.16.7 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0816 22:24:54.079177 10879 out.go:298] Setting OutFile to fd 1 ...
I0816 22:24:54.079273 10879 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0816 22:24:54.079278 10879 out.go:311] Setting ErrFile to fd 2...
I0816 22:24:54.079280 10879 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0816 22:24:54.079426 10879 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
I0816 22:24:54.079721 10879 out.go:305] Setting JSON to false
I0816 22:24:54.187099 10879 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":4056,"bootTime":1629148638,"procs":185,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
I0816 22:24:54.187527 10879 start.go:121] virtualization: kvm guest
I0816 22:24:54.190315 10879 out.go:177] * [kubernetes-upgrade-20210816222225-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
I0816 22:24:54.192235 10879 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
I0816 22:24:54.190469 10879 notify.go:169] Checking for updates...
I0816 22:24:54.193922 10879 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0816 22:24:54.195578 10879 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
I0816 22:24:54.197163 10879 out.go:177] - MINIKUBE_LOCATION=12230
I0816 22:24:54.197582 10879 config.go:177] Loaded profile config "kubernetes-upgrade-20210816222225-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
I0816 22:24:54.197998 10879 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0816 22:24:54.198058 10879 main.go:130] libmachine: Launching plugin server for driver kvm2
I0816 22:24:54.215228 10879 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45677
I0816 22:24:54.215770 10879 main.go:130] libmachine: () Calling .GetVersion
I0816 22:24:54.216328 10879 main.go:130] libmachine: Using API Version 1
I0816 22:24:54.216350 10879 main.go:130] libmachine: () Calling .SetConfigRaw
I0816 22:24:54.216734 10879 main.go:130] libmachine: () Calling .GetMachineName
I0816 22:24:54.216908 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
I0816 22:24:54.217075 10879 driver.go:335] Setting default libvirt URI to qemu:///system
I0816 22:24:54.217475 10879 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0816 22:24:54.217512 10879 main.go:130] libmachine: Launching plugin server for driver kvm2
I0816 22:24:54.229224 10879 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34399
I0816 22:24:54.229593 10879 main.go:130] libmachine: () Calling .GetVersion
I0816 22:24:54.230067 10879 main.go:130] libmachine: Using API Version 1
I0816 22:24:54.230093 10879 main.go:130] libmachine: () Calling .SetConfigRaw
I0816 22:24:54.230460 10879 main.go:130] libmachine: () Calling .GetMachineName
I0816 22:24:54.230643 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
I0816 22:24:54.279869 10879 out.go:177] * Using the kvm2 driver based on existing profile
I0816 22:24:54.279899 10879 start.go:278] selected driver: kvm2
I0816 22:24:54.279906 10879 start.go:751] validating driver "kvm2" against &{Name:kubernetes-upgrade-20210816222225-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20210816222225-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.116.91 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
I0816 22:24:54.280014 10879 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0816 22:24:54.281335 10879 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0816 22:24:54.282098 10879 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0816 22:24:54.294712 10879 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.22.0
I0816 22:24:54.295176 10879 cni.go:93] Creating CNI manager for ""
I0816 22:24:54.295202 10879 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
I0816 22:24:54.295212 10879 start_flags.go:277] config:
{Name:kubernetes-upgrade-20210816222225-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:kubernetes-upgrade-20210816222225-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.116.91 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
I0816 22:24:54.295364 10879 iso.go:123] acquiring lock: {Name:mk4d96b7e9f76537548b4828641f235ae6b81a3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0816 22:24:54.297417 10879 out.go:177] * Starting control plane node kubernetes-upgrade-20210816222225-6986 in cluster kubernetes-upgrade-20210816222225-6986
I0816 22:24:54.297445 10879 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
I0816 22:24:54.297484 10879 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4
I0816 22:24:54.297505 10879 cache.go:56] Caching tarball of preloaded images
I0816 22:24:54.297634 10879 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0816 22:24:54.297656 10879 cache.go:59] Finished verifying existence of preloaded tar for v1.22.0-rc.0 on containerd
I0816 22:24:54.297784 10879 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/config.json ...
I0816 22:24:54.297977 10879 cache.go:205] Successfully downloaded all kic artifacts
I0816 22:24:54.298007 10879 start.go:313] acquiring machines lock for kubernetes-upgrade-20210816222225-6986: {Name:mk808edd60d1305a42bb85791729eff4573dbb15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0816 22:24:54.298081 10879 start.go:317] acquired machines lock for "kubernetes-upgrade-20210816222225-6986" in 55.05µs
I0816 22:24:54.298103 10879 start.go:93] Skipping create...Using existing machine configuration
I0816 22:24:54.298109 10879 fix.go:55] fixHost starting:
I0816 22:24:54.298510 10879 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0816 22:24:54.298561 10879 main.go:130] libmachine: Launching plugin server for driver kvm2
I0816 22:24:54.309226 10879 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33255
I0816 22:24:54.309690 10879 main.go:130] libmachine: () Calling .GetVersion
I0816 22:24:54.310211 10879 main.go:130] libmachine: Using API Version 1
I0816 22:24:54.310242 10879 main.go:130] libmachine: () Calling .SetConfigRaw
I0816 22:24:54.310587 10879 main.go:130] libmachine: () Calling .GetMachineName
I0816 22:24:54.310840 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
I0816 22:24:54.310996 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetState
I0816 22:24:54.314433 10879 fix.go:108] recreateIfNeeded on kubernetes-upgrade-20210816222225-6986: state=Stopped err=<nil>
I0816 22:24:54.314482 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
W0816 22:24:54.314626 10879 fix.go:134] unexpected machine state, will restart: <nil>
I0816 22:24:52.760695 9171 pod_ready.go:102] pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace has status "Ready":"False"
I0816 22:24:53.612575 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0816 22:24:53.632520 10732 api_server.go:70] duration metric: took 7.033030474s to wait for apiserver process to appear ...
I0816 22:24:53.632561 10732 api_server.go:86] waiting for apiserver healthz status ...
I0816 22:24:53.632570 10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
I0816 22:24:53.633109 10732 api_server.go:255] stopped: https://192.168.50.226:8443/healthz: Get "https://192.168.50.226:8443/healthz": dial tcp 192.168.50.226:8443: connect: connection refused
I0816 22:24:54.133848 10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
I0816 22:24:54.316518 10879 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-20210816222225-6986" ...
I0816 22:24:54.316550 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .Start
I0816 22:24:54.316716 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Ensuring networks are active...
I0816 22:24:54.318718 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Ensuring network default is active
I0816 22:24:54.319156 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Ensuring network mk-kubernetes-upgrade-20210816222225-6986 is active
I0816 22:24:54.319641 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Getting domain xml...
I0816 22:24:54.321602 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Creating domain...
I0816 22:24:54.783576 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Waiting to get IP...
I0816 22:24:54.784705 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:24:54.785273 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has current primary IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:24:54.785327 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Found IP for machine: 192.168.116.91
I0816 22:24:54.785348 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Reserving static IP address...
I0816 22:24:54.785810 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "kubernetes-upgrade-20210816222225-6986", mac: "52:54:00:92:67:21", ip: "192.168.116.91"} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:23:40 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
I0816 22:24:54.785842 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Reserved static IP address: 192.168.116.91
I0816 22:24:54.785867 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | skip adding static IP to network mk-kubernetes-upgrade-20210816222225-6986 - found existing host DHCP lease matching {name: "kubernetes-upgrade-20210816222225-6986", mac: "52:54:00:92:67:21", ip: "192.168.116.91"}
I0816 22:24:54.785897 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | Getting to WaitForSSH function...
I0816 22:24:54.785911 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Waiting for SSH to be available...
I0816 22:24:54.791673 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:24:54.792070 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:23:40 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
I0816 22:24:54.792097 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:24:54.792320 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | Using SSH client type: external
I0816 22:24:54.792359 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa (-rw-------)
I0816 22:24:54.792401 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.116.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa -p 22] /usr/bin/ssh <nil>}
I0816 22:24:54.792424 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | About to run SSH command:
I0816 22:24:54.792441 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | exit 0
I0816 22:24:55.186584 9171 pod_ready.go:102] pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace has status "Ready":"False"
I0816 22:24:57.682612 9171 pod_ready.go:102] pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace has status "Ready":"False"
I0816 22:24:59.683949 9171 pod_ready.go:102] pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace has status "Ready":"False"
I0816 22:24:59.090396 10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0816 22:24:59.090431 10732 api_server.go:101] status: https://192.168.50.226:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0816 22:24:59.133677 10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
I0816 22:24:59.161347 10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0816 22:24:59.161378 10732 api_server.go:101] status: https://192.168.50.226:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0816 22:24:59.633911 10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
I0816 22:24:59.639524 10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0816 22:24:59.639548 10732 api_server.go:101] status: https://192.168.50.226:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0816 22:25:00.133775 10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
I0816 22:25:00.151749 10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0816 22:25:00.151784 10732 api_server.go:101] status: https://192.168.50.226:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0816 22:25:00.633968 10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
I0816 22:25:00.646578 10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 200:
ok
I0816 22:25:00.661937 10732 api_server.go:139] control plane version: v1.21.3
I0816 22:25:00.661961 10732 api_server.go:129] duration metric: took 7.029396002s to wait for apiserver health ...
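(The healthz polling above retries every 500ms, treating connection refused, 403 (RBAC bootstrap roles not yet created) and 500 (poststart hooks still failing) all as "not ready yet" until /healthz returns 200. A standalone sketch of that loop; TLS verification is skipped purely for illustration, a real client would trust the cluster CA.)

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200.
// Any error or non-200 status (refused, 403, 500) means "try again".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: a real client verifies the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // prints "ok"
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.50.226:8443/healthz", 2*time.Minute))
}
```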
I0816 22:25:00.661972 10732 cni.go:93] Creating CNI manager for ""
I0816 22:25:00.661979 10732 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
I0816 22:25:01.185512 9171 pod_ready.go:92] pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:01.185545 9171 pod_ready.go:81] duration metric: took 23.534022707s waiting for pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace to be "Ready" ...
I0816 22:25:01.185559 9171 pod_ready.go:78] waiting up to 6m0s for pod "etcd-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:01.215463 9171 pod_ready.go:92] pod "etcd-offline-containerd-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:01.215489 9171 pod_ready.go:81] duration metric: took 29.921986ms waiting for pod "etcd-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:01.215503 9171 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:01.230267 9171 pod_ready.go:92] pod "kube-apiserver-offline-containerd-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:01.230289 9171 pod_ready.go:81] duration metric: took 14.776227ms waiting for pod "kube-apiserver-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:01.230302 9171 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:01.241691 9171 pod_ready.go:92] pod "kube-controller-manager-offline-containerd-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:01.241717 9171 pod_ready.go:81] duration metric: took 11.405045ms waiting for pod "kube-controller-manager-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:01.241733 9171 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dhhrk" in "kube-system" namespace to be "Ready" ...
I0816 22:25:01.251986 9171 pod_ready.go:92] pod "kube-proxy-dhhrk" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:01.252017 9171 pod_ready.go:81] duration metric: took 10.275945ms waiting for pod "kube-proxy-dhhrk" in "kube-system" namespace to be "Ready" ...
I0816 22:25:01.252030 9171 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:01.580001 9171 pod_ready.go:92] pod "kube-scheduler-offline-containerd-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:01.580033 9171 pod_ready.go:81] duration metric: took 327.992243ms waiting for pod "kube-scheduler-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:01.580046 9171 pod_ready.go:38] duration metric: took 36.483444375s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0816 22:25:01.580071 9171 api_server.go:50] waiting for apiserver process to appear ...
I0816 22:25:01.580124 9171 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0816 22:25:01.597074 9171 api_server.go:70] duration metric: took 36.950719971s to wait for apiserver process to appear ...
I0816 22:25:01.597104 9171 api_server.go:86] waiting for apiserver healthz status ...
I0816 22:25:01.597117 9171 api_server.go:239] Checking apiserver healthz at https://192.168.105.22:8443/healthz ...
I0816 22:25:01.604325 9171 api_server.go:265] https://192.168.105.22:8443/healthz returned 200:
ok
I0816 22:25:01.606279 9171 api_server.go:139] control plane version: v1.21.3
I0816 22:25:01.606301 9171 api_server.go:129] duration metric: took 9.189625ms to wait for apiserver health ...
I0816 22:25:01.606312 9171 system_pods.go:43] waiting for kube-system pods to appear ...
I0816 22:25:01.788694 9171 system_pods.go:59] 7 kube-system pods found
I0816 22:25:01.788767 9171 system_pods.go:61] "coredns-558bd4d5db-jrjhw" [acdb9f4c-484e-4e02-97c3-368ce130507e] Running
I0816 22:25:01.788794 9171 system_pods.go:61] "etcd-offline-containerd-20210816222224-6986" [5cab4619-a033-47c0-9009-225ece0f2892] Running
I0816 22:25:01.788801 9171 system_pods.go:61] "kube-apiserver-offline-containerd-20210816222224-6986" [ea1abce8-a6d2-4e57-81c9-97bdd5eefea4] Running
I0816 22:25:01.788808 9171 system_pods.go:61] "kube-controller-manager-offline-containerd-20210816222224-6986" [9e75aa0c-4fd9-4812-9163-c6c1a26c9f2e] Running
I0816 22:25:01.788813 9171 system_pods.go:61] "kube-proxy-dhhrk" [a48ab7f9-7dfc-47de-8aca-c172bea7ff31] Running
I0816 22:25:01.788819 9171 system_pods.go:61] "kube-scheduler-offline-containerd-20210816222224-6986" [3dd47537-37cc-49f2-a469-8ef39825ba4a] Running
I0816 22:25:01.788827 9171 system_pods.go:61] "storage-provisioner" [e6290b9f-d87d-488d-8f9e-7cbbc59d9585] Running
I0816 22:25:01.788835 9171 system_pods.go:74] duration metric: took 182.517591ms to wait for pod list to return data ...
I0816 22:25:01.788850 9171 default_sa.go:34] waiting for default service account to be created ...
I0816 22:25:01.981356 9171 default_sa.go:45] found service account: "default"
I0816 22:25:01.981387 9171 default_sa.go:55] duration metric: took 192.530827ms for default service account to be created ...
I0816 22:25:01.981399 9171 system_pods.go:116] waiting for k8s-apps to be running ...
I0816 22:25:02.190487 9171 system_pods.go:86] 7 kube-system pods found
I0816 22:25:02.190528 9171 system_pods.go:89] "coredns-558bd4d5db-jrjhw" [acdb9f4c-484e-4e02-97c3-368ce130507e] Running
I0816 22:25:02.190538 9171 system_pods.go:89] "etcd-offline-containerd-20210816222224-6986" [5cab4619-a033-47c0-9009-225ece0f2892] Running
I0816 22:25:02.190546 9171 system_pods.go:89] "kube-apiserver-offline-containerd-20210816222224-6986" [ea1abce8-a6d2-4e57-81c9-97bdd5eefea4] Running
I0816 22:25:02.190554 9171 system_pods.go:89] "kube-controller-manager-offline-containerd-20210816222224-6986" [9e75aa0c-4fd9-4812-9163-c6c1a26c9f2e] Running
I0816 22:25:02.190560 9171 system_pods.go:89] "kube-proxy-dhhrk" [a48ab7f9-7dfc-47de-8aca-c172bea7ff31] Running
I0816 22:25:02.190567 9171 system_pods.go:89] "kube-scheduler-offline-containerd-20210816222224-6986" [3dd47537-37cc-49f2-a469-8ef39825ba4a] Running
I0816 22:25:02.190573 9171 system_pods.go:89] "storage-provisioner" [e6290b9f-d87d-488d-8f9e-7cbbc59d9585] Running
I0816 22:25:02.190582 9171 system_pods.go:126] duration metric: took 209.176198ms to wait for k8s-apps to be running ...
I0816 22:25:02.190596 9171 system_svc.go:44] waiting for kubelet service to be running ....
I0816 22:25:02.190648 9171 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0816 22:25:02.207959 9171 system_svc.go:56] duration metric: took 17.354686ms WaitForService to wait for kubelet.
I0816 22:25:02.207991 9171 kubeadm.go:547] duration metric: took 37.56164237s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0816 22:25:02.208036 9171 node_conditions.go:102] verifying NodePressure condition ...
I0816 22:25:02.385401 9171 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0816 22:25:02.385432 9171 node_conditions.go:123] node cpu capacity is 2
I0816 22:25:02.385444 9171 node_conditions.go:105] duration metric: took 177.399541ms to run NodePressure ...
I0816 22:25:02.385455 9171 start.go:231] waiting for startup goroutines ...
I0816 22:25:02.438114 9171 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
I0816 22:25:02.440691 9171 out.go:177] * Done! kubectl is now configured to use "offline-containerd-20210816222224-6986" cluster and "default" namespace by default
I0816 22:25:00.663954 10732 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0816 22:25:00.664005 10732 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
I0816 22:25:00.674379 10732 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
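(The 457-byte conflist scp'd above is not shown in the log; the snippet below writes a representative bridge+portmap conflist of the same general shape. Its contents are an assumption for illustration, not minikube's actual 1-k8s.conflist.)

```go
package main

import (
	"fmt"
	"os"
)

// A generic bridge CNI config of the sort installed under /etc/cni/net.d.
// ASSUMPTION: these exact fields and values are illustrative only.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println(err)
	}
}
```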
I0816 22:25:00.699896 10732 system_pods.go:43] waiting for kube-system pods to appear ...
I0816 22:25:00.718704 10732 system_pods.go:59] 6 kube-system pods found
I0816 22:25:00.718763 10732 system_pods.go:61] "coredns-558bd4d5db-gkxhz" [5aa76749-775e-423d-bbf9-680a20a27051] Running
I0816 22:25:00.718780 10732 system_pods.go:61] "etcd-pause-20210816222224-6986" [f621b99e-0604-4bed-8c4e-4f5741e52f7b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0816 22:25:00.718802 10732 system_pods.go:61] "kube-apiserver-pause-20210816222224-6986" [b1c46709-4b0b-4c9c-a701-d595a58214ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0816 22:25:00.718811 10732 system_pods.go:61] "kube-controller-manager-pause-20210816222224-6986" [777c035e-5f34-469a-afb5-4f8ef90ccbfb] Running
I0816 22:25:00.718819 10732 system_pods.go:61] "kube-proxy-7l59t" [3c0e0899-31c1-477a-a6d4-2844091deea2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0816 22:25:00.718830 10732 system_pods.go:61] "kube-scheduler-pause-20210816222224-6986" [6b32acf9-8108-45a6-901e-70cd125190f8] Running
I0816 22:25:00.718838 10732 system_pods.go:74] duration metric: took 18.921493ms to wait for pod list to return data ...
I0816 22:25:00.718847 10732 node_conditions.go:102] verifying NodePressure condition ...
I0816 22:25:00.723789 10732 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0816 22:25:00.723820 10732 node_conditions.go:123] node cpu capacity is 2
I0816 22:25:00.723836 10732 node_conditions.go:105] duration metric: took 4.978152ms to run NodePressure ...
I0816 22:25:00.723854 10732 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0816 22:25:01.396623 10732 kubeadm.go:731] waiting for restarted kubelet to initialise ...
I0816 22:25:01.403109 10732 kubeadm.go:746] kubelet initialised
I0816 22:25:01.403139 10732 kubeadm.go:747] duration metric: took 6.492031ms waiting for restarted kubelet to initialise ...
I0816 22:25:01.403151 10732 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0816 22:25:01.409386 10732 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace to be "Ready" ...
I0816 22:25:03.432924 10732 pod_ready.go:102] pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace has status "Ready":"False"
I0816 22:25:05.435685 10732 pod_ready.go:102] pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace has status "Ready":"False"
I0816 22:25:05.951433 10732 pod_ready.go:92] pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:05.951457 10732 pod_ready.go:81] duration metric: took 4.542029801s waiting for pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace to be "Ready" ...
I0816 22:25:05.951470 10732 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:06.969870 10732 pod_ready.go:92] pod "etcd-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:06.969903 10732 pod_ready.go:81] duration metric: took 1.018424787s waiting for pod "etcd-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:06.969918 10732 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:06.978963 10732 pod_ready.go:92] pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:06.978984 10732 pod_ready.go:81] duration metric: took 9.058114ms waiting for pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:06.978997 10732 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:07.986911 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | SSH cmd err, output: <nil>:
I0816 22:25:07.987289 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetConfigRaw
I0816 22:25:07.988117 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetIP
I0816 22:25:07.993471 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:07.993933 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
I0816 22:25:07.993970 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:07.994335 10879 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/config.json ...
I0816 22:25:07.994547 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
I0816 22:25:07.994761 10879 machine.go:88] provisioning docker machine ...
I0816 22:25:07.994788 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
I0816 22:25:07.994976 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetMachineName
I0816 22:25:07.995114 10879 buildroot.go:166] provisioning hostname "kubernetes-upgrade-20210816222225-6986"
I0816 22:25:07.995139 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetMachineName
I0816 22:25:07.995291 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
I0816 22:25:08.000173 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.000497 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
I0816 22:25:08.000524 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.000680 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
I0816 22:25:08.000825 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
I0816 22:25:08.000965 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
I0816 22:25:08.001081 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
I0816 22:25:08.001235 10879 main.go:130] libmachine: Using SSH client type: native
I0816 22:25:08.001401 10879 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil> [] 0s} 192.168.116.91 22 <nil> <nil>}
I0816 22:25:08.001421 10879 main.go:130] libmachine: About to run SSH command:
sudo hostname kubernetes-upgrade-20210816222225-6986 && echo "kubernetes-upgrade-20210816222225-6986" | sudo tee /etc/hostname
I0816 22:25:08.156978 10879 main.go:130] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.157018 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
I0816 22:25:08.162417 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.162702 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
I0816 22:25:08.162735 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.162864 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
I0816 22:25:08.163064 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
I0816 22:25:08.163277 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
I0816 22:25:08.163406 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
I0816 22:25:08.163558 10879 main.go:130] libmachine: Using SSH client type: native
I0816 22:25:08.163733 10879 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil> [] 0s} 192.168.116.91 22 <nil> <nil>}
I0816 22:25:08.163761 10879 main.go:130] libmachine: About to run SSH command:
if ! grep -xq '.*\skubernetes-upgrade-20210816222225-6986' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20210816222225-6986/g' /etc/hosts;
	else
		echo '127.0.1.1 kubernetes-upgrade-20210816222225-6986' | sudo tee -a /etc/hosts;
	fi
fi
I0816 22:25:08.307005 10879 main.go:130] libmachine: SSH cmd err, output: <nil>:
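The two commands just run over SSH (set the transient hostname, then idempotently pin 127.0.1.1 in /etc/hosts) go through the key-based session the driver resolved above. A minimal sketch of issuing such a remote command from Go with golang.org/x/crypto/ssh (an illustration, not minikube's actual ssh_runner; the key path is hypothetical):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical key path; minikube keeps per-machine keys under .minikube/machines/.
	key, err := os.ReadFile("/path/to/machines/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", "192.168.116.91:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Same idempotent check-before-write pattern as the /etc/hosts edit above.
	out, err := sess.CombinedOutput(`grep -q kubernetes-upgrade /etc/hosts && echo present || echo absent`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}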
I0816 22:25:08.307035 10879 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
I0816 22:25:08.307053 10879 buildroot.go:174] setting up certificates
I0816 22:25:08.307064 10879 provision.go:83] configureAuth start
I0816 22:25:08.307075 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetMachineName
I0816 22:25:08.307332 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetIP
I0816 22:25:08.313331 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.313697 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
I0816 22:25:08.313729 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.313896 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
I0816 22:25:08.318531 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.318844 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
I0816 22:25:08.318878 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.318990 10879 provision.go:138] copyHostCerts
I0816 22:25:08.319059 10879 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
I0816 22:25:08.319073 10879 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
I0816 22:25:08.319128 10879 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
I0816 22:25:08.319254 10879 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
I0816 22:25:08.319268 10879 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
I0816 22:25:08.319294 10879 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
I0816 22:25:08.319359 10879 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
I0816 22:25:08.319368 10879 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
I0816 22:25:08.319397 10879 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1675 bytes)
I0816 22:25:08.319465 10879 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20210816222225-6986 san=[192.168.116.91 192.168.116.91 localhost 127.0.0.1 minikube kubernetes-upgrade-20210816222225-6986]
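The server certificate generated here carries every name the machine may be reached by (the san=[...] list above). Below is a self-contained sketch of issuing such a SAN-bearing certificate with Go's crypto/x509; the CA pair is generated inline so the sketch runs on its own, whereas minikube loads ca.pem and ca-key.pem from the paths in the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	// Stand-in CA, generated inline for the sketch.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server cert whose org and SANs mirror the provision.go line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-20210816222225-6986"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "kubernetes-upgrade-20210816222225-6986"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.116.91"), net.ParseIP("127.0.0.1")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}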
I0816 22:25:08.473458 10879 provision.go:172] copyRemoteCerts
I0816 22:25:08.473513 10879 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0816 22:25:08.473535 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
I0816 22:25:08.478720 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.479123 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
I0816 22:25:08.479157 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.479301 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
I0816 22:25:08.479517 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
I0816 22:25:08.479669 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
I0816 22:25:08.479802 10879 sshutil.go:53] new ssh client: &{IP:192.168.116.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa Username:docker}
I0816 22:25:08.575404 10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0816 22:25:08.593200 10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1285 bytes)
I0816 22:25:08.611874 10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0816 22:25:08.631651 10879 provision.go:86] duration metric: configureAuth took 324.57656ms
I0816 22:25:08.631679 10879 buildroot.go:189] setting minikube options for container-runtime
I0816 22:25:08.631847 10879 config.go:177] Loaded profile config "kubernetes-upgrade-20210816222225-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
I0816 22:25:08.631862 10879 machine.go:91] provisioned docker machine in 637.081285ms
I0816 22:25:08.631877 10879 start.go:267] post-start starting for "kubernetes-upgrade-20210816222225-6986" (driver="kvm2")
I0816 22:25:08.631885 10879 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0816 22:25:08.631905 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
I0816 22:25:08.632222 10879 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0816 22:25:08.632262 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
I0816 22:25:08.638223 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.638599 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
I0816 22:25:08.638628 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.638804 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
I0816 22:25:08.639025 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
I0816 22:25:08.639186 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
I0816 22:25:08.639324 10879 sshutil.go:53] new ssh client: &{IP:192.168.116.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa Username:docker}
I0816 22:25:08.731490 10879 ssh_runner.go:149] Run: cat /etc/os-release
I0816 22:25:08.736384 10879 info.go:137] Remote host: Buildroot 2020.02.12
I0816 22:25:08.736415 10879 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
I0816 22:25:08.736479 10879 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
I0816 22:25:08.736640 10879 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem -> 69862.pem in /etc/ssl/certs
I0816 22:25:08.736796 10879 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
I0816 22:25:08.744563 10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /etc/ssl/certs/69862.pem (1708 bytes)
I0816 22:25:08.762219 10879 start.go:270] post-start completed in 130.327769ms
I0816 22:25:08.762269 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
I0816 22:25:08.762532 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
I0816 22:25:08.768066 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.768447 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
I0816 22:25:08.768479 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.768580 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
I0816 22:25:08.768764 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
I0816 22:25:08.768937 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
I0816 22:25:08.769097 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
I0816 22:25:08.769278 10879 main.go:130] libmachine: Using SSH client type: native
I0816 22:25:08.769412 10879 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil> [] 0s} 192.168.116.91 22 <nil> <nil>}
I0816 22:25:08.769423 10879 main.go:130] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0816 22:25:08.908369 10879 main.go:130] libmachine: SSH cmd err, output: <nil>: 1629152708.857933809
I0816 22:25:08.908397 10879 fix.go:212] guest clock: 1629152708.857933809
I0816 22:25:08.908407 10879 fix.go:225] Guest: 2021-08-16 22:25:08.857933809 +0000 UTC Remote: 2021-08-16 22:25:08.762514681 +0000 UTC m=+14.743694760 (delta=95.419128ms)
I0816 22:25:08.908465 10879 fix.go:196] guest clock delta is within tolerance: 95.419128ms
I0816 22:25:08.908473 10879 fix.go:57] fixHost completed within 14.610364111s
I0816 22:25:08.908483 10879 start.go:80] releasing machines lock for "kubernetes-upgrade-20210816222225-6986", held for 14.610387547s
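The fixHost step above runs "date +%s.%N" on the guest (the "%!s(MISSING)" rendering is Go's fmt notation for format verbs with no arguments, so the literal command survived the logger intact), parses the result, and accepts the drift when it falls inside a tolerance window. A small sketch of that comparison using the value from the log; the 2s threshold is an assumption, not minikube's exact setting:

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Output of `date +%s.%N` captured from the guest (value taken from the log).
	guestRaw := "1629152708.857933809"
	secs, err := strconv.ParseFloat(guestRaw, 64)
	if err != nil {
		panic(err)
	}
	// float64 loses sub-microsecond precision; fine for a drift check.
	guest := time.Unix(0, int64(secs*float64(time.Second)))

	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock needs adjustment: %v\n", delta)
	}
}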
I0816 22:25:08.908527 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
I0816 22:25:08.908801 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetIP
I0816 22:25:08.914888 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.915258 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
I0816 22:25:08.915290 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.915507 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
I0816 22:25:08.915732 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
I0816 22:25:08.916309 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
I0816 22:25:08.916592 10879 ssh_runner.go:149] Run: systemctl --version
I0816 22:25:08.916617 10879 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0816 22:25:08.916626 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
I0816 22:25:08.916658 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
I0816 22:25:08.923331 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.923688 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
I0816 22:25:08.923714 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.923808 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
I0816 22:25:08.923961 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
I0816 22:25:08.924114 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
I0816 22:25:08.924243 10879 sshutil.go:53] new ssh client: &{IP:192.168.116.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa Username:docker}
I0816 22:25:08.924528 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.924867 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
I0816 22:25:08.924898 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.925049 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
I0816 22:25:08.925209 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
I0816 22:25:08.925407 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
I0816 22:25:08.925534 10879 sshutil.go:53] new ssh client: &{IP:192.168.116.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa Username:docker}
I0816 22:25:09.022865 10879 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
I0816 22:25:09.023038 10879 ssh_runner.go:149] Run: sudo crictl images --output json
I0816 22:25:09.000201 10732 pod_ready.go:102] pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"False"
I0816 22:25:10.499577 10732 pod_ready.go:92] pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:10.499613 10732 pod_ready.go:81] duration metric: took 3.520603411s waiting for pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:10.499631 10732 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7l59t" in "kube-system" namespace to be "Ready" ...
I0816 22:25:10.508715 10732 pod_ready.go:92] pod "kube-proxy-7l59t" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:10.508738 10732 pod_ready.go:81] duration metric: took 9.098529ms waiting for pod "kube-proxy-7l59t" in "kube-system" namespace to be "Ready" ...
I0816 22:25:10.508749 10732 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:10.514516 10732 pod_ready.go:92] pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:10.514536 10732 pod_ready.go:81] duration metric: took 5.779042ms waiting for pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:10.514546 10732 pod_ready.go:38] duration metric: took 9.111379533s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0816 22:25:10.514567 10732 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0816 22:25:10.530219 10732 ops.go:34] apiserver oom_adj: -16
I0816 22:25:10.530242 10732 kubeadm.go:604] restartCluster took 31.19958524s
I0816 22:25:10.530251 10732 kubeadm.go:392] StartCluster complete in 31.557512009s
I0816 22:25:10.530271 10732 settings.go:142] acquiring lock: {Name:mk1500b3775cb0c129f78af92eabf0aeaaa54b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0816 22:25:10.530404 10732 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
I0816 22:25:10.531238 10732 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0816 22:25:10.532000 10732 kapi.go:59] client config for pause-20210816222224-6986: &rest.Config{Host:"https://192.168.50.226:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
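The rest.Config dump above is the client minikube assembles: certificate auth against https://192.168.50.226:8443 using the profile's client.crt/client.key plus the cluster CA. The equivalent client-go route from a kubeconfig is much shorter; a sketch (the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; point this at whatever KUBECONFIG points at.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d kube-system pods\n", len(pods.Items))
}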
I0816 22:25:10.647656 10732 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210816222224-6986" rescaled to 1
I0816 22:25:10.647728 10732 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.50.226 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
I0816 22:25:10.647757 10732 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0816 22:25:10.647794 10732 addons.go:342] enableAddons start: toEnable=map[], additional=[]
I0816 22:25:10.649327 10732 out.go:177] * Verifying Kubernetes components...
I0816 22:25:10.649398 10732 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0816 22:25:10.647852 10732 addons.go:59] Setting storage-provisioner=true in profile "pause-20210816222224-6986"
I0816 22:25:10.647862 10732 addons.go:59] Setting default-storageclass=true in profile "pause-20210816222224-6986"
I0816 22:25:10.647991 10732 config.go:177] Loaded profile config "pause-20210816222224-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
I0816 22:25:10.649480 10732 addons.go:135] Setting addon storage-provisioner=true in "pause-20210816222224-6986"
W0816 22:25:10.649500 10732 addons.go:147] addon storage-provisioner should already be in state true
I0816 22:25:10.649516 10732 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210816222224-6986"
I0816 22:25:10.649532 10732 host.go:66] Checking if "pause-20210816222224-6986" exists ...
I0816 22:25:10.650748 10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0816 22:25:10.650827 10732 main.go:130] libmachine: Launching plugin server for driver kvm2
I0816 22:25:10.653189 10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0816 22:25:10.653249 10732 main.go:130] libmachine: Launching plugin server for driver kvm2
I0816 22:25:10.664888 10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45461
I0816 22:25:10.665365 10732 main.go:130] libmachine: () Calling .GetVersion
I0816 22:25:10.665893 10732 main.go:130] libmachine: Using API Version 1
I0816 22:25:10.665915 10732 main.go:130] libmachine: () Calling .SetConfigRaw
I0816 22:25:10.666315 10732 main.go:130] libmachine: () Calling .GetMachineName
I0816 22:25:10.666493 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetState
I0816 22:25:10.667827 10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34733
I0816 22:25:10.668293 10732 main.go:130] libmachine: () Calling .GetVersion
I0816 22:25:10.668762 10732 main.go:130] libmachine: Using API Version 1
I0816 22:25:10.668782 10732 main.go:130] libmachine: () Calling .SetConfigRaw
I0816 22:25:10.669202 10732 main.go:130] libmachine: () Calling .GetMachineName
I0816 22:25:10.669761 10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0816 22:25:10.669802 10732 main.go:130] libmachine: Launching plugin server for driver kvm2
I0816 22:25:10.670861 10732 kapi.go:59] client config for pause-20210816222224-6986: &rest.Config{Host:"https://192.168.50.226:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0816 22:25:10.676486 10732 addons.go:135] Setting addon default-storageclass=true in "pause-20210816222224-6986"
W0816 22:25:10.676510 10732 addons.go:147] addon default-storageclass should already be in state true
I0816 22:25:10.676539 10732 host.go:66] Checking if "pause-20210816222224-6986" exists ...
I0816 22:25:10.676985 10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0816 22:25:10.677031 10732 main.go:130] libmachine: Launching plugin server for driver kvm2
I0816 22:25:10.682317 10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39313
I0816 22:25:10.682805 10732 main.go:130] libmachine: () Calling .GetVersion
I0816 22:25:10.683360 10732 main.go:130] libmachine: Using API Version 1
I0816 22:25:10.683382 10732 main.go:130] libmachine: () Calling .SetConfigRaw
I0816 22:25:10.683737 10732 main.go:130] libmachine: () Calling .GetMachineName
I0816 22:25:10.683924 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetState
I0816 22:25:10.687519 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
I0816 22:25:10.693597 10732 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0816 22:25:10.693708 10732 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0816 22:25:10.693722 10732 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0816 22:25:10.693742 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
I0816 22:25:10.692712 10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45043
I0816 22:25:10.694563 10732 main.go:130] libmachine: () Calling .GetVersion
I0816 22:25:10.695082 10732 main.go:130] libmachine: Using API Version 1
I0816 22:25:10.695103 10732 main.go:130] libmachine: () Calling .SetConfigRaw
I0816 22:25:10.695455 10732 main.go:130] libmachine: () Calling .GetMachineName
I0816 22:25:10.696063 10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0816 22:25:10.696115 10732 main.go:130] libmachine: Launching plugin server for driver kvm2
I0816 22:25:10.700367 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:25:10.700792 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
I0816 22:25:10.700813 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:25:10.701111 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
I0816 22:25:10.701350 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
I0816 22:25:10.701537 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
I0816 22:25:10.701730 10732 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210816222224-6986/id_rsa Username:docker}
I0816 22:25:10.709887 10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33339
I0816 22:25:10.710304 10732 main.go:130] libmachine: () Calling .GetVersion
I0816 22:25:10.710912 10732 main.go:130] libmachine: Using API Version 1
I0816 22:25:10.710938 10732 main.go:130] libmachine: () Calling .SetConfigRaw
I0816 22:25:10.711336 10732 main.go:130] libmachine: () Calling .GetMachineName
I0816 22:25:10.711547 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetState
I0816 22:25:10.714430 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
I0816 22:25:10.714683 10732 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
I0816 22:25:10.714702 10732 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0816 22:25:10.714720 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
I0816 22:25:10.720808 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:25:10.721319 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
I0816 22:25:10.721342 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:25:10.721485 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
I0816 22:25:10.721643 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
I0816 22:25:10.721769 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
I0816 22:25:10.721919 10732 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210816222224-6986/id_rsa Username:docker}
I0816 22:25:10.832212 10732 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0816 22:25:10.862755 10732 node_ready.go:35] waiting up to 6m0s for node "pause-20210816222224-6986" to be "Ready" ...
I0816 22:25:10.863120 10732 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0816 22:25:10.867110 10732 node_ready.go:49] node "pause-20210816222224-6986" has status "Ready":"True"
I0816 22:25:10.867130 10732 node_ready.go:38] duration metric: took 4.344058ms waiting for node "pause-20210816222224-6986" to be "Ready" ...
I0816 22:25:10.867143 10732 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0816 22:25:10.883113 10732 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace to be "Ready" ...
I0816 22:25:10.892065 10732 pod_ready.go:92] pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:10.892084 10732 pod_ready.go:81] duration metric: took 8.944517ms waiting for pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace to be "Ready" ...
I0816 22:25:10.892096 10732 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:10.895462 10732 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0816 22:25:11.127716 10732 pod_ready.go:92] pod "etcd-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:11.127749 10732 pod_ready.go:81] duration metric: took 235.644563ms waiting for pod "etcd-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:11.127765 10732 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:11.536655 10732 pod_ready.go:92] pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:11.536676 10732 pod_ready.go:81] duration metric: took 408.901449ms waiting for pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:11.536690 10732 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:11.539596 10732 main.go:130] libmachine: Making call to close driver server
I0816 22:25:11.539618 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
I0816 22:25:11.539697 10732 main.go:130] libmachine: Making call to close driver server
I0816 22:25:11.539725 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
I0816 22:25:11.540009 10732 main.go:130] libmachine: Successfully made call to close driver server
I0816 22:25:11.540024 10732 main.go:130] libmachine: Making call to close connection to plugin binary
I0816 22:25:11.540041 10732 main.go:130] libmachine: Making call to close driver server
I0816 22:25:11.540041 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | Closing plugin on server side
I0816 22:25:11.540051 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
I0816 22:25:11.540067 10732 main.go:130] libmachine: Successfully made call to close driver server
I0816 22:25:11.540075 10732 main.go:130] libmachine: Making call to close connection to plugin binary
I0816 22:25:11.540083 10732 main.go:130] libmachine: Making call to close driver server
I0816 22:25:11.540092 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
I0816 22:25:11.540126 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | Closing plugin on server side
I0816 22:25:11.540298 10732 main.go:130] libmachine: Successfully made call to close driver server
I0816 22:25:11.540310 10732 main.go:130] libmachine: Making call to close connection to plugin binary
I0816 22:25:11.540320 10732 main.go:130] libmachine: Making call to close driver server
I0816 22:25:11.540329 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
I0816 22:25:11.540417 10732 main.go:130] libmachine: Successfully made call to close driver server
I0816 22:25:11.540429 10732 main.go:130] libmachine: Making call to close connection to plugin binary
I0816 22:25:11.540490 10732 main.go:130] libmachine: Successfully made call to close driver server
I0816 22:25:11.540502 10732 main.go:130] libmachine: Making call to close connection to plugin binary
I0816 22:25:11.542638 10732 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0816 22:25:11.542662 10732 addons.go:344] enableAddons completed in 894.875902ms
I0816 22:25:11.931820 10732 pod_ready.go:92] pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:11.931845 10732 pod_ready.go:81] duration metric: took 395.147421ms waiting for pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:11.931860 10732 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7l59t" in "kube-system" namespace to be "Ready" ...
I0816 22:25:12.329464 10732 pod_ready.go:92] pod "kube-proxy-7l59t" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:12.329493 10732 pod_ready.go:81] duration metric: took 397.623774ms waiting for pod "kube-proxy-7l59t" in "kube-system" namespace to be "Ready" ...
I0816 22:25:12.329507 10732 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:12.734335 10732 pod_ready.go:92] pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:12.734360 10732 pod_ready.go:81] duration metric: took 404.844565ms waiting for pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:12.734374 10732 pod_ready.go:38] duration metric: took 1.867218741s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
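Each pod_ready.go line above is one iteration of the same loop: fetch the pod, inspect its PodReady condition, retry until Ready or the deadline. A sketch of that loop with client-go (not minikube's exact code; the kubeconfig path and the 500ms poll interval are assumptions):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls until the named pod reports Ready or the timeout
// expires, mirroring the "waiting up to 6m0s for pod ..." loops above.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient errors as "not ready yet"
		}
		return isPodReady(pod), nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitPodReady(cs, "kube-system", "kube-proxy-7l59t", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pod is Ready")
}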
I0816 22:25:12.734394 10732 api_server.go:50] waiting for apiserver process to appear ...
I0816 22:25:12.734439 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0816 22:25:12.754510 10732 api_server.go:70] duration metric: took 2.106745047s to wait for apiserver process to appear ...
I0816 22:25:12.754540 10732 api_server.go:86] waiting for apiserver healthz status ...
I0816 22:25:12.754553 10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
I0816 22:25:12.792067 10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 200:
ok
I0816 22:25:12.794542 10732 api_server.go:139] control plane version: v1.21.3
I0816 22:25:12.794565 10732 api_server.go:129] duration metric: took 40.01886ms to wait for apiserver health ...
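The healthz gate is a plain HTTPS GET against the apiserver's /healthz endpoint, expected to return 200 with body "ok". A sketch of the probe that trusts the cluster CA from .minikube/ca.crt (path illustrative; depending on the cluster's anonymous-auth policy a client certificate may also be needed):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile("/path/to/.minikube/ca.crt") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		log.Fatal("could not parse CA PEM")
	}
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}
	resp, err := client.Get("https://192.168.50.226:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
}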
I0816 22:25:12.794577 10732 system_pods.go:43] waiting for kube-system pods to appear ...
I0816 22:25:12.941013 10732 system_pods.go:59] 7 kube-system pods found
I0816 22:25:12.941048 10732 system_pods.go:61] "coredns-558bd4d5db-gkxhz" [5aa76749-775e-423d-bbf9-680a20a27051] Running
I0816 22:25:12.941053 10732 system_pods.go:61] "etcd-pause-20210816222224-6986" [f621b99e-0604-4bed-8c4e-4f5741e52f7b] Running
I0816 22:25:12.941057 10732 system_pods.go:61] "kube-apiserver-pause-20210816222224-6986" [b1c46709-4b0b-4c9c-a701-d595a58214ba] Running
I0816 22:25:12.941102 10732 system_pods.go:61] "kube-controller-manager-pause-20210816222224-6986" [777c035e-5f34-469a-afb5-4f8ef90ccbfb] Running
I0816 22:25:12.941116 10732 system_pods.go:61] "kube-proxy-7l59t" [3c0e0899-31c1-477a-a6d4-2844091deea2] Running
I0816 22:25:12.941122 10732 system_pods.go:61] "kube-scheduler-pause-20210816222224-6986" [6b32acf9-8108-45a6-901e-70cd125190f8] Running
I0816 22:25:12.941136 10732 system_pods.go:61] "storage-provisioner" [4f138dc7-da0e-4775-b4de-b0f7d616b212] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0816 22:25:12.941158 10732 system_pods.go:74] duration metric: took 146.575596ms to wait for pod list to return data ...
I0816 22:25:12.941176 10732 default_sa.go:34] waiting for default service account to be created ...
I0816 22:25:13.132349 10732 default_sa.go:45] found service account: "default"
I0816 22:25:13.132381 10732 default_sa.go:55] duration metric: took 191.195172ms for default service account to be created ...
I0816 22:25:13.132394 10732 system_pods.go:116] waiting for k8s-apps to be running ...
I0816 22:25:13.340094 10732 system_pods.go:86] 7 kube-system pods found
I0816 22:25:13.340135 10732 system_pods.go:89] "coredns-558bd4d5db-gkxhz" [5aa76749-775e-423d-bbf9-680a20a27051] Running
I0816 22:25:13.340146 10732 system_pods.go:89] "etcd-pause-20210816222224-6986" [f621b99e-0604-4bed-8c4e-4f5741e52f7b] Running
I0816 22:25:13.340155 10732 system_pods.go:89] "kube-apiserver-pause-20210816222224-6986" [b1c46709-4b0b-4c9c-a701-d595a58214ba] Running
I0816 22:25:13.340163 10732 system_pods.go:89] "kube-controller-manager-pause-20210816222224-6986" [777c035e-5f34-469a-afb5-4f8ef90ccbfb] Running
I0816 22:25:13.340172 10732 system_pods.go:89] "kube-proxy-7l59t" [3c0e0899-31c1-477a-a6d4-2844091deea2] Running
I0816 22:25:13.340184 10732 system_pods.go:89] "kube-scheduler-pause-20210816222224-6986" [6b32acf9-8108-45a6-901e-70cd125190f8] Running
I0816 22:25:13.340196 10732 system_pods.go:89] "storage-provisioner" [4f138dc7-da0e-4775-b4de-b0f7d616b212] Running
I0816 22:25:13.340210 10732 system_pods.go:126] duration metric: took 207.809217ms to wait for k8s-apps to be running ...
I0816 22:25:13.340225 10732 system_svc.go:44] waiting for kubelet service to be running ....
I0816 22:25:13.340279 10732 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0816 22:25:13.358716 10732 system_svc.go:56] duration metric: took 18.47804ms WaitForService to wait for kubelet.
I0816 22:25:13.358752 10732 kubeadm.go:547] duration metric: took 2.710991068s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0816 22:25:13.358785 10732 node_conditions.go:102] verifying NodePressure condition ...
I0816 22:25:13.536797 10732 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0816 22:25:13.536830 10732 node_conditions.go:123] node cpu capacity is 2
I0816 22:25:13.536848 10732 node_conditions.go:105] duration metric: took 178.056493ms to run NodePressure ...
I0816 22:25:13.536863 10732 start.go:231] waiting for startup goroutines ...
I0816 22:25:13.602415 10732 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
I0816 22:25:13.604425 10732 out.go:177] * Done! kubectl is now configured to use "pause-20210816222224-6986" cluster and "default" namespace by default
I0816 22:25:13.045168 10879 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.02209826s)
I0816 22:25:13.045290 10879 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0". assuming images are not preloaded.
I0816 22:25:13.045383 10879 ssh_runner.go:149] Run: which lz4
I0816 22:25:13.050542 10879 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I0816 22:25:13.055627 10879 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/preloaded.tar.lz4': No such file or directory
I0816 22:25:13.055661 10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (945588089 bytes)
*
* ==> container status <==
* CONTAINER     IMAGE           CREATED              STATE     NAME                      ATTEMPT   POD ID
f04c445038901   6e38f40d628db   2 seconds ago        Running   storage-provisioner       0         0e10d9862204b
e70dd80568a0a   296a6d5035e2d   13 seconds ago       Running   coredns                   1         c649190b7c07d
2585772c8a261   adb2816ea823a   14 seconds ago       Running   kube-proxy                2         d73b4cafe25f0
53780b2759956   3d174f00aa39e   21 seconds ago       Running   kube-apiserver            2         fb9f201b2c2e1
76fef890edebe   6be0dc1302e30   21 seconds ago       Running   kube-scheduler            2         1718d2a0276ce
69a7fab4848c4   0369cf4303ffd   21 seconds ago       Running   etcd                      2         3b9459ff3a0d8
825e79d62718c   bc2bb319a7038   22 seconds ago       Running   kube-controller-manager   2         feab707eb735a
7626b842ef886   3d174f00aa39e   22 seconds ago       Created   kube-apiserver            1         fb9f201b2c2e1
9d9f34b35e099   adb2816ea823a   22 seconds ago       Created   kube-proxy                1         d73b4cafe25f0
97c4cc3614116   6be0dc1302e30   22 seconds ago       Created   kube-scheduler            1         1718d2a0276ce
3644e35e40a2f   0369cf4303ffd   22 seconds ago       Created   etcd                      1         3b9459ff3a0d8
8c5f2c007cff4   bc2bb319a7038   26 seconds ago       Created   kube-controller-manager   1         feab707eb735a
28c7161cd49a4   296a6d5035e2d   About a minute ago   Exited    coredns                   0         05c2427240818
a8503bd796d5d   adb2816ea823a   About a minute ago   Exited    kube-proxy                0         a86c3b6ee3a70
124fa393359f7   0369cf4303ffd   2 minutes ago        Exited    etcd                      0         94a493a65b593
8710cefecdbe5   6be0dc1302e30   2 minutes ago        Exited    kube-scheduler            0         982e66890a90d
38dc61b214a9c   3d174f00aa39e   2 minutes ago        Exited    kube-apiserver            0         630ed9d4644e9
*
* ==> containerd <==
* -- Logs begin at Mon 2021-08-16 22:22:35 UTC, end at Mon 2021-08-16 22:25:15 UTC. --
Aug 16 22:24:53 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:24:53.606374155Z" level=info msg="StartContainer for \"69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549\" returns successfully"
Aug 16 22:24:53 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:24:53.687984942Z" level=info msg="StartContainer for \"76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1\" returns successfully"
Aug 16 22:24:59 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:24:59.121993631Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.146522428Z" level=info msg="CreateContainer within sandbox \"d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:2,}"
Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.231610260Z" level=info msg="CreateContainer within sandbox \"d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8\" for &ContainerMetadata{Name:kube-proxy,Attempt:2,} returns container id \"2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2\""
Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.233198734Z" level=info msg="StartContainer for \"2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2\""
Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.443953465Z" level=info msg="StopPodSandbox for \"05c24272408181b9c89f41ac96a6fc411fd43bae5540d12b31e720843bc7e126\""
Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.444769081Z" level=info msg="Container to stop \"28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.461877463Z" level=info msg="StartContainer for \"2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2\" returns successfully"
Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.536079353Z" level=info msg="TearDown network for sandbox \"05c24272408181b9c89f41ac96a6fc411fd43bae5540d12b31e720843bc7e126\" successfully"
Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.536191167Z" level=info msg="StopPodSandbox for \"05c24272408181b9c89f41ac96a6fc411fd43bae5540d12b31e720843bc7e126\" returns successfully"
Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.536962082Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-558bd4d5db-gkxhz,Uid:5aa76749-775e-423d-bbf9-680a20a27051,Namespace:kube-system,Attempt:1,}"
Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.776744568Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb pid=5007
Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.290447333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-558bd4d5db-gkxhz,Uid:5aa76749-775e-423d-bbf9-680a20a27051,Namespace:kube-system,Attempt:1,} returns sandbox id \"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb\""
Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.300600113Z" level=info msg="CreateContainer within sandbox \"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.389478760Z" level=info msg="CreateContainer within sandbox \"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a\""
Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.397162604Z" level=info msg="StartContainer for \"e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a\""
Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.594046909Z" level=info msg="StartContainer for \"e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a\" returns successfully"
Aug 16 22:25:11 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:11.852957632Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:4f138dc7-da0e-4775-b4de-b0f7d616b212,Namespace:kube-system,Attempt:0,}"
Aug 16 22:25:11 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:11.903771908Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c pid=5174
Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.439549893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:4f138dc7-da0e-4775-b4de-b0f7d616b212,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c\""
Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.451930506Z" level=info msg="CreateContainer within sandbox \"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.521875733Z" level=info msg="CreateContainer within sandbox \"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\""
Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.523292924Z" level=info msg="StartContainer for \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\""
Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.851898064Z" level=info msg="StartContainer for \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\" returns successfully"
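The containerd entries above trace the CRI call sequence the kubelet drives when it recreates the coredns and storage-provisioner pods after the restart: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, and StartContainer runs the result. Below is a minimal sketch of the same three calls against the CRI socket named in the node annotation further down, using the k8s.io/cri-api client; the pod name, uid, and image are illustrative, and the configs are far sparser than what the kubelet actually sends (log paths, cgroups, and security settings are omitted).

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func main() {
	// Dial the CRI endpoint (cf. the kubeadm.alpha.kubernetes.io/cri-socket annotation below).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Heavily simplified sandbox config; illustrative names, not the test's pod.
	sbConfig := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "example-pod",
			Namespace: "kube-system",
			Uid:       "00000000-0000-0000-0000-000000000000",
			Attempt:   1,
		},
	}

	// 1. RunPodSandbox -> sandbox id (cf. `returns sandbox id "c649..."` above).
	sbResp, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sbConfig})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer within that sandbox -> container id.
	cResp, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sbResp.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "coredns", Attempt: 1},
			Image:    &runtimeapi.ImageSpec{Image: "k8s.gcr.io/coredns/coredns:v1.8.0"},
		},
		SandboxConfig: sbConfig,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer completes the sequence logged above.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cResp.ContainerId}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("started", cResp.ContainerId)
}
```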
*
* ==> coredns [28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0] <==
* [INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.8.0
linux/amd64, go1.15.3, 054c9ae
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] Reloading
[INFO] plugin/health: Going into lameduck mode for 5s
I0816 22:24:19.170128 1 trace.go:205] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 22:23:49.168) (total time: 30001ms):
Trace[2019727887]: [30.001909435s] [30.001909435s] END
E0816 22:24:19.170279 1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0816 22:24:19.171047 1 trace.go:205] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 22:23:49.166) (total time: 30004ms):
Trace[939984059]: [30.004733433s] [30.004733433s] END
E0816 22:24:19.171149 1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0816 22:24:19.171258 1 trace.go:205] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 22:23:49.166) (total time: 30004ms):
Trace[911902081]: [30.004945736s] [30.004945736s] END
E0816 22:24:19.171265 1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
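All three reflector failures share one cause: coredns's kubernetes plugin lists Endpoints, Namespaces, and Services through a client-go reflector (v0.19.2, per the module paths above), and each list against the service VIP https://10.96.0.1:443 hit the 30-second i/o timeout while the apiserver was being restarted. A minimal sketch of the same probe, assuming in-cluster credentials and the context-taking List signature client-go has had since v0.18:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config resolves the apiserver to the 10.96.0.1:443 VIP seen above.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Mirror the reflector's initial list: limit=500, bounded by the same
	// 30s window the trace reports before giving up.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	start := time.Now()
	_, err = client.CoreV1().Endpoints(metav1.NamespaceAll).List(ctx, metav1.ListOptions{Limit: 500})
	fmt.Printf("list endpoints: err=%v after %s\n", err, time.Since(start))
}
```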
*
* ==> coredns [e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = 7ae91e86dd75dee9ae501cb58003198b
CoreDNS-1.8.0
linux/amd64, go1.15.3, 054c9ae
*
* ==> describe nodes <==
* Name: pause-20210816222224-6986
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-20210816222224-6986
kubernetes.io/os=linux
minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
minikube.k8s.io/name=pause-20210816222224-6986
minikube.k8s.io/updated_at=2021_08_16T22_23_26_0700
minikube.k8s.io/version=v1.22.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 16 Aug 2021 22:23:23 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-20210816222224-6986
AcquireTime: <unset>
RenewTime: Mon, 16 Aug 2021 22:25:09 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 16 Aug 2021 22:24:59 +0000 Mon, 16 Aug 2021 22:23:18 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 16 Aug 2021 22:24:59 +0000 Mon, 16 Aug 2021 22:23:18 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 16 Aug 2021 22:24:59 +0000 Mon, 16 Aug 2021 22:23:18 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 16 Aug 2021 22:24:59 +0000 Mon, 16 Aug 2021 22:23:42 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.50.226
Hostname: pause-20210816222224-6986
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2033044Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2033044Ki
pods: 110
System Info:
Machine ID: 940ad300f94c41e2a0b0cde81be11541
System UUID: 940ad300-f94c-41e2-a0b0-cde81be11541
Boot ID: ea001a4b-e783-4f93-b7d3-bb910eb45d3c
Kernel Version: 4.19.182
OS Image: Buildroot 2020.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.4.9
Kubelet Version: v1.21.3
Kube-Proxy Version: v1.21.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-558bd4d5db-gkxhz 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 89s
kube-system etcd-pause-20210816222224-6986 100m (5%) 0 (0%) 100Mi (5%) 0 (0%) 111s
kube-system kube-apiserver-pause-20210816222224-6986 250m (12%) 0 (0%) 0 (0%) 0 (0%) 109s
kube-system kube-controller-manager-pause-20210816222224-6986 200m (10%) 0 (0%) 0 (0%) 0 (0%) 103s
kube-system kube-proxy-7l59t 0 (0%) 0 (0%) 0 (0%) 0 (0%) 89s
kube-system kube-scheduler-pause-20210816222224-6986 100m (5%) 0 (0%) 0 (0%) 0 (0%) 103s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 170Mi (8%) 170Mi (8%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 2m3s (x6 over 2m4s) kubelet Node pause-20210816222224-6986 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m3s (x5 over 2m4s) kubelet Node pause-20210816222224-6986 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m3s (x5 over 2m4s) kubelet Node pause-20210816222224-6986 status is now: NodeHasSufficientPID
Normal Starting 103s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 103s kubelet Node pause-20210816222224-6986 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 103s kubelet Node pause-20210816222224-6986 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 103s kubelet Node pause-20210816222224-6986 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 103s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 93s kubelet Node pause-20210816222224-6986 status is now: NodeReady
Normal Starting 86s kube-proxy Starting kube-proxy.
Normal Starting 24s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 24s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 23s (x8 over 24s) kubelet Node pause-20210816222224-6986 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 23s (x8 over 24s) kubelet Node pause-20210816222224-6986 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 23s (x7 over 24s) kubelet Node pause-20210816222224-6986 status is now: NodeHasSufficientPID
Normal Starting 15s kube-proxy Starting kube-proxy.
*
* ==> dmesg <==
* [ +3.181431] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
[ +0.036573] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +0.985023] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1731 comm=systemd-network
[ +1.088197] vboxguest: loading out-of-tree module taints kernel.
[ +0.006251] vboxguest: PCI device not found, probably running on physical hardware.
[ +1.889854] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +16.286436] systemd-fstab-generator[2098]: Ignoring "noauto" for root device
[ +0.258185] systemd-fstab-generator[2128]: Ignoring "noauto" for root device
[ +0.135377] systemd-fstab-generator[2143]: Ignoring "noauto" for root device
[ +0.180446] systemd-fstab-generator[2173]: Ignoring "noauto" for root device
[Aug16 22:23] systemd-fstab-generator[2381]: Ignoring "noauto" for root device
[ +20.504547] systemd-fstab-generator[2808]: Ignoring "noauto" for root device
[ +20.717915] kauditd_printk_skb: 38 callbacks suppressed
[ +5.551219] kauditd_printk_skb: 104 callbacks suppressed
[Aug16 22:24] kauditd_printk_skb: 2 callbacks suppressed
[ +6.792051] systemd-fstab-generator[3754]: Ignoring "noauto" for root device
[ +0.176916] systemd-fstab-generator[3767]: Ignoring "noauto" for root device
[ +0.230657] systemd-fstab-generator[3792]: Ignoring "noauto" for root device
[ +4.083098] kauditd_printk_skb: 2 callbacks suppressed
[ +3.840195] NFSD: Unable to end grace period: -110
[ +4.324119] systemd-fstab-generator[4543]: Ignoring "noauto" for root device
[ +6.680726] kauditd_printk_skb: 29 callbacks suppressed
[Aug16 22:25] kauditd_printk_skb: 14 callbacks suppressed
[ +12.641213] kauditd_printk_skb: 23 callbacks suppressed
*
* ==> etcd [124fa393359f758ea47161b345d2cab4ce486d4473a4caad483449464d44315f] <==
* 2021-08-16 22:23:41.064197 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-pause-20210816222224-6986\" " with result "range_response_count:1 size:5052" took too long (6.421187445s) to execute
2021-08-16 22:23:41.065847 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:341" took too long (6.446897155s) to execute
2021-08-16 22:23:41.066285 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/node-controller\" " with result "range_response_count:1 size:242" took too long (5.09674902s) to execute
2021-08-16 22:23:41.068005 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (6.28196539s) to execute
2021-08-16 22:23:41.068259 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (763.710719ms) to execute
2021-08-16 22:23:41.880435 W | etcdserver: read-only range request "key:\"/registry/masterleases/192.168.50.226\" " with result "range_response_count:0 size:5" took too long (776.335267ms) to execute
2021-08-16 22:23:41.881080 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (597.366064ms) to execute
2021-08-16 22:23:41.882354 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:4569" took too long (763.841142ms) to execute
2021-08-16 22:23:41.883287 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (621.677263ms) to execute
2021-08-16 22:23:41.884722 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (481.499599ms) to execute
2021-08-16 22:23:41.885189 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-pause-20210816222224-6986\" " with result "range_response_count:1 size:5421" took too long (772.180278ms) to execute
2021-08-16 22:23:42.453217 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:1 size:212" took too long (290.061418ms) to execute
2021-08-16 22:23:42.455427 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:1 size:212" took too long (285.893643ms) to execute
2021-08-16 22:23:42.456943 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-pause-20210816222224-6986\" " with result "range_response_count:1 size:6314" took too long (153.946258ms) to execute
2021-08-16 22:23:42.458024 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (177.825431ms) to execute
2021-08-16 22:23:44.267832 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-16 22:23:54.092150 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (701.802797ms) to execute
2021-08-16 22:23:54.093518 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-ctgf5\" " with result "range_response_count:1 size:4473" took too long (1.090386256s) to execute
2021-08-16 22:23:54.267392 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-16 22:23:57.768234 W | etcdserver: request "header:<ID:4263355585347158035 > lease_revoke:<id:3b2a7b510fcb7e67>" with result "size:29" took too long (771.90226ms) to execute
2021-08-16 22:23:57.768903 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (374.444829ms) to execute
2021-08-16 22:23:57.769379 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-ctgf5\" " with result "range_response_count:1 size:4473" took too long (765.115046ms) to execute
2021-08-16 22:24:04.267548 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-16 22:24:14.267958 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-16 22:24:24.268321 I | etcdserver/api/etcdhttp: /health OK (status code 200)
*
* ==> etcd [3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423] <==
*
* ==> etcd [69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549] <==
* raft2021/08/16 22:24:53 INFO: newRaft e840193bf29c3b2a [peers: [], term: 2, commit: 515, applied: 0, lastindex: 515, lastterm: 2]
2021-08-16 22:24:53.773065 W | auth: simple token is not cryptographically signed
2021-08-16 22:24:53.837118 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
raft2021/08/16 22:24:53 INFO: e840193bf29c3b2a switched to configuration voters=(16735403960572853034)
2021-08-16 22:24:53.849298 I | etcdserver/membership: added member e840193bf29c3b2a [https://192.168.50.226:2380] to cluster 99b90e1bea73c730
2021-08-16 22:24:53.860198 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2021-08-16 22:24:53.864997 I | embed: listening for metrics on http://127.0.0.1:2381
2021-08-16 22:24:53.865214 I | embed: listening for peers on 192.168.50.226:2380
2021-08-16 22:24:53.868083 N | etcdserver/membership: set the initial cluster version to 3.4
2021-08-16 22:24:53.871735 I | etcdserver/api: enabled capabilities for version 3.4
raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a is starting a new election at term 2
raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a became candidate at term 3
raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a received MsgVoteResp from e840193bf29c3b2a at term 3
raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a became leader at term 3
raft2021/08/16 22:24:54 INFO: raft.node: e840193bf29c3b2a elected leader e840193bf29c3b2a at term 3
2021-08-16 22:24:54.968820 I | embed: ready to serve client requests
2021-08-16 22:24:54.969394 I | etcdserver: published {Name:pause-20210816222224-6986 ClientURLs:[https://192.168.50.226:2379]} to cluster 99b90e1bea73c730
2021-08-16 22:24:54.971284 I | embed: serving client requests on 192.168.50.226:2379
2021-08-16 22:24:54.971462 I | embed: ready to serve client requests
2021-08-16 22:24:54.973508 I | embed: serving client requests on 127.0.0.1:2379
2021-08-16 22:25:03.067902 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-gkxhz\" " with result "range_response_count:1 size:4860" took too long (140.807991ms) to execute
2021-08-16 22:25:06.747736 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-16 22:25:08.138740 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-16 22:25:10.645123 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/coredns\" " with result "range_response_count:1 size:3838" took too long (108.124514ms) to execute
2021-08-16 22:25:10.645989 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:665" took too long (107.967343ms) to execute
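The first etcd instance logged range requests that took multiple seconds ("took too long ... to execute"), and even the freshly elected instance above still flags two ~100ms reads; etcd emits this warning whenever a read exceeds its expected latency, which is the usual signature of slow disk or an overloaded host (the load average in the kernel section below is 3.55 on 2 CPUs). A minimal sketch that times the same kind of read with the etcd 3.4 client, against the endpoint from the log; the healthcheck-client cert pair is an assumption based on the kubeadm-style layout minikube uses:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
	"go.etcd.io/etcd/pkg/transport"
)

func main() {
	// Assumed client cert layout (cf. the "embed: ClientTLS" line above).
	tlsInfo := transport.TLSInfo{
		CertFile:      "/var/lib/minikube/certs/etcd/healthcheck-client.crt", // assumption
		KeyFile:       "/var/lib/minikube/certs/etcd/healthcheck-client.key", // assumption
		TrustedCAFile: "/var/lib/minikube/certs/etcd/ca.crt",
	}
	tlsCfg, err := tlsInfo.ClientConfig()
	if err != nil {
		log.Fatal(err)
	}
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://192.168.50.226:2379"},
		DialTimeout: 5 * time.Second,
		TLS:         tlsCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Time a read-only range request like the ones the warnings describe.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	start := time.Now()
	resp, err := cli.Get(ctx, "/registry/health")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("range /registry/health: kvs=%d, took %s\n", len(resp.Kvs), time.Since(start))
}
```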
*
* ==> kernel <==
* 22:25:15 up 2 min, 0 users, load average: 3.55, 1.59, 0.61
Linux pause-20210816222224-6986 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2020.02.12"
*
* ==> kube-apiserver [38dc61b214a9cbd019de4ca9ab52fb6baf728336de6d715df22b027522ad8b20] <==
* I0816 22:23:41.890272 1 trace.go:205] Trace[914939944]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (16-Aug-2021 22:23:41.117) (total time: 772ms):
Trace[914939944]: [772.29448ms] [772.29448ms] END
I0816 22:23:41.897880 1 trace.go:205] Trace[372773048]: "List" url:/api/v1/nodes,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/system:serviceaccount:kube-system:node-controller,client:192.168.50.226,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (16-Aug-2021 22:23:41.117) (total time: 780ms):
Trace[372773048]: ---"Listing from storage done" 773ms (22:23:00.891)
Trace[372773048]: [780.024685ms] [780.024685ms] END
I0816 22:23:41.899245 1 trace.go:205] Trace[189474875]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-20210816222224-6986,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.50.226,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (16-Aug-2021 22:23:41.107) (total time: 791ms):
Trace[189474875]: ---"About to write a response" 791ms (22:23:00.899)
Trace[189474875]: [791.769473ms] [791.769473ms] END
I0816 22:23:41.914143 1 trace.go:205] Trace[1803257945]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (16-Aug-2021 22:23:41.101) (total time: 812ms):
Trace[1803257945]: ---"initial value restored" 795ms (22:23:00.897)
Trace[1803257945]: [812.099383ms] [812.099383ms] END
I0816 22:23:46.219827 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
I0816 22:23:46.322056 1 controller.go:611] quota admission added evaluator for: replicasets.apps
I0816 22:23:54.101003 1 trace.go:205] Trace[1429856954]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-ctgf5,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.50.1,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 22:23:53.002) (total time: 1098ms):
Trace[1429856954]: ---"About to write a response" 1098ms (22:23:00.100)
Trace[1429856954]: [1.0988209s] [1.0988209s] END
I0816 22:23:56.194218 1 client.go:360] parsed scheme: "passthrough"
I0816 22:23:56.194943 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0816 22:23:56.195388 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0816 22:23:57.770900 1 trace.go:205] Trace[2103117378]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-ctgf5,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.50.1,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 22:23:57.002) (total time: 767ms):
Trace[2103117378]: ---"About to write a response" 767ms (22:23:00.770)
Trace[2103117378]: [767.944134ms] [767.944134ms] END
I0816 22:24:32.818404 1 client.go:360] parsed scheme: "passthrough"
I0816 22:24:32.818597 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0816 22:24:32.818691 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
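The Trace[...] blocks above come from the k8s.io/utils/trace helper (the trace.go in the file:line prefixes): a trace accumulates named steps and is written only when the total time crosses a threshold, which is why only the slow requests (767ms-1.09s) surface in this log. A minimal sketch of the pattern; the trace name, URL field, threshold, and sleep are illustrative:

```go
package main

import (
	"time"

	utiltrace "k8s.io/utils/trace"
)

func handleGet() {
	// The trace is logged on return only if the handler exceeded the
	// threshold, mirroring the apiserver entries above.
	t := utiltrace.New("Get", utiltrace.Field{Key: "url", Value: "/api/v1/namespaces/kube-system/pods/example"})
	defer t.LogIfLong(500 * time.Millisecond)

	time.Sleep(800 * time.Millisecond) // stand-in for slow storage work
	t.Step("About to write a response")
}

func main() { handleGet() }
```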
*
* ==> kube-apiserver [53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf] <==
* I0816 22:24:59.052878 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0816 22:24:59.052897 1 crd_finalizer.go:266] Starting CRDFinalizer
I0816 22:24:59.071128 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0816 22:24:59.071704 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0816 22:24:59.072328 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0816 22:24:59.072872 1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I0816 22:24:59.173327 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I0816 22:24:59.176720 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
E0816 22:24:59.181278 1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I0816 22:24:59.206356 1 shared_informer.go:247] Caches are synced for node_authorizer
I0816 22:24:59.225165 1 cache.go:39] Caches are synced for autoregister controller
I0816 22:24:59.227741 1 apf_controller.go:299] Running API Priority and Fairness config worker
I0816 22:24:59.230223 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0816 22:24:59.244026 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0816 22:24:59.248943 1 shared_informer.go:247] Caches are synced for crd-autoregister
I0816 22:25:00.021310 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0816 22:25:00.022052 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0816 22:25:00.034218 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0816 22:25:01.108795 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0816 22:25:01.182177 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0816 22:25:01.279321 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0816 22:25:01.344553 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0816 22:25:01.382891 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0816 22:25:11.471022 1 controller.go:611] quota admission added evaluator for: endpoints
I0816 22:25:13.002505 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
*
* ==> kube-apiserver [7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612] <==
*
* ==> kube-controller-manager [825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7] <==
* I0816 22:25:12.900492 1 shared_informer.go:247] Caches are synced for GC
I0816 22:25:12.900735 1 shared_informer.go:247] Caches are synced for job
I0816 22:25:12.908539 1 shared_informer.go:247] Caches are synced for ReplicaSet
I0816 22:25:12.910182 1 shared_informer.go:247] Caches are synced for persistent volume
I0816 22:25:12.925990 1 shared_informer.go:247] Caches are synced for stateful set
I0816 22:25:12.926195 1 shared_informer.go:247] Caches are synced for HPA
I0816 22:25:12.931999 1 shared_informer.go:247] Caches are synced for attach detach
I0816 22:25:12.933971 1 shared_informer.go:247] Caches are synced for PVC protection
I0816 22:25:12.934151 1 shared_informer.go:247] Caches are synced for deployment
I0816 22:25:12.943776 1 shared_informer.go:247] Caches are synced for ephemeral
I0816 22:25:12.963727 1 shared_informer.go:247] Caches are synced for endpoint_slice
I0816 22:25:12.969209 1 shared_informer.go:247] Caches are synced for taint
I0816 22:25:12.969381 1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone:
W0816 22:25:12.969524 1 node_lifecycle_controller.go:1013] Missing timestamp for Node pause-20210816222224-6986. Assuming now as a timestamp.
I0816 22:25:12.969564 1 node_lifecycle_controller.go:1214] Controller detected that zone is now in state Normal.
I0816 22:25:12.970457 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
I0816 22:25:12.970831 1 event.go:291] "Event occurred" object="pause-20210816222224-6986" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20210816222224-6986 event: Registered Node pause-20210816222224-6986 in Controller"
I0816 22:25:12.974749 1 shared_informer.go:247] Caches are synced for endpoint
I0816 22:25:13.000548 1 shared_informer.go:247] Caches are synced for disruption
I0816 22:25:13.000739 1 disruption.go:371] Sending events to api server.
I0816 22:25:13.004608 1 shared_informer.go:247] Caches are synced for resource quota
I0816 22:25:13.016848 1 shared_informer.go:247] Caches are synced for resource quota
I0816 22:25:13.386564 1 shared_informer.go:247] Caches are synced for garbage collector
I0816 22:25:13.386597 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0816 22:25:13.440139 1 shared_informer.go:247] Caches are synced for garbage collector
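Every "Caches are synced for ..." line above is a client-go shared informer completing its initial list before the owning controller starts reconciling; the restarted controller-manager replays the whole set in a burst at 22:25:12-13. A minimal sketch of that start-then-wait pattern; the kubeconfig path and the choice of Pods/Nodes informers are illustrative, not minikube's code:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	podsSynced := factory.Core().V1().Pods().Informer().HasSynced
	nodesSynced := factory.Core().V1().Nodes().Informer().HasSynced

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Block until the initial lists land, like the "Waiting for caches to
	// sync" / "Caches are synced" pairs in the logs above.
	if !cache.WaitForCacheSync(stop, podsSynced, nodesSynced) {
		panic("caches never synced")
	}
	fmt.Println("caches are synced; controllers may start reconciling")
}
```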
*
* ==> kube-controller-manager [8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4] <==
*
* ==> kube-proxy [2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2] <==
* I0816 22:25:00.641886 1 node.go:172] Successfully retrieved node IP: 192.168.50.226
I0816 22:25:00.641938 1 server_others.go:140] Detected node IP 192.168.50.226
W0816 22:25:00.642012 1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
W0816 22:25:00.805515 1 server_others.go:197] No iptables support for IPv6: exit status 3
I0816 22:25:00.805539 1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
I0816 22:25:00.805560 1 server_others.go:212] Using iptables Proxier.
I0816 22:25:00.806059 1 server.go:643] Version: v1.21.3
I0816 22:25:00.807251 1 config.go:315] Starting service config controller
I0816 22:25:00.807281 1 shared_informer.go:240] Waiting for caches to sync for service config
I0816 22:25:00.807307 1 config.go:224] Starting endpoint slice config controller
I0816 22:25:00.807313 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
W0816 22:25:00.812511 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0816 22:25:00.816722 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
I0816 22:25:00.907844 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0816 22:25:00.907906 1 shared_informer.go:247] Caches are synced for service config
*
* ==> kube-proxy [9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d] <==
*
* ==> kube-proxy [a8503bd796d5d979a6e1b8b5154986e8b77de391b4f091211451ea5f52808e52] <==
* I0816 22:23:49.316430 1 node.go:172] Successfully retrieved node IP: 192.168.50.226
I0816 22:23:49.316608 1 server_others.go:140] Detected node IP 192.168.50.226
W0816 22:23:49.316822 1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
W0816 22:23:49.402698 1 server_others.go:197] No iptables support for IPv6: exit status 3
I0816 22:23:49.403462 1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
I0816 22:23:49.404047 1 server_others.go:212] Using iptables Proxier.
I0816 22:23:49.407950 1 server.go:643] Version: v1.21.3
I0816 22:23:49.410864 1 config.go:315] Starting service config controller
I0816 22:23:49.413112 1 config.go:224] Starting endpoint slice config controller
I0816 22:23:49.419474 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
W0816 22:23:49.421254 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
I0816 22:23:49.413718 1 shared_informer.go:240] Waiting for caches to sync for service config
I0816 22:23:49.425958 1 shared_informer.go:247] Caches are synced for service config
W0816 22:23:49.425586 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
I0816 22:23:49.520425 1 shared_informer.go:247] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1] <==
* I0816 22:24:54.634243 1 serving.go:347] Generated self-signed cert in-memory
W0816 22:24:59.095457 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0816 22:24:59.098028 1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0816 22:24:59.098491 1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
W0816 22:24:59.098734 1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0816 22:24:59.166481 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0816 22:24:59.178395 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0816 22:24:59.177851 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0816 22:24:59.194249 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0816 22:24:59.304036 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [8710cefecdbe5d31cd44e9ae3378bc08cbc56001326a1cb38026755196cac7d1] <==
* E0816 22:23:21.172468 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0816 22:23:21.189536 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0816 22:23:21.300836 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0816 22:23:21.329219 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0816 22:23:21.448607 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0816 22:23:21.504104 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0816 22:23:21.504531 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0816 22:23:21.597849 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0816 22:23:21.612843 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0816 22:23:21.671333 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0816 22:23:21.827198 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0816 22:23:21.852843 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0816 22:23:21.867015 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0816 22:23:21.910139 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0816 22:23:23.291774 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0816 22:23:23.356078 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0816 22:23:23.452841 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0816 22:23:23.464942 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0816 22:23:23.644764 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0816 22:23:23.649142 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0816 22:23:23.710606 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0816 22:23:23.980099 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0816 22:23:24.052112 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0816 22:23:24.168543 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0816 22:23:30.043826 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
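The burst of "forbidden" errors above is transient: the scheduler's informers start listing before the apiserver has finished bootstrapping the default RBAC bindings for system:kube-scheduler, and the errors stop once its caches sync at 22:23:30. The same permissions can be checked from code with a SelfSubjectAccessReview, the API behind kubectl auth can-i; a minimal sketch, assuming a kubeconfig that carries the identity you want to test:

```go
package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// "Can the current identity list pods cluster-wide?" — the exact check
	// that fails for system:kube-scheduler in the log above.
	review := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{Verb: "list", Resource: "pods"},
		},
	}
	resp, err := client.AuthorizationV1().SelfSubjectAccessReviews().Create(
		context.Background(), review, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}
```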
*
* ==> kube-scheduler [97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a] <==
*
* ==> kubelet <==
* -- Logs begin at Mon 2021-08-16 22:22:35 UTC, end at Mon 2021-08-16 22:25:16 UTC. --
Aug 16 22:24:58 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:58.514985 4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
Aug 16 22:24:58 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:58.616076 4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
Aug 16 22:24:58 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:58.718006 4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
Aug 16 22:24:58 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:58.819104 4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
Aug 16 22:24:58 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:58.919357 4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:59.020392 4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.121233 4551 kuberuntime_manager.go:1044] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.122462 4551 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.228577 4551 kubelet_node_status.go:109] "Node was previously registered" node="pause-20210816222224-6986"
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.228853 4551 kubelet_node_status.go:74] "Successfully registered node" node="pause-20210816222224-6986"
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.536346 4551 apiserver.go:52] "Watching apiserver"
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.540959 4551 topology_manager.go:187] "Topology Admit Handler"
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.541581 4551 topology_manager.go:187] "Topology Admit Handler"
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.609734 4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3c0e0899-31c1-477a-a6d4-2844091deea2-kube-proxy\") pod \"kube-proxy-7l59t\" (UID: \"3c0e0899-31c1-477a-a6d4-2844091deea2\") "
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.610130 4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c0e0899-31c1-477a-a6d4-2844091deea2-xtables-lock\") pod \"kube-proxy-7l59t\" (UID: \"3c0e0899-31c1-477a-a6d4-2844091deea2\") "
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.610271 4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c0e0899-31c1-477a-a6d4-2844091deea2-lib-modules\") pod \"kube-proxy-7l59t\" (UID: \"3c0e0899-31c1-477a-a6d4-2844091deea2\") "
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.610503 4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2grh\" (UniqueName: \"kubernetes.io/projected/3c0e0899-31c1-477a-a6d4-2844091deea2-kube-api-access-b2grh\") pod \"kube-proxy-7l59t\" (UID: \"3c0e0899-31c1-477a-a6d4-2844091deea2\") "
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.711424 4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgpd2\" (UniqueName: \"kubernetes.io/projected/5aa76749-775e-423d-bbf9-680a20a27051-kube-api-access-rgpd2\") pod \"coredns-558bd4d5db-gkxhz\" (UID: \"5aa76749-775e-423d-bbf9-680a20a27051\") "
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.712578 4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5aa76749-775e-423d-bbf9-680a20a27051-config-volume\") pod \"coredns-558bd4d5db-gkxhz\" (UID: \"5aa76749-775e-423d-bbf9-680a20a27051\") "
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.713123 4551 reconciler.go:157] "Reconciler: start to sync state"
Aug 16 22:25:00 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:00.142816 4551 scope.go:111] "RemoveContainer" containerID="9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d"
Aug 16 22:25:03 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:03.115940 4551 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
Aug 16 22:25:11 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:11.548694 4551 topology_manager.go:187] "Topology Admit Handler"
Aug 16 22:25:11 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:11.620746 4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4f138dc7-da0e-4775-b4de-b0f7d616b212-tmp\") pod \"storage-provisioner\" (UID: \"4f138dc7-da0e-4775-b4de-b0f7d616b212\") "
Aug 16 22:25:11 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:11.621027 4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7pzn\" (UniqueName: \"kubernetes.io/projected/4f138dc7-da0e-4775-b4de-b0f7d616b212-kube-api-access-n7pzn\") pod \"storage-provisioner\" (UID: \"4f138dc7-da0e-4775-b4de-b0f7d616b212\") "
*
* ==> storage-provisioner [f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a] <==
* I0816 22:25:12.920503 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0816 22:25:12.958814 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0816 22:25:12.959432 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
E0816 22:25:13.028463 1 leaderelection.go:361] Failed to update lock: Operation cannot be fulfilled on endpoints "k8s.io-minikube-hostpath": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/kube-system/k8s.io-minikube-hostpath, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 3f27bbad-30a1-4386-9d09-80525f79ada9, UID in object meta:
I0816 22:25:16.530709 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0816 22:25:16.540393 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20210816222224-6986_409bd634-6095-4f9a-ab3f-09a5e699e184!
I0816 22:25:16.544131 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"32edeef2-57a3-43b1-a3d9-e7ecc2ed1a14", APIVersion:"v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20210816222224-6986_409bd634-6095-4f9a-ab3f-09a5e699e184 became leader
I0816 22:25:16.647143 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20210816222224-6986_409bd634-6095-4f9a-ab3f-09a5e699e184!
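The storage-provisioner lines show client-go leader election over an Endpoints lock: the first update fails with a UID precondition conflict because the k8s.io-minikube-hostpath Endpoints object changed identity across the restart, and the lease is acquired cleanly three seconds later. A minimal sketch of the same pattern with client-go's leaderelection package; the identity string and timing constants are illustrative:

```go
package main

import (
	"context"
	"log"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Endpoints-based lock matching the object named in the error above.
	lock, err := resourcelock.New(resourcelock.EndpointsResourceLock,
		"kube-system", "k8s.io-minikube-hostpath",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: "example-identity"})
	if err != nil {
		log.Fatal(err)
	}

	// Blocks, retrying until the lease is acquired, then runs the callback —
	// the leaderelection.go:243/253 pair in the log above.
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("acquired lease; starting provisioner controller")
			},
			OnStoppedLeading: func() { log.Println("lost lease") },
		},
	})
}
```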
-- /stdout --
** stderr **
E0816 22:25:15.726460 11246 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423": Process exited with status 1
stdout:
stderr:
time="2021-08-16T22:25:15Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log\": lstat /var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log: no such file or directory"
output: "\n** stderr ** \ntime=\"2021-08-16T22:25:15Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log\\\": lstat /var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log: no such file or directory\"\n\n** /stderr **"
E0816 22:25:16.028111 11246 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612": Process exited with status 1
stdout:
stderr:
time="2021-08-16T22:25:16Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log\": lstat /var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log: no such file or directory"
output: "\n** stderr ** \ntime=\"2021-08-16T22:25:16Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log\\\": lstat /var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log: no such file or directory\"\n\n** /stderr **"
E0816 22:25:16.153609 11246 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4": Process exited with status 1
stdout:
stderr:
time="2021-08-16T22:25:16Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log\": lstat /var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log: no such file or directory"
output: "\n** stderr ** \ntime=\"2021-08-16T22:25:16Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log\\\": lstat /var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log: no such file or directory\"\n\n** /stderr **"
E0816 22:25:16.301310 11246 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d": Process exited with status 1
stdout:
stderr:
time="2021-08-16T22:25:16Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log\": lstat /var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log: no such file or directory"
output: "\n** stderr ** \ntime=\"2021-08-16T22:25:16Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log\\\": lstat /var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log: no such file or directory\"\n\n** /stderr **"
E0816 22:25:16.619092 11246 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a": Process exited with status 1
stdout:
stderr:
time="2021-08-16T22:25:16Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log\": lstat /var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log: no such file or directory"
output: "\n** stderr ** \ntime=\"2021-08-16T22:25:16Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log\\\": lstat /var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log: no such file or directory\"\n\n** /stderr **"
! unable to fetch logs for: etcd [3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423], kube-apiserver [7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612], kube-controller-manager [8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4], kube-proxy [9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d], kube-scheduler [97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a]
** /stderr **
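Each of the five crictl failures above has the same shape: the container ids belong to instances that were replaced during the restart, and crictl resolves their logs through /var/log/pods/<namespace>_<pod>_<uid>/<container>/<attempt>.log, a path that no longer exists for the superseded attempt. The log collector treats these as fatal, hence the exit status 110 reported below. A minimal sketch of a tail that skips vanished log files instead of failing; the helper name and example path are illustrative:

```go
package main

import (
	"fmt"
	"os"
)

// tailIfPresent dumps the pod log file if its path still resolves, and
// reports a skip (rather than a fatal error) when the attempt's log is
// gone, as happened for the five superseded containers above.
func tailIfPresent(path string) {
	if _, err := os.Lstat(path); err != nil {
		fmt.Printf("skipping %s: %v\n", path, err)
		return
	}
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Printf("skipping %s: %v\n", path, err)
		return
	}
	os.Stdout.Write(data)
}

func main() {
	// Example path matching the layout in the errors above.
	tailIfPresent("/var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log")
}
```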
helpers_test.go:250: failed logs error: exit status 110
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210816222224-6986 -n pause-20210816222224-6986
helpers_test.go:245: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p pause-20210816222224-6986 logs -n 25
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210816222224-6986 logs -n 25: exit status 110 (2.026956303s)
-- stdout --
*
* ==> Audit <==
* |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
| -p | multinode-20210816215441-6986 stop | multinode-20210816215441-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:08:07 UTC | Mon, 16 Aug 2021 22:11:11 UTC |
| start | -p multinode-20210816215441-6986 --wait=true -v=8 --alsologtostderr --driver=kvm2 --container-runtime=containerd | multinode-20210816215441-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:11:11 UTC | Mon, 16 Aug 2021 22:15:19 UTC |
| start | -p multinode-20210816215441-6986-m03 --driver=kvm2 --container-runtime=containerd | multinode-20210816215441-6986-m03 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:19 UTC | Mon, 16 Aug 2021 22:16:20 UTC |
| delete | -p multinode-20210816215441-6986-m03 | multinode-20210816215441-6986-m03 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:16:20 UTC | Mon, 16 Aug 2021 22:16:21 UTC |
| delete | -p multinode-20210816215441-6986 | multinode-20210816215441-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:16:21 UTC | Mon, 16 Aug 2021 22:16:23 UTC |
| start | -p test-preload-20210816221807-6986 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.17.0 | test-preload-20210816221807-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:18:07 UTC | Mon, 16 Aug 2021 22:19:45 UTC |
| ssh | -p test-preload-20210816221807-6986 -- sudo crictl pull busybox | test-preload-20210816221807-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:45 UTC | Mon, 16 Aug 2021 22:19:47 UTC |
| start | -p test-preload-20210816221807-6986 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.17.3 | test-preload-20210816221807-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:48 UTC | Mon, 16 Aug 2021 22:20:39 UTC |
| ssh | -p test-preload-20210816221807-6986 -- sudo crictl image ls | test-preload-20210816221807-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:39 UTC | Mon, 16 Aug 2021 22:20:39 UTC |
| delete | -p test-preload-20210816221807-6986 | test-preload-20210816221807-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:39 UTC | Mon, 16 Aug 2021 22:20:40 UTC |
| start | -p scheduled-stop-20210816222040-6986 --memory=2048 --driver=kvm2 --container-runtime=containerd | scheduled-stop-20210816222040-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:40 UTC | Mon, 16 Aug 2021 22:21:45 UTC |
| stop | -p scheduled-stop-20210816222040-6986 --cancel-scheduled | scheduled-stop-20210816222040-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:46 UTC | Mon, 16 Aug 2021 22:21:46 UTC |
| stop | -p scheduled-stop-20210816222040-6986 --schedule 5s | scheduled-stop-20210816222040-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:58 UTC | Mon, 16 Aug 2021 22:22:05 UTC |
| delete | -p scheduled-stop-20210816222040-6986 | scheduled-stop-20210816222040-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:23 UTC | Mon, 16 Aug 2021 22:22:24 UTC |
| delete | -p kubenet-20210816222224-6986 | kubenet-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:22:25 UTC |
| delete | -p false-20210816222225-6986 | false-20210816222225-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:25 UTC | Mon, 16 Aug 2021 22:22:25 UTC |
| start | -p force-systemd-env-20210816222224-6986 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 --container-runtime=containerd | force-systemd-env-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:24:04 UTC |
| -p | force-systemd-env-20210816222224-6986 ssh cat /etc/containerd/config.toml | force-systemd-env-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:04 UTC | Mon, 16 Aug 2021 22:24:04 UTC |
| delete | -p force-systemd-env-20210816222224-6986 | force-systemd-env-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:04 UTC | Mon, 16 Aug 2021 22:24:05 UTC |
| start | -p pause-20210816222224-6986 --memory=2048 --install-addons=false --wait=all --driver=kvm2 --container-runtime=containerd | pause-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:24:28 UTC |
| start | -p kubernetes-upgrade-20210816222225-6986 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=containerd | kubernetes-upgrade-20210816222225-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:25 UTC | Mon, 16 Aug 2021 22:24:48 UTC |
| stop | -p kubernetes-upgrade-20210816222225-6986 | kubernetes-upgrade-20210816222225-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:49 UTC | Mon, 16 Aug 2021 22:24:53 UTC |
| start | -p offline-containerd-20210816222224-6986 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 --container-runtime=containerd | offline-containerd-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:25:02 UTC |
| delete | -p offline-containerd-20210816222224-6986 | offline-containerd-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:02 UTC | Mon, 16 Aug 2021 22:25:03 UTC |
| start | -p pause-20210816222224-6986 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=containerd | pause-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:28 UTC | Mon, 16 Aug 2021 22:25:13 UTC |
|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
*
* ==> Last Start <==
* Log file created at: 2021/08/16 22:24:54
Running on machine: debian-jenkins-agent-3
Binary: Built with gc go1.16.7 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
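The header line above documents the standard klog/glog entry layout used for every line that follows. As a convenience, an entry can be split back into its fields mechanically; the parser below is a minimal editorial sketch (not part of minikube), using only the format string the log itself declares:

```go
package main

import (
	"fmt"
	"regexp"
)

// klogLine matches entries of the documented form
//   [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
// e.g. "I0816 22:24:54.079177 10879 out.go:298] Setting OutFile to fd 1 ..."
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	entry := "I0816 22:24:54.079177 10879 out.go:298] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(entry)
	if m == nil {
		fmt.Println("not a klog entry")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s tid=%s source=%s:%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}
```

Note how the thread id column (10879 here, 10732 and 9171 elsewhere) is what lets the interleaved processes below be told apart.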
I0816 22:24:54.079177 10879 out.go:298] Setting OutFile to fd 1 ...
I0816 22:24:54.079273 10879 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0816 22:24:54.079278 10879 out.go:311] Setting ErrFile to fd 2...
I0816 22:24:54.079280 10879 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0816 22:24:54.079426 10879 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
I0816 22:24:54.079721 10879 out.go:305] Setting JSON to false
I0816 22:24:54.187099 10879 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":4056,"bootTime":1629148638,"procs":185,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
I0816 22:24:54.187527 10879 start.go:121] virtualization: kvm guest
I0816 22:24:54.190315 10879 out.go:177] * [kubernetes-upgrade-20210816222225-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
I0816 22:24:54.192235 10879 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
I0816 22:24:54.190469 10879 notify.go:169] Checking for updates...
I0816 22:24:54.193922 10879 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0816 22:24:54.195578 10879 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
I0816 22:24:54.197163 10879 out.go:177] - MINIKUBE_LOCATION=12230
I0816 22:24:54.197582 10879 config.go:177] Loaded profile config "kubernetes-upgrade-20210816222225-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
I0816 22:24:54.197998 10879 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0816 22:24:54.198058 10879 main.go:130] libmachine: Launching plugin server for driver kvm2
I0816 22:24:54.215228 10879 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45677
I0816 22:24:54.215770 10879 main.go:130] libmachine: () Calling .GetVersion
I0816 22:24:54.216328 10879 main.go:130] libmachine: Using API Version 1
I0816 22:24:54.216350 10879 main.go:130] libmachine: () Calling .SetConfigRaw
I0816 22:24:54.216734 10879 main.go:130] libmachine: () Calling .GetMachineName
I0816 22:24:54.216908 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
I0816 22:24:54.217075 10879 driver.go:335] Setting default libvirt URI to qemu:///system
I0816 22:24:54.217475 10879 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0816 22:24:54.217512 10879 main.go:130] libmachine: Launching plugin server for driver kvm2
I0816 22:24:54.229224 10879 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34399
I0816 22:24:54.229593 10879 main.go:130] libmachine: () Calling .GetVersion
I0816 22:24:54.230067 10879 main.go:130] libmachine: Using API Version 1
I0816 22:24:54.230093 10879 main.go:130] libmachine: () Calling .SetConfigRaw
I0816 22:24:54.230460 10879 main.go:130] libmachine: () Calling .GetMachineName
I0816 22:24:54.230643 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
I0816 22:24:54.279869 10879 out.go:177] * Using the kvm2 driver based on existing profile
I0816 22:24:54.279899 10879 start.go:278] selected driver: kvm2
I0816 22:24:54.279906 10879 start.go:751] validating driver "kvm2" against &{Name:kubernetes-upgrade-20210816222225-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20210816222225-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.116.91 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
I0816 22:24:54.280014 10879 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0816 22:24:54.281335 10879 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0816 22:24:54.282098 10879 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0816 22:24:54.294712 10879 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.22.0
I0816 22:24:54.295176 10879 cni.go:93] Creating CNI manager for ""
I0816 22:24:54.295202 10879 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
I0816 22:24:54.295212 10879 start_flags.go:277] config:
{Name:kubernetes-upgrade-20210816222225-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:kubernetes-upgrade-20210816222225-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.116.91 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
I0816 22:24:54.295364 10879 iso.go:123] acquiring lock: {Name:mk4d96b7e9f76537548b4828641f235ae6b81a3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0816 22:24:54.297417 10879 out.go:177] * Starting control plane node kubernetes-upgrade-20210816222225-6986 in cluster kubernetes-upgrade-20210816222225-6986
I0816 22:24:54.297445 10879 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
I0816 22:24:54.297484 10879 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4
I0816 22:24:54.297505 10879 cache.go:56] Caching tarball of preloaded images
I0816 22:24:54.297634 10879 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0816 22:24:54.297656 10879 cache.go:59] Finished verifying existence of preloaded tar for v1.22.0-rc.0 on containerd
I0816 22:24:54.297784 10879 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/config.json ...
I0816 22:24:54.297977 10879 cache.go:205] Successfully downloaded all kic artifacts
I0816 22:24:54.298007 10879 start.go:313] acquiring machines lock for kubernetes-upgrade-20210816222225-6986: {Name:mk808edd60d1305a42bb85791729eff4573dbb15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0816 22:24:54.298081 10879 start.go:317] acquired machines lock for "kubernetes-upgrade-20210816222225-6986" in 55.05µs
I0816 22:24:54.298103 10879 start.go:93] Skipping create...Using existing machine configuration
I0816 22:24:54.298109 10879 fix.go:55] fixHost starting:
I0816 22:24:54.298510 10879 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0816 22:24:54.298561 10879 main.go:130] libmachine: Launching plugin server for driver kvm2
I0816 22:24:54.309226 10879 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33255
I0816 22:24:54.309690 10879 main.go:130] libmachine: () Calling .GetVersion
I0816 22:24:54.310211 10879 main.go:130] libmachine: Using API Version 1
I0816 22:24:54.310242 10879 main.go:130] libmachine: () Calling .SetConfigRaw
I0816 22:24:54.310587 10879 main.go:130] libmachine: () Calling .GetMachineName
I0816 22:24:54.310840 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
I0816 22:24:54.310996 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetState
I0816 22:24:54.314433 10879 fix.go:108] recreateIfNeeded on kubernetes-upgrade-20210816222225-6986: state=Stopped err=<nil>
I0816 22:24:54.314482 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
W0816 22:24:54.314626 10879 fix.go:134] unexpected machine state, will restart: <nil>
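Lock specs like {Delay:500ms Timeout:13m0s} in the start.go:313 entry above describe a retried acquisition: try, sleep Delay, give up after Timeout. A stdlib-only editorial sketch of that pattern, using an exclusive lock file (the path and helper are illustrative; minikube itself uses a dedicated mutex package):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// acquire retries an O_EXCL lock-file creation every delay until timeout,
// mirroring the {Delay:500ms Timeout:13m0s} machines-lock spec in the log.
func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("could not acquire %s within %s", path, timeout)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("lock held; safe to start the machine")
}
```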
I0816 22:24:52.760695 9171 pod_ready.go:102] pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace has status "Ready":"False"
I0816 22:24:53.612575 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0816 22:24:53.632520 10732 api_server.go:70] duration metric: took 7.033030474s to wait for apiserver process to appear ...
I0816 22:24:53.632561 10732 api_server.go:86] waiting for apiserver healthz status ...
I0816 22:24:53.632570 10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
I0816 22:24:53.633109 10732 api_server.go:255] stopped: https://192.168.50.226:8443/healthz: Get "https://192.168.50.226:8443/healthz": dial tcp 192.168.50.226:8443: connect: connection refused
I0816 22:24:54.133848 10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
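The two entries above show the api_server.go probe loop: each attempt hits /healthz, and on "connection refused" it sleeps roughly 500ms before retrying. A standalone sketch of the same cadence (endpoint and interval taken from the log; everything else is illustrative, not minikube's actual implementation):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz probes an apiserver /healthz endpoint until it returns 200 OK
// or the deadline expires. Certificate verification is skipped only for
// brevity; a real client should trust the cluster CA instead.
func pollHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
		}
		time.Sleep(interval) // matches the ~500ms retry cadence in the log
	}
	return fmt.Errorf("healthz at %s not healthy within %s", url, timeout)
}

func main() {
	err := pollHealthz("https://192.168.50.226:8443/healthz", 500*time.Millisecond, time.Minute)
	if err != nil {
		fmt.Println(err)
	}
}
```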
I0816 22:24:54.316518 10879 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-20210816222225-6986" ...
I0816 22:24:54.316550 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .Start
I0816 22:24:54.316716 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Ensuring networks are active...
I0816 22:24:54.318718 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Ensuring network default is active
I0816 22:24:54.319156 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Ensuring network mk-kubernetes-upgrade-20210816222225-6986 is active
I0816 22:24:54.319641 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Getting domain xml...
I0816 22:24:54.321602 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Creating domain...
I0816 22:24:54.783576 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Waiting to get IP...
I0816 22:24:54.784705 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:24:54.785273 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has current primary IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:24:54.785327 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Found IP for machine: 192.168.116.91
I0816 22:24:54.785348 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Reserving static IP address...
I0816 22:24:54.785810 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "kubernetes-upgrade-20210816222225-6986", mac: "52:54:00:92:67:21", ip: "192.168.116.91"} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:23:40 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
I0816 22:24:54.785842 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Reserved static IP address: 192.168.116.91
I0816 22:24:54.785867 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | skip adding static IP to network mk-kubernetes-upgrade-20210816222225-6986 - found existing host DHCP lease matching {name: "kubernetes-upgrade-20210816222225-6986", mac: "52:54:00:92:67:21", ip: "192.168.116.91"}
I0816 22:24:54.785897 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | Getting to WaitForSSH function...
I0816 22:24:54.785911 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Waiting for SSH to be available...
I0816 22:24:54.791673 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:24:54.792070 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:23:40 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
I0816 22:24:54.792097 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:24:54.792320 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | Using SSH client type: external
I0816 22:24:54.792359 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa (-rw-------)
I0816 22:24:54.792401 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.116.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa -p 22] /usr/bin/ssh <nil>}
I0816 22:24:54.792424 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | About to run SSH command:
I0816 22:24:54.792441 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | exit 0
I0816 22:24:55.186584 9171 pod_ready.go:102] pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace has status "Ready":"False"
I0816 22:24:57.682612 9171 pod_ready.go:102] pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace has status "Ready":"False"
I0816 22:24:59.683949 9171 pod_ready.go:102] pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace has status "Ready":"False"
I0816 22:24:59.090396 10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0816 22:24:59.090431 10732 api_server.go:101] status: https://192.168.50.226:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0816 22:24:59.133677 10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
I0816 22:24:59.161347 10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0816 22:24:59.161378 10732 api_server.go:101] status: https://192.168.50.226:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0816 22:24:59.633911 10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
I0816 22:24:59.639524 10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0816 22:24:59.639548 10732 api_server.go:101] status: https://192.168.50.226:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
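The [+]/[-] lines are the apiserver's verbose healthz report: each check and poststarthook is listed with its state, and the probe keeps returning 500 until every entry flips to [+] (above, rbac/bootstrap-roles is still pending). A hedged sketch of reducing such a report to just the failing checks:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// failingChecks extracts the names of checks reported as [-] (failed)
// from a verbose /healthz body like the ones in this log.
func failingChecks(body string) []string {
	var failed []string
	sc := bufio.NewScanner(strings.NewReader(body))
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "[-]") {
			// e.g. "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld"
			name := strings.TrimPrefix(line, "[-]")
			failed = append(failed, strings.SplitN(name, " ", 2)[0])
		}
	}
	return failed
}

func main() {
	body := "[+]ping ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\nhealthz check failed"
	fmt.Println(failingChecks(body)) // [poststarthook/rbac/bootstrap-roles]
}
```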
I0816 22:25:00.133775 10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
I0816 22:25:00.151749 10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0816 22:25:00.151784 10732 api_server.go:101] status: https://192.168.50.226:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0816 22:25:00.633968 10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
I0816 22:25:00.646578 10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 200:
ok
I0816 22:25:00.661937 10732 api_server.go:139] control plane version: v1.21.3
I0816 22:25:00.661961 10732 api_server.go:129] duration metric: took 7.029396002s to wait for apiserver health ...
I0816 22:25:00.661972 10732 cni.go:93] Creating CNI manager for ""
I0816 22:25:00.661979 10732 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
I0816 22:25:01.185512 9171 pod_ready.go:92] pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:01.185545 9171 pod_ready.go:81] duration metric: took 23.534022707s waiting for pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace to be "Ready" ...
I0816 22:25:01.185559 9171 pod_ready.go:78] waiting up to 6m0s for pod "etcd-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:01.215463 9171 pod_ready.go:92] pod "etcd-offline-containerd-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:01.215489 9171 pod_ready.go:81] duration metric: took 29.921986ms waiting for pod "etcd-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:01.215503 9171 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:01.230267 9171 pod_ready.go:92] pod "kube-apiserver-offline-containerd-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:01.230289 9171 pod_ready.go:81] duration metric: took 14.776227ms waiting for pod "kube-apiserver-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:01.230302 9171 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:01.241691 9171 pod_ready.go:92] pod "kube-controller-manager-offline-containerd-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:01.241717 9171 pod_ready.go:81] duration metric: took 11.405045ms waiting for pod "kube-controller-manager-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:01.241733 9171 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dhhrk" in "kube-system" namespace to be "Ready" ...
I0816 22:25:01.251986 9171 pod_ready.go:92] pod "kube-proxy-dhhrk" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:01.252017 9171 pod_ready.go:81] duration metric: took 10.275945ms waiting for pod "kube-proxy-dhhrk" in "kube-system" namespace to be "Ready" ...
I0816 22:25:01.252030 9171 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:01.580001 9171 pod_ready.go:92] pod "kube-scheduler-offline-containerd-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:01.580033 9171 pod_ready.go:81] duration metric: took 327.992243ms waiting for pod "kube-scheduler-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:01.580046 9171 pod_ready.go:38] duration metric: took 36.483444375s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
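The pod_ready entries above all follow one pattern: fetch each system-critical pod, inspect its Ready condition, and retry until it is True or the per-pod budget (6m0s here) runs out. A rough client-go equivalent of that loop; the kubeconfig path is a placeholder and this is a sketch of the pattern, not minikube's pod_ready.go:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a kube-system pod until its Ready condition is True.
func waitPodReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // status "Ready":"True", as in the log
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %q not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "coredns-558bd4d5db-jrjhw", 6*time.Minute))
}
```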
I0816 22:25:01.580071 9171 api_server.go:50] waiting for apiserver process to appear ...
I0816 22:25:01.580124 9171 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0816 22:25:01.597074 9171 api_server.go:70] duration metric: took 36.950719971s to wait for apiserver process to appear ...
I0816 22:25:01.597104 9171 api_server.go:86] waiting for apiserver healthz status ...
I0816 22:25:01.597117 9171 api_server.go:239] Checking apiserver healthz at https://192.168.105.22:8443/healthz ...
I0816 22:25:01.604325 9171 api_server.go:265] https://192.168.105.22:8443/healthz returned 200:
ok
I0816 22:25:01.606279 9171 api_server.go:139] control plane version: v1.21.3
I0816 22:25:01.606301 9171 api_server.go:129] duration metric: took 9.189625ms to wait for apiserver health ...
I0816 22:25:01.606312 9171 system_pods.go:43] waiting for kube-system pods to appear ...
I0816 22:25:01.788694 9171 system_pods.go:59] 7 kube-system pods found
I0816 22:25:01.788767 9171 system_pods.go:61] "coredns-558bd4d5db-jrjhw" [acdb9f4c-484e-4e02-97c3-368ce130507e] Running
I0816 22:25:01.788794 9171 system_pods.go:61] "etcd-offline-containerd-20210816222224-6986" [5cab4619-a033-47c0-9009-225ece0f2892] Running
I0816 22:25:01.788801 9171 system_pods.go:61] "kube-apiserver-offline-containerd-20210816222224-6986" [ea1abce8-a6d2-4e57-81c9-97bdd5eefea4] Running
I0816 22:25:01.788808 9171 system_pods.go:61] "kube-controller-manager-offline-containerd-20210816222224-6986" [9e75aa0c-4fd9-4812-9163-c6c1a26c9f2e] Running
I0816 22:25:01.788813 9171 system_pods.go:61] "kube-proxy-dhhrk" [a48ab7f9-7dfc-47de-8aca-c172bea7ff31] Running
I0816 22:25:01.788819 9171 system_pods.go:61] "kube-scheduler-offline-containerd-20210816222224-6986" [3dd47537-37cc-49f2-a469-8ef39825ba4a] Running
I0816 22:25:01.788827 9171 system_pods.go:61] "storage-provisioner" [e6290b9f-d87d-488d-8f9e-7cbbc59d9585] Running
I0816 22:25:01.788835 9171 system_pods.go:74] duration metric: took 182.517591ms to wait for pod list to return data ...
I0816 22:25:01.788850 9171 default_sa.go:34] waiting for default service account to be created ...
I0816 22:25:01.981356 9171 default_sa.go:45] found service account: "default"
I0816 22:25:01.981387 9171 default_sa.go:55] duration metric: took 192.530827ms for default service account to be created ...
I0816 22:25:01.981399 9171 system_pods.go:116] waiting for k8s-apps to be running ...
I0816 22:25:02.190487 9171 system_pods.go:86] 7 kube-system pods found
I0816 22:25:02.190528 9171 system_pods.go:89] "coredns-558bd4d5db-jrjhw" [acdb9f4c-484e-4e02-97c3-368ce130507e] Running
I0816 22:25:02.190538 9171 system_pods.go:89] "etcd-offline-containerd-20210816222224-6986" [5cab4619-a033-47c0-9009-225ece0f2892] Running
I0816 22:25:02.190546 9171 system_pods.go:89] "kube-apiserver-offline-containerd-20210816222224-6986" [ea1abce8-a6d2-4e57-81c9-97bdd5eefea4] Running
I0816 22:25:02.190554 9171 system_pods.go:89] "kube-controller-manager-offline-containerd-20210816222224-6986" [9e75aa0c-4fd9-4812-9163-c6c1a26c9f2e] Running
I0816 22:25:02.190560 9171 system_pods.go:89] "kube-proxy-dhhrk" [a48ab7f9-7dfc-47de-8aca-c172bea7ff31] Running
I0816 22:25:02.190567 9171 system_pods.go:89] "kube-scheduler-offline-containerd-20210816222224-6986" [3dd47537-37cc-49f2-a469-8ef39825ba4a] Running
I0816 22:25:02.190573 9171 system_pods.go:89] "storage-provisioner" [e6290b9f-d87d-488d-8f9e-7cbbc59d9585] Running
I0816 22:25:02.190582 9171 system_pods.go:126] duration metric: took 209.176198ms to wait for k8s-apps to be running ...
I0816 22:25:02.190596 9171 system_svc.go:44] waiting for kubelet service to be running ....
I0816 22:25:02.190648 9171 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0816 22:25:02.207959 9171 system_svc.go:56] duration metric: took 17.354686ms WaitForService to wait for kubelet.
I0816 22:25:02.207991 9171 kubeadm.go:547] duration metric: took 37.56164237s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0816 22:25:02.208036 9171 node_conditions.go:102] verifying NodePressure condition ...
I0816 22:25:02.385401 9171 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0816 22:25:02.385432 9171 node_conditions.go:123] node cpu capacity is 2
I0816 22:25:02.385444 9171 node_conditions.go:105] duration metric: took 177.399541ms to run NodePressure ...
I0816 22:25:02.385455 9171 start.go:231] waiting for startup goroutines ...
I0816 22:25:02.438114 9171 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
I0816 22:25:02.440691 9171 out.go:177] * Done! kubectl is now configured to use "offline-containerd-20210816222224-6986" cluster and "default" namespace by default
I0816 22:25:00.663954 10732 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0816 22:25:00.664005 10732 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
I0816 22:25:00.674379 10732 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
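The 457-byte file pushed to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced two lines up. Its exact contents are not shown in the log; the JSON below is a representative bridge conflist written from Go, purely to illustrate the shape of what gets installed (all field values are assumptions, not a copy of minikube's generated file):

```go
package main

import "os"

// A representative bridge CNI conflist. Values are illustrative only.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Equivalent of the scp step: place the config where the kubelet's
	// CNI plugin discovery will find it.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```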
I0816 22:25:00.699896 10732 system_pods.go:43] waiting for kube-system pods to appear ...
I0816 22:25:00.718704 10732 system_pods.go:59] 6 kube-system pods found
I0816 22:25:00.718763 10732 system_pods.go:61] "coredns-558bd4d5db-gkxhz" [5aa76749-775e-423d-bbf9-680a20a27051] Running
I0816 22:25:00.718780 10732 system_pods.go:61] "etcd-pause-20210816222224-6986" [f621b99e-0604-4bed-8c4e-4f5741e52f7b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0816 22:25:00.718802 10732 system_pods.go:61] "kube-apiserver-pause-20210816222224-6986" [b1c46709-4b0b-4c9c-a701-d595a58214ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0816 22:25:00.718811 10732 system_pods.go:61] "kube-controller-manager-pause-20210816222224-6986" [777c035e-5f34-469a-afb5-4f8ef90ccbfb] Running
I0816 22:25:00.718819 10732 system_pods.go:61] "kube-proxy-7l59t" [3c0e0899-31c1-477a-a6d4-2844091deea2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0816 22:25:00.718830 10732 system_pods.go:61] "kube-scheduler-pause-20210816222224-6986" [6b32acf9-8108-45a6-901e-70cd125190f8] Running
I0816 22:25:00.718838 10732 system_pods.go:74] duration metric: took 18.921493ms to wait for pod list to return data ...
I0816 22:25:00.718847 10732 node_conditions.go:102] verifying NodePressure condition ...
I0816 22:25:00.723789 10732 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0816 22:25:00.723820 10732 node_conditions.go:123] node cpu capacity is 2
I0816 22:25:00.723836 10732 node_conditions.go:105] duration metric: took 4.978152ms to run NodePressure ...
I0816 22:25:00.723854 10732 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0816 22:25:01.396623 10732 kubeadm.go:731] waiting for restarted kubelet to initialise ...
I0816 22:25:01.403109 10732 kubeadm.go:746] kubelet initialised
I0816 22:25:01.403139 10732 kubeadm.go:747] duration metric: took 6.492031ms waiting for restarted kubelet to initialise ...
I0816 22:25:01.403151 10732 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0816 22:25:01.409386 10732 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace to be "Ready" ...
I0816 22:25:03.432924 10732 pod_ready.go:102] pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace has status "Ready":"False"
I0816 22:25:05.435685 10732 pod_ready.go:102] pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace has status "Ready":"False"
I0816 22:25:05.951433 10732 pod_ready.go:92] pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:05.951457 10732 pod_ready.go:81] duration metric: took 4.542029801s waiting for pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace to be "Ready" ...
I0816 22:25:05.951470 10732 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:06.969870 10732 pod_ready.go:92] pod "etcd-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:06.969903 10732 pod_ready.go:81] duration metric: took 1.018424787s waiting for pod "etcd-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:06.969918 10732 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:06.978963 10732 pod_ready.go:92] pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:06.978984 10732 pod_ready.go:81] duration metric: took 9.058114ms waiting for pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:06.978997 10732 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:07.986911 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | SSH cmd err, output: <nil>:
I0816 22:25:07.987289 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetConfigRaw
I0816 22:25:07.988117 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetIP
I0816 22:25:07.993471 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:07.993933 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
I0816 22:25:07.993970 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:07.994335 10879 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/config.json ...
I0816 22:25:07.994547 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
I0816 22:25:07.994761 10879 machine.go:88] provisioning docker machine ...
I0816 22:25:07.994788 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
I0816 22:25:07.994976 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetMachineName
I0816 22:25:07.995114 10879 buildroot.go:166] provisioning hostname "kubernetes-upgrade-20210816222225-6986"
I0816 22:25:07.995139 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetMachineName
I0816 22:25:07.995291 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
I0816 22:25:08.000173 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.000497 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
I0816 22:25:08.000524 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.000680 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
I0816 22:25:08.000825 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
I0816 22:25:08.000965 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
I0816 22:25:08.001081 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
I0816 22:25:08.001235 10879 main.go:130] libmachine: Using SSH client type: native
I0816 22:25:08.001401 10879 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil> [] 0s} 192.168.116.91 22 <nil> <nil>}
I0816 22:25:08.001421 10879 main.go:130] libmachine: About to run SSH command:
sudo hostname kubernetes-upgrade-20210816222225-6986 && echo "kubernetes-upgrade-20210816222225-6986" | sudo tee /etc/hostname
I0816 22:25:08.156978 10879 main.go:130] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.157018 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
I0816 22:25:08.162417 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.162702 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
I0816 22:25:08.162735 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.162864 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
I0816 22:25:08.163064 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
I0816 22:25:08.163277 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
I0816 22:25:08.163406 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
I0816 22:25:08.163558 10879 main.go:130] libmachine: Using SSH client type: native
I0816 22:25:08.163733 10879 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil> [] 0s} 192.168.116.91 22 <nil> <nil>}
I0816 22:25:08.163761 10879 main.go:130] libmachine: About to run SSH command:
if ! grep -xq '.*\skubernetes-upgrade-20210816222225-6986' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20210816222225-6986/g' /etc/hosts;
	else
		echo '127.0.1.1 kubernetes-upgrade-20210816222225-6986' | sudo tee -a /etc/hosts;
	fi
fi
I0816 22:25:08.307005 10879 main.go:130] libmachine: SSH cmd err, output: <nil>:
I0816 22:25:08.307035 10879 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
I0816 22:25:08.307053 10879 buildroot.go:174] setting up certificates
I0816 22:25:08.307064 10879 provision.go:83] configureAuth start
I0816 22:25:08.307075 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetMachineName
I0816 22:25:08.307332 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetIP
I0816 22:25:08.313331 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.313697 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
I0816 22:25:08.313729 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.313896 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
I0816 22:25:08.318531 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.318844 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
I0816 22:25:08.318878 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.318990 10879 provision.go:138] copyHostCerts
I0816 22:25:08.319059 10879 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
I0816 22:25:08.319073 10879 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
I0816 22:25:08.319128 10879 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
I0816 22:25:08.319254 10879 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
I0816 22:25:08.319268 10879 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
I0816 22:25:08.319294 10879 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
I0816 22:25:08.319359 10879 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
I0816 22:25:08.319368 10879 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
I0816 22:25:08.319397 10879 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1675 bytes)
I0816 22:25:08.319465 10879 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20210816222225-6986 san=[192.168.116.91 192.168.116.91 localhost 127.0.0.1 minikube kubernetes-upgrade-20210816222225-6986]
I0816 22:25:08.473458 10879 provision.go:172] copyRemoteCerts
I0816 22:25:08.473513 10879 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0816 22:25:08.473535 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
I0816 22:25:08.478720 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.479123 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
I0816 22:25:08.479157 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.479301 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
I0816 22:25:08.479517 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
I0816 22:25:08.479669 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
I0816 22:25:08.479802 10879 sshutil.go:53] new ssh client: &{IP:192.168.116.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa Username:docker}
I0816 22:25:08.575404 10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0816 22:25:08.593200 10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1285 bytes)
I0816 22:25:08.611874 10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0816 22:25:08.631651 10879 provision.go:86] duration metric: configureAuth took 324.57656ms
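configureAuth, timed above, regenerates a server certificate whose SANs cover every name the machine may be reached by (the san=[...] list in the provision.go:112 entry). A compact crypto/x509 sketch of issuing such a SAN certificate from an existing CA; the org is a placeholder and loading the CA pair from disk is elided:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate with the given CA, adding the
// IP and DNS SANs seen in the log (192.168.116.91, localhost, minikube, ...).
func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) (certPEM, keyPEM []byte, err error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade"}}, // placeholder org
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.116.91"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "kubernetes-upgrade-20210816222225-6986"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

func main() {
	// Reading ca.pem / ca-key.pem from the .minikube/certs paths in the log
	// is elided in this sketch.
}
```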
I0816 22:25:08.631679 10879 buildroot.go:189] setting minikube options for container-runtime
I0816 22:25:08.631847 10879 config.go:177] Loaded profile config "kubernetes-upgrade-20210816222225-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
I0816 22:25:08.631862 10879 machine.go:91] provisioned docker machine in 637.081285ms
I0816 22:25:08.631877 10879 start.go:267] post-start starting for "kubernetes-upgrade-20210816222225-6986" (driver="kvm2")
I0816 22:25:08.631885 10879 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0816 22:25:08.631905 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
I0816 22:25:08.632222 10879 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0816 22:25:08.632262 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
I0816 22:25:08.638223 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.638599 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
I0816 22:25:08.638628 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.638804 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
I0816 22:25:08.639025 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
I0816 22:25:08.639186 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
I0816 22:25:08.639324 10879 sshutil.go:53] new ssh client: &{IP:192.168.116.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa Username:docker}
I0816 22:25:08.731490 10879 ssh_runner.go:149] Run: cat /etc/os-release
I0816 22:25:08.736384 10879 info.go:137] Remote host: Buildroot 2020.02.12
I0816 22:25:08.736415 10879 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
I0816 22:25:08.736479 10879 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
I0816 22:25:08.736640 10879 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem -> 69862.pem in /etc/ssl/certs
I0816 22:25:08.736796 10879 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
I0816 22:25:08.744563 10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /etc/ssl/certs/69862.pem (1708 bytes)
I0816 22:25:08.762219 10879 start.go:270] post-start completed in 130.327769ms
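
[editor's annotation] The filesync scan above maps everything under .minikube/files onto the guest's filesystem root, which is how files/etc/ssl/certs/69862.pem lands in /etc/ssl/certs. A rough sketch of that path mapping; the local root is an assumed layout, per the log:

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
        "strings"
    )

    func main() {
        // Assumed local root; in this run it sits under MINIKUBE_HOME.
        root := ".minikube/files"
        filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            // e.g. .minikube/files/etc/ssl/certs/69862.pem -> /etc/ssl/certs/69862.pem
            dest := "/" + strings.TrimPrefix(p, root+"/")
            fmt.Printf("local asset: %s -> %s\n", p, dest)
            return nil
        })
    }
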
I0816 22:25:08.762269 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
I0816 22:25:08.762532 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
I0816 22:25:08.768066 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.768447 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
I0816 22:25:08.768479 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.768580 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
I0816 22:25:08.768764 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
I0816 22:25:08.768937 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
I0816 22:25:08.769097 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
I0816 22:25:08.769278 10879 main.go:130] libmachine: Using SSH client type: native
I0816 22:25:08.769412 10879 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil> [] 0s} 192.168.116.91 22 <nil> <nil>}
I0816 22:25:08.769423 10879 main.go:130] libmachine: About to run SSH command:
date +%s.%N
I0816 22:25:08.908369 10879 main.go:130] libmachine: SSH cmd err, output: <nil>: 1629152708.857933809
I0816 22:25:08.908397 10879 fix.go:212] guest clock: 1629152708.857933809
I0816 22:25:08.908407 10879 fix.go:225] Guest: 2021-08-16 22:25:08.857933809 +0000 UTC Remote: 2021-08-16 22:25:08.762514681 +0000 UTC m=+14.743694760 (delta=95.419128ms)
I0816 22:25:08.908465 10879 fix.go:196] guest clock delta is within tolerance: 95.419128ms
I0816 22:25:08.908473 10879 fix.go:57] fixHost completed within 14.610364111s
I0816 22:25:08.908483 10879 start.go:80] releasing machines lock for "kubernetes-upgrade-20210816222225-6986", held for 14.610387547s
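
[editor's annotation] The guest-clock check above runs `date +%s.%N` on the VM and diffs it against the host clock, skipping a resync when the delta falls inside a tolerance. This sketch reproduces the 95.419128ms delta from this log using the two timestamps fix.go printed; the 2s tolerance is an assumed threshold, not necessarily minikube's:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        // Guest output of `date +%s.%N`, taken from the log above.
        guestOut := "1629152708.857933809"
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, _ := strconv.ParseInt(parts[0], 10, 64)
        nsec, _ := strconv.ParseInt(parts[1], 10, 64)
        guest := time.Unix(sec, nsec)

        // Host-side timestamp from the same log line ("Remote: ...").
        host := time.Date(2021, 8, 16, 22, 25, 8, 762514681, time.UTC)

        delta := host.Sub(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // assumed value, for illustration only
        if delta <= tolerance {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock skewed by %v; would resync\n", delta)
        }
    }
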
I0816 22:25:08.908527 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
I0816 22:25:08.908801 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetIP
I0816 22:25:08.914888 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.915258 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
I0816 22:25:08.915290 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.915507 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
I0816 22:25:08.915732 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
I0816 22:25:08.916309 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
I0816 22:25:08.916592 10879 ssh_runner.go:149] Run: systemctl --version
I0816 22:25:08.916617 10879 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0816 22:25:08.916626 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
I0816 22:25:08.916658 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
I0816 22:25:08.923331 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.923688 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
I0816 22:25:08.923714 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.923808 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
I0816 22:25:08.923961 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
I0816 22:25:08.924114 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
I0816 22:25:08.924243 10879 sshutil.go:53] new ssh client: &{IP:192.168.116.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa Username:docker}
I0816 22:25:08.924528 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.924867 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
I0816 22:25:08.924898 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
I0816 22:25:08.925049 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
I0816 22:25:08.925209 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
I0816 22:25:08.925407 10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
I0816 22:25:08.925534 10879 sshutil.go:53] new ssh client: &{IP:192.168.116.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa Username:docker}
I0816 22:25:09.022865 10879 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
I0816 22:25:09.023038 10879 ssh_runner.go:149] Run: sudo crictl images --output json
I0816 22:25:09.000201 10732 pod_ready.go:102] pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"False"
I0816 22:25:10.499577 10732 pod_ready.go:92] pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:10.499613 10732 pod_ready.go:81] duration metric: took 3.520603411s waiting for pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:10.499631 10732 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7l59t" in "kube-system" namespace to be "Ready" ...
I0816 22:25:10.508715 10732 pod_ready.go:92] pod "kube-proxy-7l59t" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:10.508738 10732 pod_ready.go:81] duration metric: took 9.098529ms waiting for pod "kube-proxy-7l59t" in "kube-system" namespace to be "Ready" ...
I0816 22:25:10.508749 10732 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:10.514516 10732 pod_ready.go:92] pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:10.514536 10732 pod_ready.go:81] duration metric: took 5.779042ms waiting for pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:10.514546 10732 pod_ready.go:38] duration metric: took 9.111379533s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
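
[editor's annotation] The pod_ready.go waits above poll each system-critical pod until its PodReady condition reports True, bounded by a timeout. A client-go sketch of that loop; waitPodReady is a hypothetical helper written for this note, not minikube's function:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls until the pod's PodReady condition is True
    // or the timeout expires.
    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat errors as transient and keep polling
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        fmt.Println("wire waitPodReady to a kubernetes.Interface built from your kubeconfig")
    }
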
I0816 22:25:10.514567 10732 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0816 22:25:10.530219 10732 ops.go:34] apiserver oom_adj: -16
I0816 22:25:10.530242 10732 kubeadm.go:604] restartCluster took 31.19958524s
I0816 22:25:10.530251 10732 kubeadm.go:392] StartCluster complete in 31.557512009s
I0816 22:25:10.530271 10732 settings.go:142] acquiring lock: {Name:mk1500b3775cb0c129f78af92eabf0aeaaa54b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0816 22:25:10.530404 10732 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
I0816 22:25:10.531238 10732 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0816 22:25:10.532000 10732 kapi.go:59] client config for pause-20210816222224-6986: &rest.Config{Host:"https://192.168.50.226:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
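
[editor's annotation] The rest.Config dump above is what client-go derives from the kubeconfig that settings.go just rewrote. A sketch that builds an equivalent config and clientset from that same file:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path as updated by settings.go above.
        kubeconfig := "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig"
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println("API host:", cfg.Host, "client ready:", client != nil)
    }
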
I0816 22:25:10.647656 10732 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210816222224-6986" rescaled to 1
I0816 22:25:10.647728 10732 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.50.226 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
I0816 22:25:10.647757 10732 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0816 22:25:10.647794 10732 addons.go:342] enableAddons start: toEnable=map[], additional=[]
I0816 22:25:10.649327 10732 out.go:177] * Verifying Kubernetes components...
I0816 22:25:10.649398 10732 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0816 22:25:10.647852 10732 addons.go:59] Setting storage-provisioner=true in profile "pause-20210816222224-6986"
I0816 22:25:10.647862 10732 addons.go:59] Setting default-storageclass=true in profile "pause-20210816222224-6986"
I0816 22:25:10.647991 10732 config.go:177] Loaded profile config "pause-20210816222224-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
I0816 22:25:10.649480 10732 addons.go:135] Setting addon storage-provisioner=true in "pause-20210816222224-6986"
W0816 22:25:10.649500 10732 addons.go:147] addon storage-provisioner should already be in state true
I0816 22:25:10.649516 10732 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210816222224-6986"
I0816 22:25:10.649532 10732 host.go:66] Checking if "pause-20210816222224-6986" exists ...
I0816 22:25:10.650748 10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0816 22:25:10.650827 10732 main.go:130] libmachine: Launching plugin server for driver kvm2
I0816 22:25:10.653189 10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0816 22:25:10.653249 10732 main.go:130] libmachine: Launching plugin server for driver kvm2
I0816 22:25:10.664888 10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45461
I0816 22:25:10.665365 10732 main.go:130] libmachine: () Calling .GetVersion
I0816 22:25:10.665893 10732 main.go:130] libmachine: Using API Version 1
I0816 22:25:10.665915 10732 main.go:130] libmachine: () Calling .SetConfigRaw
I0816 22:25:10.666315 10732 main.go:130] libmachine: () Calling .GetMachineName
I0816 22:25:10.666493 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetState
I0816 22:25:10.667827 10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34733
I0816 22:25:10.668293 10732 main.go:130] libmachine: () Calling .GetVersion
I0816 22:25:10.668762 10732 main.go:130] libmachine: Using API Version 1
I0816 22:25:10.668782 10732 main.go:130] libmachine: () Calling .SetConfigRaw
I0816 22:25:10.669202 10732 main.go:130] libmachine: () Calling .GetMachineName
I0816 22:25:10.669761 10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0816 22:25:10.669802 10732 main.go:130] libmachine: Launching plugin server for driver kvm2
I0816 22:25:10.670861 10732 kapi.go:59] client config for pause-20210816222224-6986: &rest.Config{Host:"https://192.168.50.226:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0816 22:25:10.676486 10732 addons.go:135] Setting addon default-storageclass=true in "pause-20210816222224-6986"
W0816 22:25:10.676510 10732 addons.go:147] addon default-storageclass should already be in state true
I0816 22:25:10.676539 10732 host.go:66] Checking if "pause-20210816222224-6986" exists ...
I0816 22:25:10.676985 10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0816 22:25:10.677031 10732 main.go:130] libmachine: Launching plugin server for driver kvm2
I0816 22:25:10.682317 10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39313
I0816 22:25:10.682805 10732 main.go:130] libmachine: () Calling .GetVersion
I0816 22:25:10.683360 10732 main.go:130] libmachine: Using API Version 1
I0816 22:25:10.683382 10732 main.go:130] libmachine: () Calling .SetConfigRaw
I0816 22:25:10.683737 10732 main.go:130] libmachine: () Calling .GetMachineName
I0816 22:25:10.683924 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetState
I0816 22:25:10.687519 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
I0816 22:25:10.693597 10732 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0816 22:25:10.693708 10732 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0816 22:25:10.693722 10732 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0816 22:25:10.693742 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
I0816 22:25:10.692712 10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45043
I0816 22:25:10.694563 10732 main.go:130] libmachine: () Calling .GetVersion
I0816 22:25:10.695082 10732 main.go:130] libmachine: Using API Version 1
I0816 22:25:10.695103 10732 main.go:130] libmachine: () Calling .SetConfigRaw
I0816 22:25:10.695455 10732 main.go:130] libmachine: () Calling .GetMachineName
I0816 22:25:10.696063 10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0816 22:25:10.696115 10732 main.go:130] libmachine: Launching plugin server for driver kvm2
I0816 22:25:10.700367 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:25:10.700792 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
I0816 22:25:10.700813 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:25:10.701111 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
I0816 22:25:10.701350 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
I0816 22:25:10.701537 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
I0816 22:25:10.701730 10732 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210816222224-6986/id_rsa Username:docker}
I0816 22:25:10.709887 10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33339
I0816 22:25:10.710304 10732 main.go:130] libmachine: () Calling .GetVersion
I0816 22:25:10.710912 10732 main.go:130] libmachine: Using API Version 1
I0816 22:25:10.710938 10732 main.go:130] libmachine: () Calling .SetConfigRaw
I0816 22:25:10.711336 10732 main.go:130] libmachine: () Calling .GetMachineName
I0816 22:25:10.711547 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetState
I0816 22:25:10.714430 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
I0816 22:25:10.714683 10732 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
I0816 22:25:10.714702 10732 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0816 22:25:10.714720 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
I0816 22:25:10.720808 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:25:10.721319 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
I0816 22:25:10.721342 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
I0816 22:25:10.721485 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
I0816 22:25:10.721643 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
I0816 22:25:10.721769 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
I0816 22:25:10.721919 10732 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210816222224-6986/id_rsa Username:docker}
I0816 22:25:10.832212 10732 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0816 22:25:10.862755 10732 node_ready.go:35] waiting up to 6m0s for node "pause-20210816222224-6986" to be "Ready" ...
I0816 22:25:10.863120 10732 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0816 22:25:10.867110 10732 node_ready.go:49] node "pause-20210816222224-6986" has status "Ready":"True"
I0816 22:25:10.867130 10732 node_ready.go:38] duration metric: took 4.344058ms waiting for node "pause-20210816222224-6986" to be "Ready" ...
I0816 22:25:10.867143 10732 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0816 22:25:10.883113 10732 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace to be "Ready" ...
I0816 22:25:10.892065 10732 pod_ready.go:92] pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:10.892084 10732 pod_ready.go:81] duration metric: took 8.944517ms waiting for pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace to be "Ready" ...
I0816 22:25:10.892096 10732 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:10.895462 10732 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0816 22:25:11.127716 10732 pod_ready.go:92] pod "etcd-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:11.127749 10732 pod_ready.go:81] duration metric: took 235.644563ms waiting for pod "etcd-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:11.127765 10732 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:11.536655 10732 pod_ready.go:92] pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:11.536676 10732 pod_ready.go:81] duration metric: took 408.901449ms waiting for pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:11.536690 10732 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:11.539596 10732 main.go:130] libmachine: Making call to close driver server
I0816 22:25:11.539618 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
I0816 22:25:11.539697 10732 main.go:130] libmachine: Making call to close driver server
I0816 22:25:11.539725 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
I0816 22:25:11.540009 10732 main.go:130] libmachine: Successfully made call to close driver server
I0816 22:25:11.540024 10732 main.go:130] libmachine: Making call to close connection to plugin binary
I0816 22:25:11.540041 10732 main.go:130] libmachine: Making call to close driver server
I0816 22:25:11.540041 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | Closing plugin on server side
I0816 22:25:11.540051 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
I0816 22:25:11.540067 10732 main.go:130] libmachine: Successfully made call to close driver server
I0816 22:25:11.540075 10732 main.go:130] libmachine: Making call to close connection to plugin binary
I0816 22:25:11.540083 10732 main.go:130] libmachine: Making call to close driver server
I0816 22:25:11.540092 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
I0816 22:25:11.540126 10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | Closing plugin on server side
I0816 22:25:11.540298 10732 main.go:130] libmachine: Successfully made call to close driver server
I0816 22:25:11.540310 10732 main.go:130] libmachine: Making call to close connection to plugin binary
I0816 22:25:11.540320 10732 main.go:130] libmachine: Making call to close driver server
I0816 22:25:11.540329 10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
I0816 22:25:11.540417 10732 main.go:130] libmachine: Successfully made call to close driver server
I0816 22:25:11.540429 10732 main.go:130] libmachine: Making call to close connection to plugin binary
I0816 22:25:11.540490 10732 main.go:130] libmachine: Successfully made call to close driver server
I0816 22:25:11.540502 10732 main.go:130] libmachine: Making call to close connection to plugin binary
I0816 22:25:11.542638 10732 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0816 22:25:11.542662 10732 addons.go:344] enableAddons completed in 894.875902ms
I0816 22:25:11.931820 10732 pod_ready.go:92] pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:11.931845 10732 pod_ready.go:81] duration metric: took 395.147421ms waiting for pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:11.931860 10732 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7l59t" in "kube-system" namespace to be "Ready" ...
I0816 22:25:12.329464 10732 pod_ready.go:92] pod "kube-proxy-7l59t" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:12.329493 10732 pod_ready.go:81] duration metric: took 397.623774ms waiting for pod "kube-proxy-7l59t" in "kube-system" namespace to be "Ready" ...
I0816 22:25:12.329507 10732 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:12.734335 10732 pod_ready.go:92] pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
I0816 22:25:12.734360 10732 pod_ready.go:81] duration metric: took 404.844565ms waiting for pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
I0816 22:25:12.734374 10732 pod_ready.go:38] duration metric: took 1.867218741s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0816 22:25:12.734394 10732 api_server.go:50] waiting for apiserver process to appear ...
I0816 22:25:12.734439 10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0816 22:25:12.754510 10732 api_server.go:70] duration metric: took 2.106745047s to wait for apiserver process to appear ...
I0816 22:25:12.754540 10732 api_server.go:86] waiting for apiserver healthz status ...
I0816 22:25:12.754553 10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
I0816 22:25:12.792067 10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 200:
ok
I0816 22:25:12.794542 10732 api_server.go:139] control plane version: v1.21.3
I0816 22:25:12.794565 10732 api_server.go:129] duration metric: took 40.01886ms to wait for apiserver health ...
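
[editor's annotation] The healthz wait above issues GET https://192.168.50.226:8443/healthz and treats a 200 response with body "ok" as healthy, exactly as logged. A sketch of that probe; it skips TLS verification purely to stay short, whereas the real check trusts the cluster CA from the kubeconfig:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // InsecureSkipVerify is a shortcut for this sketch only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.50.226:8443/healthz")
        if err != nil {
            fmt.Println("apiserver not healthy yet:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }
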
I0816 22:25:12.794577 10732 system_pods.go:43] waiting for kube-system pods to appear ...
I0816 22:25:12.941013 10732 system_pods.go:59] 7 kube-system pods found
I0816 22:25:12.941048 10732 system_pods.go:61] "coredns-558bd4d5db-gkxhz" [5aa76749-775e-423d-bbf9-680a20a27051] Running
I0816 22:25:12.941053 10732 system_pods.go:61] "etcd-pause-20210816222224-6986" [f621b99e-0604-4bed-8c4e-4f5741e52f7b] Running
I0816 22:25:12.941057 10732 system_pods.go:61] "kube-apiserver-pause-20210816222224-6986" [b1c46709-4b0b-4c9c-a701-d595a58214ba] Running
I0816 22:25:12.941102 10732 system_pods.go:61] "kube-controller-manager-pause-20210816222224-6986" [777c035e-5f34-469a-afb5-4f8ef90ccbfb] Running
I0816 22:25:12.941116 10732 system_pods.go:61] "kube-proxy-7l59t" [3c0e0899-31c1-477a-a6d4-2844091deea2] Running
I0816 22:25:12.941122 10732 system_pods.go:61] "kube-scheduler-pause-20210816222224-6986" [6b32acf9-8108-45a6-901e-70cd125190f8] Running
I0816 22:25:12.941136 10732 system_pods.go:61] "storage-provisioner" [4f138dc7-da0e-4775-b4de-b0f7d616b212] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0816 22:25:12.941158 10732 system_pods.go:74] duration metric: took 146.575596ms to wait for pod list to return data ...
I0816 22:25:12.941176 10732 default_sa.go:34] waiting for default service account to be created ...
I0816 22:25:13.132349 10732 default_sa.go:45] found service account: "default"
I0816 22:25:13.132381 10732 default_sa.go:55] duration metric: took 191.195172ms for default service account to be created ...
I0816 22:25:13.132394 10732 system_pods.go:116] waiting for k8s-apps to be running ...
I0816 22:25:13.340094 10732 system_pods.go:86] 7 kube-system pods found
I0816 22:25:13.340135 10732 system_pods.go:89] "coredns-558bd4d5db-gkxhz" [5aa76749-775e-423d-bbf9-680a20a27051] Running
I0816 22:25:13.340146 10732 system_pods.go:89] "etcd-pause-20210816222224-6986" [f621b99e-0604-4bed-8c4e-4f5741e52f7b] Running
I0816 22:25:13.340155 10732 system_pods.go:89] "kube-apiserver-pause-20210816222224-6986" [b1c46709-4b0b-4c9c-a701-d595a58214ba] Running
I0816 22:25:13.340163 10732 system_pods.go:89] "kube-controller-manager-pause-20210816222224-6986" [777c035e-5f34-469a-afb5-4f8ef90ccbfb] Running
I0816 22:25:13.340172 10732 system_pods.go:89] "kube-proxy-7l59t" [3c0e0899-31c1-477a-a6d4-2844091deea2] Running
I0816 22:25:13.340184 10732 system_pods.go:89] "kube-scheduler-pause-20210816222224-6986" [6b32acf9-8108-45a6-901e-70cd125190f8] Running
I0816 22:25:13.340196 10732 system_pods.go:89] "storage-provisioner" [4f138dc7-da0e-4775-b4de-b0f7d616b212] Running
I0816 22:25:13.340210 10732 system_pods.go:126] duration metric: took 207.809217ms to wait for k8s-apps to be running ...
I0816 22:25:13.340225 10732 system_svc.go:44] waiting for kubelet service to be running ....
I0816 22:25:13.340279 10732 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0816 22:25:13.358716 10732 system_svc.go:56] duration metric: took 18.47804ms WaitForService to wait for kubelet.
I0816 22:25:13.358752 10732 kubeadm.go:547] duration metric: took 2.710991068s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0816 22:25:13.358785 10732 node_conditions.go:102] verifying NodePressure condition ...
I0816 22:25:13.536797 10732 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0816 22:25:13.536830 10732 node_conditions.go:123] node cpu capacity is 2
I0816 22:25:13.536848 10732 node_conditions.go:105] duration metric: took 178.056493ms to run NodePressure ...
I0816 22:25:13.536863 10732 start.go:231] waiting for startup goroutines ...
I0816 22:25:13.602415 10732 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
I0816 22:25:13.604425 10732 out.go:177] * Done! kubectl is now configured to use "pause-20210816222224-6986" cluster and "default" namespace by default
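
[editor's annotation] The closing lines above note a kubectl/cluster minor-version skew of 1 (kubectl 1.20.5 against cluster 1.21.3). A sketch of that comparison; the idea that skew above 1 merits a warning is an assumption for illustration:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minor extracts the minor component of a "major.minor.patch" version.
    func minor(v string) int {
        parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
        m, _ := strconv.Atoi(parts[1])
        return m
    }

    func main() {
        kubectl, cluster := "1.20.5", "1.21.3"
        skew := minor(cluster) - minor(kubectl)
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
        if skew > 1 {
            fmt.Println("warning: kubectl is more than one minor version away from the cluster")
        }
    }
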
I0816 22:25:13.045168 10879 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.02209826s)
I0816 22:25:13.045290 10879 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0". assuming images are not preloaded.
I0816 22:25:13.045383 10879 ssh_runner.go:149] Run: which lz4
I0816 22:25:13.050542 10879 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
I0816 22:25:13.055627 10879 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/preloaded.tar.lz4': No such file or directory
I0816 22:25:13.055661 10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (945588089 bytes)
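
[editor's annotation] The sequence above is a cache fallback: stat the preload tarball on the guest, and only when that existence check fails (status 1 here) scp the ~945 MB archive up. A sketch of the same check-then-copy using the system ssh/scp binaries; the user comes from the ssh client lines above, while the key path and local filename are placeholder assumptions:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        host := "docker@192.168.116.91"             // user from the ssh client line above
        key := "/path/to/machines/<profile>/id_rsa" // assumption: per-machine key under .minikube
        remote := "/preloaded.tar.lz4"
        local := "preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4"

        // Existence check, mirroring: stat -c "%s %y" /preloaded.tar.lz4
        if err := exec.Command("ssh", "-i", key, host, `stat -c "%s %y" `+remote).Run(); err != nil {
            fmt.Println("preload missing on guest; uploading...")
            if err := exec.Command("scp", "-i", key, local, host+":"+remote).Run(); err != nil {
                fmt.Println("scp failed:", err)
            }
        } else {
            fmt.Println("preload already present; skipping upload")
        }
    }
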
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
f04c445038901 6e38f40d628db 5 seconds ago Running storage-provisioner 0 0e10d9862204b
e70dd80568a0a 296a6d5035e2d 16 seconds ago Running coredns 1 c649190b7c07d
2585772c8a261 adb2816ea823a 17 seconds ago Running kube-proxy 2 d73b4cafe25f0
53780b2759956 3d174f00aa39e 24 seconds ago Running kube-apiserver 2 fb9f201b2c2e1
76fef890edebe 6be0dc1302e30 24 seconds ago Running kube-scheduler 2 1718d2a0276ce
69a7fab4848c4 0369cf4303ffd 24 seconds ago Running etcd 2 3b9459ff3a0d8
825e79d62718c bc2bb319a7038 25 seconds ago Running kube-controller-manager 2 feab707eb735a
7626b842ef886 3d174f00aa39e 25 seconds ago Created kube-apiserver 1 fb9f201b2c2e1
9d9f34b35e099 adb2816ea823a 25 seconds ago Created kube-proxy 1 d73b4cafe25f0
97c4cc3614116 6be0dc1302e30 25 seconds ago Created kube-scheduler 1 1718d2a0276ce
3644e35e40a2f 0369cf4303ffd 25 seconds ago Created etcd 1 3b9459ff3a0d8
8c5f2c007cff4 bc2bb319a7038 29 seconds ago Created kube-controller-manager 1 feab707eb735a
28c7161cd49a4 296a6d5035e2d About a minute ago Exited coredns 0 05c2427240818
a8503bd796d5d adb2816ea823a About a minute ago Exited kube-proxy 0 a86c3b6ee3a70
124fa393359f7 0369cf4303ffd 2 minutes ago Exited etcd 0 94a493a65b593
8710cefecdbe5 6be0dc1302e30 2 minutes ago Exited kube-scheduler 0 982e66890a90d
38dc61b214a9c 3d174f00aa39e 2 minutes ago Exited kube-apiserver 0 630ed9d4644e9
*
* ==> containerd <==
* -- Logs begin at Mon 2021-08-16 22:22:35 UTC, end at Mon 2021-08-16 22:25:17 UTC. --
Aug 16 22:24:53 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:24:53.606374155Z" level=info msg="StartContainer for \"69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549\" returns successfully"
Aug 16 22:24:53 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:24:53.687984942Z" level=info msg="StartContainer for \"76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1\" returns successfully"
Aug 16 22:24:59 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:24:59.121993631Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.146522428Z" level=info msg="CreateContainer within sandbox \"d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:2,}"
Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.231610260Z" level=info msg="CreateContainer within sandbox \"d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8\" for &ContainerMetadata{Name:kube-proxy,Attempt:2,} returns container id \"2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2\""
Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.233198734Z" level=info msg="StartContainer for \"2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2\""
Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.443953465Z" level=info msg="StopPodSandbox for \"05c24272408181b9c89f41ac96a6fc411fd43bae5540d12b31e720843bc7e126\""
Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.444769081Z" level=info msg="Container to stop \"28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.461877463Z" level=info msg="StartContainer for \"2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2\" returns successfully"
Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.536079353Z" level=info msg="TearDown network for sandbox \"05c24272408181b9c89f41ac96a6fc411fd43bae5540d12b31e720843bc7e126\" successfully"
Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.536191167Z" level=info msg="StopPodSandbox for \"05c24272408181b9c89f41ac96a6fc411fd43bae5540d12b31e720843bc7e126\" returns successfully"
Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.536962082Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-558bd4d5db-gkxhz,Uid:5aa76749-775e-423d-bbf9-680a20a27051,Namespace:kube-system,Attempt:1,}"
Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.776744568Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb pid=5007
Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.290447333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-558bd4d5db-gkxhz,Uid:5aa76749-775e-423d-bbf9-680a20a27051,Namespace:kube-system,Attempt:1,} returns sandbox id \"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb\""
Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.300600113Z" level=info msg="CreateContainer within sandbox \"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.389478760Z" level=info msg="CreateContainer within sandbox \"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a\""
Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.397162604Z" level=info msg="StartContainer for \"e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a\""
Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.594046909Z" level=info msg="StartContainer for \"e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a\" returns successfully"
Aug 16 22:25:11 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:11.852957632Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:4f138dc7-da0e-4775-b4de-b0f7d616b212,Namespace:kube-system,Attempt:0,}"
Aug 16 22:25:11 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:11.903771908Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c pid=5174
Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.439549893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:4f138dc7-da0e-4775-b4de-b0f7d616b212,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c\""
Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.451930506Z" level=info msg="CreateContainer within sandbox \"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.521875733Z" level=info msg="CreateContainer within sandbox \"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\""
Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.523292924Z" level=info msg="StartContainer for \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\""
Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.851898064Z" level=info msg="StartContainer for \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\" returns successfully"
*
* ==> coredns [28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0] <==
* [INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.8.0
linux/amd64, go1.15.3, 054c9ae
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] Reloading
[INFO] plugin/health: Going into lameduck mode for 5s
I0816 22:24:19.170128 1 trace.go:205] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 22:23:49.168) (total time: 30001ms):
Trace[2019727887]: [30.001909435s] [30.001909435s] END
E0816 22:24:19.170279 1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0816 22:24:19.171047 1 trace.go:205] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 22:23:49.166) (total time: 30004ms):
Trace[939984059]: [30.004733433s] [30.004733433s] END
E0816 22:24:19.171149 1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0816 22:24:19.171258 1 trace.go:205] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 22:23:49.166) (total time: 30004ms):
Trace[911902081]: [30.004945736s] [30.004945736s] END
E0816 22:24:19.171265 1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
*
* ==> coredns [e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = 7ae91e86dd75dee9ae501cb58003198b
CoreDNS-1.8.0
linux/amd64, go1.15.3, 054c9ae
*
* ==> describe nodes <==
* Name: pause-20210816222224-6986
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-20210816222224-6986
kubernetes.io/os=linux
minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
minikube.k8s.io/name=pause-20210816222224-6986
minikube.k8s.io/updated_at=2021_08_16T22_23_26_0700
minikube.k8s.io/version=v1.22.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 16 Aug 2021 22:23:23 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-20210816222224-6986
AcquireTime: <unset>
RenewTime: Mon, 16 Aug 2021 22:25:09 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 16 Aug 2021 22:24:59 +0000 Mon, 16 Aug 2021 22:23:18 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 16 Aug 2021 22:24:59 +0000 Mon, 16 Aug 2021 22:23:18 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 16 Aug 2021 22:24:59 +0000 Mon, 16 Aug 2021 22:23:18 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 16 Aug 2021 22:24:59 +0000 Mon, 16 Aug 2021 22:23:42 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.50.226
Hostname: pause-20210816222224-6986
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2033044Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2033044Ki
pods: 110
System Info:
Machine ID: 940ad300f94c41e2a0b0cde81be11541
System UUID: 940ad300-f94c-41e2-a0b0-cde81be11541
Boot ID: ea001a4b-e783-4f93-b7d3-bb910eb45d3c
Kernel Version: 4.19.182
OS Image: Buildroot 2020.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.4.9
Kubelet Version: v1.21.3
Kube-Proxy Version: v1.21.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
  kube-system                 coredns-558bd4d5db-gkxhz                              100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     92s
  kube-system                 etcd-pause-20210816222224-6986                        100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         114s
  kube-system                 kube-apiserver-pause-20210816222224-6986              250m (12%)    0 (0%)      0 (0%)           0 (0%)         112s
  kube-system                 kube-controller-manager-pause-20210816222224-6986     200m (10%)    0 (0%)      0 (0%)           0 (0%)         106s
  kube-system                 kube-proxy-7l59t                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
  kube-system                 kube-scheduler-pause-20210816222224-6986              100m (5%)     0 (0%)      0 (0%)           0 (0%)         106s
  kube-system                 storage-provisioner                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
  cpu                750m (37%)   0 (0%)
  memory             170Mi (8%)   170Mi (8%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 2m6s (x6 over 2m7s) kubelet Node pause-20210816222224-6986 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m6s (x5 over 2m7s) kubelet Node pause-20210816222224-6986 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m6s (x5 over 2m7s) kubelet Node pause-20210816222224-6986 status is now: NodeHasSufficientPID
Normal Starting 106s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 106s kubelet Node pause-20210816222224-6986 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 106s kubelet Node pause-20210816222224-6986 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 106s kubelet Node pause-20210816222224-6986 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 106s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 96s kubelet Node pause-20210816222224-6986 status is now: NodeReady
Normal Starting 89s kube-proxy Starting kube-proxy.
Normal Starting 27s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 27s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 26s (x8 over 27s) kubelet Node pause-20210816222224-6986 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 26s (x8 over 27s) kubelet Node pause-20210816222224-6986 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 26s (x7 over 27s) kubelet Node pause-20210816222224-6986 status is now: NodeHasSufficientPID
Normal Starting 18s kube-proxy Starting kube-proxy.
*
* ==> dmesg <==
* [ +3.181431] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
[ +0.036573] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +0.985023] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1731 comm=systemd-network
[ +1.088197] vboxguest: loading out-of-tree module taints kernel.
[ +0.006251] vboxguest: PCI device not found, probably running on physical hardware.
[ +1.889854] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +16.286436] systemd-fstab-generator[2098]: Ignoring "noauto" for root device
[ +0.258185] systemd-fstab-generator[2128]: Ignoring "noauto" for root device
[ +0.135377] systemd-fstab-generator[2143]: Ignoring "noauto" for root device
[ +0.180446] systemd-fstab-generator[2173]: Ignoring "noauto" for root device
[Aug16 22:23] systemd-fstab-generator[2381]: Ignoring "noauto" for root device
[ +20.504547] systemd-fstab-generator[2808]: Ignoring "noauto" for root device
[ +20.717915] kauditd_printk_skb: 38 callbacks suppressed
[ +5.551219] kauditd_printk_skb: 104 callbacks suppressed
[Aug16 22:24] kauditd_printk_skb: 2 callbacks suppressed
[ +6.792051] systemd-fstab-generator[3754]: Ignoring "noauto" for root device
[ +0.176916] systemd-fstab-generator[3767]: Ignoring "noauto" for root device
[ +0.230657] systemd-fstab-generator[3792]: Ignoring "noauto" for root device
[ +4.083098] kauditd_printk_skb: 2 callbacks suppressed
[ +3.840195] NFSD: Unable to end grace period: -110
[ +4.324119] systemd-fstab-generator[4543]: Ignoring "noauto" for root device
[ +6.680726] kauditd_printk_skb: 29 callbacks suppressed
[Aug16 22:25] kauditd_printk_skb: 14 callbacks suppressed
[ +12.641213] kauditd_printk_skb: 23 callbacks suppressed
*
* ==> etcd [124fa393359f758ea47161b345d2cab4ce486d4473a4caad483449464d44315f] <==
* 2021-08-16 22:23:41.064197 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-pause-20210816222224-6986\" " with result "range_response_count:1 size:5052" took too long (6.421187445s) to execute
2021-08-16 22:23:41.065847 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:341" took too long (6.446897155s) to execute
2021-08-16 22:23:41.066285 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/node-controller\" " with result "range_response_count:1 size:242" took too long (5.09674902s) to execute
2021-08-16 22:23:41.068005 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (6.28196539s) to execute
2021-08-16 22:23:41.068259 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (763.710719ms) to execute
2021-08-16 22:23:41.880435 W | etcdserver: read-only range request "key:\"/registry/masterleases/192.168.50.226\" " with result "range_response_count:0 size:5" took too long (776.335267ms) to execute
2021-08-16 22:23:41.881080 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (597.366064ms) to execute
2021-08-16 22:23:41.882354 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:4569" took too long (763.841142ms) to execute
2021-08-16 22:23:41.883287 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (621.677263ms) to execute
2021-08-16 22:23:41.884722 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (481.499599ms) to execute
2021-08-16 22:23:41.885189 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-pause-20210816222224-6986\" " with result "range_response_count:1 size:5421" took too long (772.180278ms) to execute
2021-08-16 22:23:42.453217 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:1 size:212" took too long (290.061418ms) to execute
2021-08-16 22:23:42.455427 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:1 size:212" took too long (285.893643ms) to execute
2021-08-16 22:23:42.456943 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-pause-20210816222224-6986\" " with result "range_response_count:1 size:6314" took too long (153.946258ms) to execute
2021-08-16 22:23:42.458024 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (177.825431ms) to execute
2021-08-16 22:23:44.267832 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-16 22:23:54.092150 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (701.802797ms) to execute
2021-08-16 22:23:54.093518 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-ctgf5\" " with result "range_response_count:1 size:4473" took too long (1.090386256s) to execute
2021-08-16 22:23:54.267392 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-16 22:23:57.768234 W | etcdserver: request "header:<ID:4263355585347158035 > lease_revoke:<id:3b2a7b510fcb7e67>" with result "size:29" took too long (771.90226ms) to execute
2021-08-16 22:23:57.768903 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (374.444829ms) to execute
2021-08-16 22:23:57.769379 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-ctgf5\" " with result "range_response_count:1 size:4473" took too long (765.115046ms) to execute
2021-08-16 22:24:04.267548 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-16 22:24:14.267958 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-16 22:24:24.268321 I | etcdserver/api/etcdhttp: /health OK (status code 200)
*
* ==> etcd [3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423] <==
*
* ==> etcd [69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549] <==
* 2021-08-16 22:24:53.773065 W | auth: simple token is not cryptographically signed
2021-08-16 22:24:53.837118 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
raft2021/08/16 22:24:53 INFO: e840193bf29c3b2a switched to configuration voters=(16735403960572853034)
2021-08-16 22:24:53.849298 I | etcdserver/membership: added member e840193bf29c3b2a [https://192.168.50.226:2380] to cluster 99b90e1bea73c730
2021-08-16 22:24:53.860198 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2021-08-16 22:24:53.864997 I | embed: listening for metrics on http://127.0.0.1:2381
2021-08-16 22:24:53.865214 I | embed: listening for peers on 192.168.50.226:2380
2021-08-16 22:24:53.868083 N | etcdserver/membership: set the initial cluster version to 3.4
2021-08-16 22:24:53.871735 I | etcdserver/api: enabled capabilities for version 3.4
raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a is starting a new election at term 2
raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a became candidate at term 3
raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a received MsgVoteResp from e840193bf29c3b2a at term 3
raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a became leader at term 3
raft2021/08/16 22:24:54 INFO: raft.node: e840193bf29c3b2a elected leader e840193bf29c3b2a at term 3
2021-08-16 22:24:54.968820 I | embed: ready to serve client requests
2021-08-16 22:24:54.969394 I | etcdserver: published {Name:pause-20210816222224-6986 ClientURLs:[https://192.168.50.226:2379]} to cluster 99b90e1bea73c730
2021-08-16 22:24:54.971284 I | embed: serving client requests on 192.168.50.226:2379
2021-08-16 22:24:54.971462 I | embed: ready to serve client requests
2021-08-16 22:24:54.973508 I | embed: serving client requests on 127.0.0.1:2379
2021-08-16 22:25:03.067902 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-gkxhz\" " with result "range_response_count:1 size:4860" took too long (140.807991ms) to execute
2021-08-16 22:25:06.747736 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-16 22:25:08.138740 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-16 22:25:10.645123 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/coredns\" " with result "range_response_count:1 size:3838" took too long (108.124514ms) to execute
2021-08-16 22:25:10.645989 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:665" took too long (107.967343ms) to execute
2021-08-16 22:25:18.137756 I | etcdserver/api/etcdhttp: /health OK (status code 200)
*
* ==> kernel <==
* 22:25:18 up 2 min, 0 users, load average: 3.35, 1.58, 0.61
Linux pause-20210816222224-6986 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2020.02.12"
*
* ==> kube-apiserver [38dc61b214a9cbd019de4ca9ab52fb6baf728336de6d715df22b027522ad8b20] <==
* I0816 22:23:41.890272 1 trace.go:205] Trace[914939944]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (16-Aug-2021 22:23:41.117) (total time: 772ms):
Trace[914939944]: [772.29448ms] [772.29448ms] END
I0816 22:23:41.897880 1 trace.go:205] Trace[372773048]: "List" url:/api/v1/nodes,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/system:serviceaccount:kube-system:node-controller,client:192.168.50.226,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (16-Aug-2021 22:23:41.117) (total time: 780ms):
Trace[372773048]: ---"Listing from storage done" 773ms (22:23:00.891)
Trace[372773048]: [780.024685ms] [780.024685ms] END
I0816 22:23:41.899245 1 trace.go:205] Trace[189474875]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-20210816222224-6986,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.50.226,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (16-Aug-2021 22:23:41.107) (total time: 791ms):
Trace[189474875]: ---"About to write a response" 791ms (22:23:00.899)
Trace[189474875]: [791.769473ms] [791.769473ms] END
I0816 22:23:41.914143 1 trace.go:205] Trace[1803257945]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (16-Aug-2021 22:23:41.101) (total time: 812ms):
Trace[1803257945]: ---"initial value restored" 795ms (22:23:00.897)
Trace[1803257945]: [812.099383ms] [812.099383ms] END
I0816 22:23:46.219827 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
I0816 22:23:46.322056 1 controller.go:611] quota admission added evaluator for: replicasets.apps
I0816 22:23:54.101003 1 trace.go:205] Trace[1429856954]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-ctgf5,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.50.1,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 22:23:53.002) (total time: 1098ms):
Trace[1429856954]: ---"About to write a response" 1098ms (22:23:00.100)
Trace[1429856954]: [1.0988209s] [1.0988209s] END
I0816 22:23:56.194218 1 client.go:360] parsed scheme: "passthrough"
I0816 22:23:56.194943 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0816 22:23:56.195388 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0816 22:23:57.770900 1 trace.go:205] Trace[2103117378]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-ctgf5,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.50.1,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 22:23:57.002) (total time: 767ms):
Trace[2103117378]: ---"About to write a response" 767ms (22:23:00.770)
Trace[2103117378]: [767.944134ms] [767.944134ms] END
I0816 22:24:32.818404 1 client.go:360] parsed scheme: "passthrough"
I0816 22:24:32.818597 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0816 22:24:32.818691 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
*
* ==> kube-apiserver [53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf] <==
* I0816 22:24:59.052878 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0816 22:24:59.052897 1 crd_finalizer.go:266] Starting CRDFinalizer
I0816 22:24:59.071128 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0816 22:24:59.071704 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0816 22:24:59.072328 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0816 22:24:59.072872 1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I0816 22:24:59.173327 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I0816 22:24:59.176720 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
E0816 22:24:59.181278 1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I0816 22:24:59.206356 1 shared_informer.go:247] Caches are synced for node_authorizer
I0816 22:24:59.225165 1 cache.go:39] Caches are synced for autoregister controller
I0816 22:24:59.227741 1 apf_controller.go:299] Running API Priority and Fairness config worker
I0816 22:24:59.230223 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0816 22:24:59.244026 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0816 22:24:59.248943 1 shared_informer.go:247] Caches are synced for crd-autoregister
I0816 22:25:00.021310 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0816 22:25:00.022052 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0816 22:25:00.034218 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0816 22:25:01.108795 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0816 22:25:01.182177 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0816 22:25:01.279321 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0816 22:25:01.344553 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0816 22:25:01.382891 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0816 22:25:11.471022 1 controller.go:611] quota admission added evaluator for: endpoints
I0816 22:25:13.002505 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
*
* ==> kube-apiserver [7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612] <==
*
* ==> kube-controller-manager [825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7] <==
* I0816 22:25:12.900492 1 shared_informer.go:247] Caches are synced for GC
I0816 22:25:12.900735 1 shared_informer.go:247] Caches are synced for job
I0816 22:25:12.908539 1 shared_informer.go:247] Caches are synced for ReplicaSet
I0816 22:25:12.910182 1 shared_informer.go:247] Caches are synced for persistent volume
I0816 22:25:12.925990 1 shared_informer.go:247] Caches are synced for stateful set
I0816 22:25:12.926195 1 shared_informer.go:247] Caches are synced for HPA
I0816 22:25:12.931999 1 shared_informer.go:247] Caches are synced for attach detach
I0816 22:25:12.933971 1 shared_informer.go:247] Caches are synced for PVC protection
I0816 22:25:12.934151 1 shared_informer.go:247] Caches are synced for deployment
I0816 22:25:12.943776 1 shared_informer.go:247] Caches are synced for ephemeral
I0816 22:25:12.963727 1 shared_informer.go:247] Caches are synced for endpoint_slice
I0816 22:25:12.969209 1 shared_informer.go:247] Caches are synced for taint
I0816 22:25:12.969381 1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone:
W0816 22:25:12.969524 1 node_lifecycle_controller.go:1013] Missing timestamp for Node pause-20210816222224-6986. Assuming now as a timestamp.
I0816 22:25:12.969564 1 node_lifecycle_controller.go:1214] Controller detected that zone is now in state Normal.
I0816 22:25:12.970457 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
I0816 22:25:12.970831 1 event.go:291] "Event occurred" object="pause-20210816222224-6986" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20210816222224-6986 event: Registered Node pause-20210816222224-6986 in Controller"
I0816 22:25:12.974749 1 shared_informer.go:247] Caches are synced for endpoint
I0816 22:25:13.000548 1 shared_informer.go:247] Caches are synced for disruption
I0816 22:25:13.000739 1 disruption.go:371] Sending events to api server.
I0816 22:25:13.004608 1 shared_informer.go:247] Caches are synced for resource quota
I0816 22:25:13.016848 1 shared_informer.go:247] Caches are synced for resource quota
I0816 22:25:13.386564 1 shared_informer.go:247] Caches are synced for garbage collector
I0816 22:25:13.386597 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0816 22:25:13.440139 1 shared_informer.go:247] Caches are synced for garbage collector
*
* ==> kube-controller-manager [8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4] <==
*
* ==> kube-proxy [2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2] <==
* I0816 22:25:00.641886 1 node.go:172] Successfully retrieved node IP: 192.168.50.226
I0816 22:25:00.641938 1 server_others.go:140] Detected node IP 192.168.50.226
W0816 22:25:00.642012 1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
W0816 22:25:00.805515 1 server_others.go:197] No iptables support for IPv6: exit status 3
I0816 22:25:00.805539 1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
I0816 22:25:00.805560 1 server_others.go:212] Using iptables Proxier.
I0816 22:25:00.806059 1 server.go:643] Version: v1.21.3
I0816 22:25:00.807251 1 config.go:315] Starting service config controller
I0816 22:25:00.807281 1 shared_informer.go:240] Waiting for caches to sync for service config
I0816 22:25:00.807307 1 config.go:224] Starting endpoint slice config controller
I0816 22:25:00.807313 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
W0816 22:25:00.812511 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0816 22:25:00.816722 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
I0816 22:25:00.907844 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0816 22:25:00.907906 1 shared_informer.go:247] Caches are synced for service config
*
* ==> kube-proxy [9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d] <==
*
* ==> kube-proxy [a8503bd796d5d979a6e1b8b5154986e8b77de391b4f091211451ea5f52808e52] <==
* I0816 22:23:49.316430 1 node.go:172] Successfully retrieved node IP: 192.168.50.226
I0816 22:23:49.316608 1 server_others.go:140] Detected node IP 192.168.50.226
W0816 22:23:49.316822 1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
W0816 22:23:49.402698 1 server_others.go:197] No iptables support for IPv6: exit status 3
I0816 22:23:49.403462 1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
I0816 22:23:49.404047 1 server_others.go:212] Using iptables Proxier.
I0816 22:23:49.407950 1 server.go:643] Version: v1.21.3
I0816 22:23:49.410864 1 config.go:315] Starting service config controller
I0816 22:23:49.413112 1 config.go:224] Starting endpoint slice config controller
I0816 22:23:49.419474 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
W0816 22:23:49.421254 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
I0816 22:23:49.413718 1 shared_informer.go:240] Waiting for caches to sync for service config
I0816 22:23:49.425958 1 shared_informer.go:247] Caches are synced for service config
W0816 22:23:49.425586 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
I0816 22:23:49.520425 1 shared_informer.go:247] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1] <==
* I0816 22:24:54.634243 1 serving.go:347] Generated self-signed cert in-memory
W0816 22:24:59.095457 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0816 22:24:59.098028 1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0816 22:24:59.098491 1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
W0816 22:24:59.098734 1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0816 22:24:59.166481 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0816 22:24:59.178395 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0816 22:24:59.177851 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0816 22:24:59.194249 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0816 22:24:59.304036 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [8710cefecdbe5d31cd44e9ae3378bc08cbc56001326a1cb38026755196cac7d1] <==
* E0816 22:23:21.172468 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0816 22:23:21.189536 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0816 22:23:21.300836 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0816 22:23:21.329219 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0816 22:23:21.448607 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0816 22:23:21.504104 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0816 22:23:21.504531 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0816 22:23:21.597849 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0816 22:23:21.612843 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0816 22:23:21.671333 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0816 22:23:21.827198 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0816 22:23:21.852843 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0816 22:23:21.867015 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0816 22:23:21.910139 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0816 22:23:23.291774 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0816 22:23:23.356078 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0816 22:23:23.452841 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0816 22:23:23.464942 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0816 22:23:23.644764 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0816 22:23:23.649142 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0816 22:23:23.710606 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0816 22:23:23.980099 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0816 22:23:24.052112 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0816 22:23:24.168543 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0816 22:23:30.043826 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a] <==
*
* ==> kubelet <==
* -- Logs begin at Mon 2021-08-16 22:22:35 UTC, end at Mon 2021-08-16 22:25:19 UTC. --
Aug 16 22:24:58 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:58.514985 4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
Aug 16 22:24:58 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:58.616076 4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
Aug 16 22:24:58 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:58.718006 4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
Aug 16 22:24:58 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:58.819104 4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
Aug 16 22:24:58 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:58.919357 4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:59.020392 4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.121233 4551 kuberuntime_manager.go:1044] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.122462 4551 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.228577 4551 kubelet_node_status.go:109] "Node was previously registered" node="pause-20210816222224-6986"
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.228853 4551 kubelet_node_status.go:74] "Successfully registered node" node="pause-20210816222224-6986"
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.536346 4551 apiserver.go:52] "Watching apiserver"
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.540959 4551 topology_manager.go:187] "Topology Admit Handler"
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.541581 4551 topology_manager.go:187] "Topology Admit Handler"
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.609734 4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3c0e0899-31c1-477a-a6d4-2844091deea2-kube-proxy\") pod \"kube-proxy-7l59t\" (UID: \"3c0e0899-31c1-477a-a6d4-2844091deea2\") "
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.610130 4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c0e0899-31c1-477a-a6d4-2844091deea2-xtables-lock\") pod \"kube-proxy-7l59t\" (UID: \"3c0e0899-31c1-477a-a6d4-2844091deea2\") "
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.610271 4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c0e0899-31c1-477a-a6d4-2844091deea2-lib-modules\") pod \"kube-proxy-7l59t\" (UID: \"3c0e0899-31c1-477a-a6d4-2844091deea2\") "
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.610503 4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2grh\" (UniqueName: \"kubernetes.io/projected/3c0e0899-31c1-477a-a6d4-2844091deea2-kube-api-access-b2grh\") pod \"kube-proxy-7l59t\" (UID: \"3c0e0899-31c1-477a-a6d4-2844091deea2\") "
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.711424 4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgpd2\" (UniqueName: \"kubernetes.io/projected/5aa76749-775e-423d-bbf9-680a20a27051-kube-api-access-rgpd2\") pod \"coredns-558bd4d5db-gkxhz\" (UID: \"5aa76749-775e-423d-bbf9-680a20a27051\") "
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.712578 4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5aa76749-775e-423d-bbf9-680a20a27051-config-volume\") pod \"coredns-558bd4d5db-gkxhz\" (UID: \"5aa76749-775e-423d-bbf9-680a20a27051\") "
Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.713123 4551 reconciler.go:157] "Reconciler: start to sync state"
Aug 16 22:25:00 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:00.142816 4551 scope.go:111] "RemoveContainer" containerID="9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d"
Aug 16 22:25:03 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:03.115940 4551 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
Aug 16 22:25:11 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:11.548694 4551 topology_manager.go:187] "Topology Admit Handler"
Aug 16 22:25:11 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:11.620746 4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4f138dc7-da0e-4775-b4de-b0f7d616b212-tmp\") pod \"storage-provisioner\" (UID: \"4f138dc7-da0e-4775-b4de-b0f7d616b212\") "
Aug 16 22:25:11 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:11.621027 4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7pzn\" (UniqueName: \"kubernetes.io/projected/4f138dc7-da0e-4775-b4de-b0f7d616b212-kube-api-access-n7pzn\") pod \"storage-provisioner\" (UID: \"4f138dc7-da0e-4775-b4de-b0f7d616b212\") "
*
* ==> storage-provisioner [f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a] <==
* I0816 22:25:12.920503 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0816 22:25:12.958814 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0816 22:25:12.959432 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
E0816 22:25:13.028463 1 leaderelection.go:361] Failed to update lock: Operation cannot be fulfilled on endpoints "k8s.io-minikube-hostpath": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/kube-system/k8s.io-minikube-hostpath, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 3f27bbad-30a1-4386-9d09-80525f79ada9, UID in object meta:
I0816 22:25:16.530709 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0816 22:25:16.540393 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20210816222224-6986_409bd634-6095-4f9a-ab3f-09a5e699e184!
I0816 22:25:16.544131 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"32edeef2-57a3-43b1-a3d9-e7ecc2ed1a14", APIVersion:"v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20210816222224-6986_409bd634-6095-4f9a-ab3f-09a5e699e184 became leader
I0816 22:25:16.647143 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20210816222224-6986_409bd634-6095-4f9a-ab3f-09a5e699e184!
-- /stdout --
** stderr **
E0816 22:25:18.365377 11348 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423": Process exited with status 1
stdout:
stderr:
time="2021-08-16T22:25:18Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log\": lstat /var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log: no such file or directory"
output: "\n** stderr ** \ntime=\"2021-08-16T22:25:18Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log\\\": lstat /var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log: no such file or directory\"\n\n** /stderr **"
E0816 22:25:18.618219 11348 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612": Process exited with status 1
stdout:
stderr:
time="2021-08-16T22:25:18Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log\": lstat /var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log: no such file or directory"
output: "\n** stderr ** \ntime=\"2021-08-16T22:25:18Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log\\\": lstat /var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log: no such file or directory\"\n\n** /stderr **"
E0816 22:25:18.731515 11348 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4": Process exited with status 1
stdout:
stderr:
time="2021-08-16T22:25:18Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log\": lstat /var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log: no such file or directory"
output: "\n** stderr ** \ntime=\"2021-08-16T22:25:18Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log\\\": lstat /var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log: no such file or directory\"\n\n** /stderr **"
E0816 22:25:18.825217 11348 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d": Process exited with status 1
stdout:
stderr:
time="2021-08-16T22:25:18Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log\": lstat /var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log: no such file or directory"
output: "\n** stderr ** \ntime=\"2021-08-16T22:25:18Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log\\\": lstat /var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log: no such file or directory\"\n\n** /stderr **"
E0816 22:25:19.031711 11348 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a": Process exited with status 1
stdout:
stderr:
time="2021-08-16T22:25:19Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log\": lstat /var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log: no such file or directory"
output: "\n** stderr ** \ntime=\"2021-08-16T22:25:19Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log\\\": lstat /var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log: no such file or directory\"\n\n** /stderr **"
! unable to fetch logs for: etcd [3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423], kube-apiserver [7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612], kube-controller-manager [8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4], kube-proxy [9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d], kube-scheduler [97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a]
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (50.84s)