=== RUN TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run: /tmp/minikube-v1.16.0.3648144420.exe start -p running-upgrade-20220127031538-6703 --memory=2200 --vm-driver=docker --container-runtime=containerd
=== CONT TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.16.0.3648144420.exe start -p running-upgrade-20220127031538-6703 --memory=2200 --vm-driver=docker --container-runtime=containerd: (56.792626443s)
version_upgrade_test.go:137: (dbg) Run: out/minikube-linux-amd64 start -p running-upgrade-20220127031538-6703 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
=== CONT TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-20220127031538-6703 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd: exit status 81 (32.788270392s)
-- stdout --
* [running-upgrade-20220127031538-6703] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=13251
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Kubernetes 1.23.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.2
* Using the docker driver based on existing profile
- More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
- More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
* Starting control plane node running-upgrade-20220127031538-6703 in cluster running-upgrade-20220127031538-6703
* Pulling base image ...
* Updating the running docker "running-upgrade-20220127031538-6703" container ...
* Preparing Kubernetes v1.20.0 on containerd 1.4.3 ...
- kubelet.cni-conf-dir=/etc/cni/net.mk
-- /stdout --
** stderr **
I0127 03:16:35.819749 162215 out.go:297] Setting OutFile to fd 1 ...
I0127 03:16:35.819939 162215 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0127 03:16:35.819966 162215 out.go:310] Setting ErrFile to fd 2...
I0127 03:16:35.819981 162215 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0127 03:16:35.820190 162215 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/bin
I0127 03:16:35.821201 162215 out.go:304] Setting JSON to false
I0127 03:16:35.822835 162215 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3550,"bootTime":1643249846,"procs":596,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1028-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0127 03:16:35.822921 162215 start.go:122] virtualization: kvm guest
I0127 03:16:35.825204 162215 out.go:176] * [running-upgrade-20220127031538-6703] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
I0127 03:16:35.826947 162215 out.go:176] - MINIKUBE_LOCATION=13251
I0127 03:16:35.825412 162215 notify.go:174] Checking for updates...
I0127 03:16:35.829030 162215 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0127 03:16:35.831959 162215 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
I0127 03:16:35.833528 162215 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
I0127 03:16:35.836136 162215 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64
I0127 03:16:35.836736 162215 config.go:176] Loaded profile config "running-upgrade-20220127031538-6703": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0127 03:16:35.838774 162215 out.go:176] * Kubernetes 1.23.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.2
I0127 03:16:35.838811 162215 driver.go:344] Setting default libvirt URI to qemu:///system
I0127 03:16:35.909561 162215 docker.go:132] docker version: linux-20.10.12
I0127 03:16:35.909680 162215 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0127 03:16:36.068853 162215 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:57 SystemTime:2022-01-27 03:16:35.955547359 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1028-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668890624 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
I0127 03:16:36.068964 162215 docker.go:237] overlay module found
I0127 03:16:36.071599 162215 out.go:176] * Using the docker driver based on existing profile
I0127 03:16:36.071628 162215 start.go:281] selected driver: docker
I0127 03:16:36.071650 162215 start.go:798] validating driver "docker" against &{Name:running-upgrade-20220127031538-6703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:running-upgrade-20220127031538-6703 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.48 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false}
I0127 03:16:36.071758 162215 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
W0127 03:16:36.071791 162215 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0127 03:16:36.071810 162215 out.go:241] ! Your cgroup does not allow setting memory.
! Your cgroup does not allow setting memory.
I0127 03:16:36.077711 162215 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
I0127 03:16:36.078361 162215 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0127 03:16:36.213112 162215 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:true NGoroutines:66 SystemTime:2022-01-27 03:16:36.120037128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1028-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668890624 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
W0127 03:16:36.213277 162215 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0127 03:16:36.213307 162215 out.go:241] ! Your cgroup does not allow setting memory.
! Your cgroup does not allow setting memory.
I0127 03:16:36.215327 162215 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
I0127 03:16:36.215442 162215 cni.go:93] Creating CNI manager for ""
I0127 03:16:36.215462 162215 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
I0127 03:16:36.215481 162215 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I0127 03:16:36.215490 162215 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I0127 03:16:36.215499 162215 start_flags.go:302] config:
{Name:running-upgrade-20220127031538-6703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:running-upgrade-20220127031538-6703 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.48 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false}
I0127 03:16:36.217500 162215 out.go:176] * Starting control plane node running-upgrade-20220127031538-6703 in cluster running-upgrade-20220127031538-6703
I0127 03:16:36.217551 162215 cache.go:120] Beginning downloading kic base image for docker with containerd
I0127 03:16:36.218982 162215 out.go:176] * Pulling base image ...
I0127 03:16:36.219023 162215 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0127 03:16:36.219058 162215 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.20.0-containerd-overlay2-amd64.tar.lz4
I0127 03:16:36.219070 162215 cache.go:57] Caching tarball of preloaded images
I0127 03:16:36.219128 162215 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 in local docker daemon
I0127 03:16:36.219324 162215 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.20.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0127 03:16:36.219336 162215 cache.go:60] Finished verifying existence of preloaded tar for v1.20.0 on containerd
I0127 03:16:36.219481 162215 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/config.json ...
I0127 03:16:36.270518 162215 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 in local docker daemon, skipping pull
I0127 03:16:36.270551 162215 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 exists in daemon, skipping load
I0127 03:16:36.270572 162215 cache.go:208] Successfully downloaded all kic artifacts
I0127 03:16:36.270623 162215 start.go:313] acquiring machines lock for running-upgrade-20220127031538-6703: {Name:mkf1071b6262cf9755f15f8d9325a911dc32dfe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 03:16:36.270721 162215 start.go:317] acquired machines lock for "running-upgrade-20220127031538-6703" in 71.162µs
I0127 03:16:36.270757 162215 start.go:93] Skipping create...Using existing machine configuration
I0127 03:16:36.270768 162215 fix.go:55] fixHost starting:
I0127 03:16:36.271056 162215 cli_runner.go:133] Run: docker container inspect running-upgrade-20220127031538-6703 --format={{.State.Status}}
I0127 03:16:36.310104 162215 fix.go:108] recreateIfNeeded on running-upgrade-20220127031538-6703: state=Running err=<nil>
W0127 03:16:36.310134 162215 fix.go:134] unexpected machine state, will restart: <nil>
I0127 03:16:36.313029 162215 out.go:176] * Updating the running docker "running-upgrade-20220127031538-6703" container ...
I0127 03:16:36.313080 162215 machine.go:88] provisioning docker machine ...
I0127 03:16:36.313104 162215 ubuntu.go:169] provisioning hostname "running-upgrade-20220127031538-6703"
I0127 03:16:36.313161 162215 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220127031538-6703
I0127 03:16:36.357843 162215 main.go:130] libmachine: Using SSH client type: native
I0127 03:16:36.358054 162215 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil> [] 0s} 127.0.0.1 49332 <nil> <nil>}
I0127 03:16:36.358078 162215 main.go:130] libmachine: About to run SSH command:
sudo hostname running-upgrade-20220127031538-6703 && echo "running-upgrade-20220127031538-6703" | sudo tee /etc/hostname
I0127 03:16:36.496859 162215 main.go:130] libmachine: SSH cmd err, output: <nil>: running-upgrade-20220127031538-6703
I0127 03:16:36.496934 162215 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220127031538-6703
I0127 03:16:36.545848 162215 main.go:130] libmachine: Using SSH client type: native
I0127 03:16:36.546027 162215 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil> [] 0s} 127.0.0.1 49332 <nil> <nil>}
I0127 03:16:36.546060 162215 main.go:130] libmachine: About to run SSH command:
if ! grep -xq '.*\srunning-upgrade-20220127031538-6703' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-20220127031538-6703/g' /etc/hosts;
else
echo '127.0.1.1 running-upgrade-20220127031538-6703' | sudo tee -a /etc/hosts;
fi
fi
I0127 03:16:36.674975 162215 main.go:130] libmachine: SSH cmd err, output: <nil>:
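The /etc/hosts script above is deliberately idempotent: it rewrites an existing 127.0.1.1 entry in place and only appends one when none exists, so re-provisioning the same container never duplicates the line. A quick manual check, using the profile name from this run, would be something like:
$ minikube -p running-upgrade-20220127031538-6703 ssh -- grep 127.0.1.1 /etc/hosts
which should print the "127.0.1.1 running-upgrade-20220127031538-6703" mapping written above.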
I0127 03:16:36.675000 162215 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube}
I0127 03:16:36.675032 162215 ubuntu.go:177] setting up certificates
I0127 03:16:36.675041 162215 provision.go:83] configureAuth start
I0127 03:16:36.675085 162215 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-20220127031538-6703
I0127 03:16:36.717780 162215 provision.go:138] copyHostCerts
I0127 03:16:36.717873 162215 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.pem, removing ...
I0127 03:16:36.717900 162215 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.pem
I0127 03:16:36.717972 162215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.pem (1078 bytes)
I0127 03:16:36.718112 162215 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cert.pem, removing ...
I0127 03:16:36.718137 162215 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cert.pem
I0127 03:16:36.718168 162215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cert.pem (1123 bytes)
I0127 03:16:36.718259 162215 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/key.pem, removing ...
I0127 03:16:36.718270 162215 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/key.pem
I0127 03:16:36.718295 162215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/key.pem (1675 bytes)
I0127 03:16:36.718384 162215 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-20220127031538-6703 san=[192.168.59.48 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-20220127031538-6703]
I0127 03:16:36.901757 162215 provision.go:172] copyRemoteCerts
I0127 03:16:36.901825 162215 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0127 03:16:36.901899 162215 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220127031538-6703
I0127 03:16:36.946556 162215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49332 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/running-upgrade-20220127031538-6703/id_rsa Username:docker}
I0127 03:16:37.039828 162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0127 03:16:37.059790 162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
I0127 03:16:37.077985 162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0127 03:16:37.095662 162215 provision.go:86] duration metric: configureAuth took 420.607228ms
I0127 03:16:37.095728 162215 ubuntu.go:193] setting minikube options for container-runtime
I0127 03:16:37.095950 162215 config.go:176] Loaded profile config "running-upgrade-20220127031538-6703": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0127 03:16:37.095969 162215 machine.go:91] provisioned docker machine in 782.877543ms
I0127 03:16:37.095979 162215 start.go:267] post-start starting for "running-upgrade-20220127031538-6703" (driver="docker")
I0127 03:16:37.096009 162215 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0127 03:16:37.096057 162215 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0127 03:16:37.096106 162215 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220127031538-6703
I0127 03:16:37.136145 162215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49332 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/running-upgrade-20220127031538-6703/id_rsa Username:docker}
I0127 03:16:37.226055 162215 ssh_runner.go:195] Run: cat /etc/os-release
I0127 03:16:37.229270 162215 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0127 03:16:37.229308 162215 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0127 03:16:37.229321 162215 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0127 03:16:37.229327 162215 info.go:137] Remote host: Ubuntu 20.04.1 LTS
I0127 03:16:37.229338 162215 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/addons for local assets ...
I0127 03:16:37.229401 162215 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files for local assets ...
I0127 03:16:37.229495 162215 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/67032.pem -> 67032.pem in /etc/ssl/certs
I0127 03:16:37.229599 162215 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0127 03:16:37.238159 162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/67032.pem --> /etc/ssl/certs/67032.pem (1708 bytes)
I0127 03:16:37.258270 162215 start.go:270] post-start completed in 162.254477ms
I0127 03:16:37.258334 162215 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0127 03:16:37.258369 162215 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220127031538-6703
I0127 03:16:37.293726 162215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49332 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/running-upgrade-20220127031538-6703/id_rsa Username:docker}
I0127 03:16:37.383499 162215 fix.go:57] fixHost completed within 1.112724956s
I0127 03:16:37.383548 162215 start.go:80] releasing machines lock for "running-upgrade-20220127031538-6703", held for 1.112806432s
I0127 03:16:37.383654 162215 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-20220127031538-6703
I0127 03:16:37.418044 162215 ssh_runner.go:195] Run: systemctl --version
I0127 03:16:37.418094 162215 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220127031538-6703
I0127 03:16:37.418107 162215 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0127 03:16:37.418170 162215 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220127031538-6703
I0127 03:16:37.454791 162215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49332 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/running-upgrade-20220127031538-6703/id_rsa Username:docker}
I0127 03:16:37.465223 162215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49332 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/running-upgrade-20220127031538-6703/id_rsa Username:docker}
I0127 03:16:37.551241 162215 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0127 03:16:37.573143 162215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0127 03:16:37.581870 162215 docker.go:183] disabling docker service ...
I0127 03:16:37.581924 162215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0127 03:16:37.597745 162215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0127 03:16:37.606600 162215 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0127 03:16:37.698698 162215 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0127 03:16:37.774854 162215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0127 03:16:37.784029 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0127 03:16:37.799704 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My4yIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
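The long printf argument above is minikube's generated containerd config, base64-encoded so it survives shell quoting. Decoding the head of the blob (here $CONFIG_B64 is a stand-in for the full string above) yields the expected TOML:
$ printf %s "$CONFIG_B64" | base64 -d | head -n 4
version = 2
root = "/var/lib/containerd"
state = "/run/containerd"
oom_score = 0
Further down, the decoded config sets conf_dir = "/etc/cni/net.mk" under [plugins."io.containerd.grpc.v1.cri".cni], consistent with the kubelet.cni-conf-dir=/etc/cni/net.mk extra-config chosen earlier in this run.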
I0127 03:16:37.814976 162215 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0127 03:16:37.821033 162215 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0127 03:16:37.827506 162215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 03:16:37.916718 162215 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0127 03:16:38.045659 162215 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
I0127 03:16:38.045738 162215 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0127 03:16:38.050251 162215 start.go:462] Will wait 60s for crictl version
I0127 03:16:38.050318 162215 ssh_runner.go:195] Run: sudo crictl version
I0127 03:16:38.078541 162215 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
stdout:
stderr:
time="2022-01-27T03:16:38Z" level=fatal msg="getting the runtime version failed: rpc error: code = Unknown desc = server is not initialized yet"
I0127 03:16:49.127192 162215 ssh_runner.go:195] Run: sudo crictl version
I0127 03:16:49.142397 162215 start.go:471] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.4.3
RuntimeApiVersion: v1alpha2
I0127 03:16:49.142463 162215 ssh_runner.go:195] Run: containerd --version
I0127 03:16:49.171121 162215 ssh_runner.go:195] Run: containerd --version
I0127 03:16:49.234691 162215 out.go:176] * Preparing Kubernetes v1.20.0 on containerd 1.4.3 ...
I0127 03:16:49.234764 162215 cli_runner.go:133] Run: docker network inspect running-upgrade-20220127031538-6703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0127 03:16:49.273789 162215 ssh_runner.go:195] Run: grep 192.168.59.1 host.minikube.internal$ /etc/hosts
I0127 03:16:49.305738 162215 out.go:176] - kubelet.cni-conf-dir=/etc/cni/net.mk
I0127 03:16:49.305821 162215 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0127 03:16:49.305896 162215 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 03:16:49.324115 162215 containerd.go:608] couldn't find preloaded image for "gcr.io/k8s-minikube/storage-provisioner:v5". assuming images are not preloaded.
I0127 03:16:49.324182 162215 ssh_runner.go:195] Run: which lz4
I0127 03:16:49.328061 162215 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0127 03:16:49.331530 162215 ssh_runner.go:352] existence check for /preloaded.tar.lz4: source file and destination file are different sizes
I0127 03:16:49.331587 162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.20.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (582465074 bytes)
I0127 03:16:52.346824 162215 containerd.go:555] Took 3.018789 seconds to copy over tarball
I0127 03:16:52.346914 162215 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0127 03:17:03.570053 162215 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (11.223111505s)
I0127 03:17:03.570082 162215 containerd.go:562] Took 11.223225 seconds to extract the tarball
I0127 03:17:03.570093 162215 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0127 03:17:03.660335 162215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 03:17:03.871641 162215 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0127 03:17:04.105253 162215 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 03:17:04.124625 162215 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.20.0 k8s.gcr.io/kube-controller-manager:v1.20.0 k8s.gcr.io/kube-scheduler:v1.20.0 k8s.gcr.io/kube-proxy:v1.20.0 k8s.gcr.io/pause:3.2 k8s.gcr.io/etcd:3.4.13-0 k8s.gcr.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.3.1 docker.io/kubernetesui/metrics-scraper:v1.0.7]
I0127 03:17:04.124734 162215 image.go:134] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.7
I0127 03:17:04.124936 162215 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.20.0
I0127 03:17:04.125040 162215 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.20.0
I0127 03:17:04.125150 162215 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.20.0
I0127 03:17:04.125243 162215 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.20.0
I0127 03:17:04.125465 162215 image.go:134] retrieving image: k8s.gcr.io/pause:3.2
I0127 03:17:04.125633 162215 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.13-0
I0127 03:17:04.125741 162215 image.go:134] retrieving image: k8s.gcr.io/coredns:1.7.0
I0127 03:17:04.125832 162215 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0127 03:17:04.125921 162215 image.go:134] retrieving image: docker.io/kubernetesui/dashboard:v2.3.1
I0127 03:17:04.127466 162215 image.go:180] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: reference does not exist
I0127 03:17:04.127977 162215 image.go:180] daemon lookup for docker.io/kubernetesui/dashboard:v2.3.1: Error response from daemon: reference does not exist
I0127 03:17:04.128109 162215 image.go:180] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.7: Error response from daemon: reference does not exist
I0127 03:17:04.128136 162215 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.4.13-0: Error response from daemon: reference does not exist
I0127 03:17:04.128278 162215 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.20.0: Error response from daemon: reference does not exist
I0127 03:17:04.128413 162215 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.20.0: Error response from daemon: reference does not exist
I0127 03:17:04.128440 162215 image.go:180] daemon lookup for k8s.gcr.io/pause:3.2: Error response from daemon: reference does not exist
I0127 03:17:04.128545 162215 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.20.0: Error response from daemon: reference does not exist
I0127 03:17:04.128565 162215 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.20.0: Error response from daemon: reference does not exist
I0127 03:17:04.128669 162215 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.7.0: Error response from daemon: reference does not exist
I0127 03:17:04.418344 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.20.0"
I0127 03:17:04.419139 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.20.0"
I0127 03:17:04.424426 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.4.13-0"
I0127 03:17:04.427553 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns:1.7.0"
I0127 03:17:04.443893 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.20.0"
I0127 03:17:04.444493 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.20.0"
I0127 03:17:04.465910 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
I0127 03:17:04.509961 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.2"
I0127 03:17:05.015592 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/metrics-scraper:v1.0.7"
I0127 03:17:05.022587 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/dashboard:v2.3.1"
I0127 03:17:05.309789 162215 cache_images.go:123] Successfully loaded all cached images
I0127 03:17:05.309814 162215 cache_images.go:92] LoadImages completed in 1.185159822s
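Each of the checks above pipes "sudo ctr -n=k8s.io images check" through grep, verifying that the image is present and complete in containerd's k8s.io namespace, the namespace the kubelet's CRI uses. A manual equivalent that lists what the preload produced:
$ sudo ctr -n=k8s.io images ls -q | grep kube
which should include the k8s.gcr.io/kube-* images for v1.20.0 enumerated at LoadImages start.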
I0127 03:17:05.309874 162215 ssh_runner.go:195] Run: sudo crictl info
I0127 03:17:05.330052 162215 cni.go:93] Creating CNI manager for ""
I0127 03:17:05.330072 162215 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
I0127 03:17:05.330083 162215 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0127 03:17:05.330095 162215 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.59.48 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-20220127031538-6703 NodeName:running-upgrade-20220127031538-6703 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.59.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.59.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0127 03:17:05.330263 162215 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.59.48
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "running-upgrade-20220127031538-6703"
kubeletExtraArgs:
node-ip: 192.168.59.48
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.59.48"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
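This rendered kubeadm config is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below (the 2088-byte scp). If a start like this one stalls, the config can be exercised without touching cluster state via kubeadm's dry-run mode, using the binaries path from this log (a hypothetical debugging invocation; the test itself never runs this):
$ sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run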
I0127 03:17:05.330371 162215 kubeadm.go:791] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=running-upgrade-20220127031538-6703 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.59.48 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:running-upgrade-20220127031538-6703 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
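The unit drop-in above is the 581-byte file scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf just below; the empty ExecStart= line clears the stock command so the following one can replace it with the v1.20.0 kubelet pointed at containerd's socket. The merged unit can be inspected on the node with, for example:
$ minikube -p running-upgrade-20220127031538-6703 ssh -- systemctl cat kubelet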
I0127 03:17:05.330429 162215 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0127 03:17:05.339921 162215 binaries.go:44] Found k8s binaries, skipping transfer
I0127 03:17:05.339997 162215 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0127 03:17:05.347885 162215 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (581 bytes)
I0127 03:17:05.363235 162215 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0127 03:17:05.408320 162215 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2088 bytes)
I0127 03:17:05.432119 162215 ssh_runner.go:195] Run: grep 192.168.59.48 control-plane.minikube.internal$ /etc/hosts
I0127 03:17:05.436673 162215 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703 for IP: 192.168.59.48
I0127 03:17:05.436796 162215 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.key
I0127 03:17:05.436850 162215 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.key
I0127 03:17:05.436973 162215 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/client.key
I0127 03:17:05.437053 162215 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/apiserver.key.fc40ab25
I0127 03:17:05.437109 162215 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/proxy-client.key
I0127 03:17:05.437225 162215 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/6703.pem (1338 bytes)
W0127 03:17:05.437268 162215 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/6703_empty.pem, impossibly tiny 0 bytes
I0127 03:17:05.437284 162215 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca-key.pem (1675 bytes)
I0127 03:17:05.437316 162215 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem (1078 bytes)
I0127 03:17:05.437342 162215 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/cert.pem (1123 bytes)
I0127 03:17:05.437364 162215 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/key.pem (1675 bytes)
I0127 03:17:05.437417 162215 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/67032.pem (1708 bytes)
I0127 03:17:05.438513 162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0127 03:17:05.461440 162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0127 03:17:05.516402 162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0127 03:17:05.537977 162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0127 03:17:05.561369 162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0127 03:17:05.629987 162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0127 03:17:05.657572 162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0127 03:17:05.724754 162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0127 03:17:05.808160 162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/67032.pem --> /usr/share/ca-certificates/67032.pem (1708 bytes)
I0127 03:17:05.914815 162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0127 03:17:05.937255 162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/6703.pem --> /usr/share/ca-certificates/6703.pem (1338 bytes)
I0127 03:17:05.960381 162215 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0127 03:17:06.036900 162215 ssh_runner.go:195] Run: openssl version
I0127 03:17:06.042924 162215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67032.pem && ln -fs /usr/share/ca-certificates/67032.pem /etc/ssl/certs/67032.pem"
I0127 03:17:06.064802 162215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67032.pem
I0127 03:17:06.068353 162215 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 27 02:47 /usr/share/ca-certificates/67032.pem
I0127 03:17:06.068400 162215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67032.pem
I0127 03:17:06.074066 162215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67032.pem /etc/ssl/certs/3ec20f2e.0"
I0127 03:17:06.104715 162215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0127 03:17:06.112573 162215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0127 03:17:06.115981 162215 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 27 02:42 /usr/share/ca-certificates/minikubeCA.pem
I0127 03:17:06.116036 162215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0127 03:17:06.120843 162215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0127 03:17:06.127821 162215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6703.pem && ln -fs /usr/share/ca-certificates/6703.pem /etc/ssl/certs/6703.pem"
I0127 03:17:06.136027 162215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6703.pem
I0127 03:17:06.139199 162215 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 27 02:47 /usr/share/ca-certificates/6703.pem
I0127 03:17:06.139245 162215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6703.pem
I0127 03:17:06.144267 162215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6703.pem /etc/ssl/certs/51391683.0"
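The three `openssl x509 -hash` calls above compute the subject-hash filenames (3ec20f2e, b5213941, 51391683) that OpenSSL expects to find in /etc/ssl/certs, so the `test -L || ln -fs` commands amount to a manual c_rehash of the three CAs. A minimal sketch of verifying one by hand inside the node (assuming the same paths as this run):

    # print the hash OpenSSL derives from the cert's subject
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # confirm the hash-named symlink resolves back to the certificate
    readlink -f /etc/ssl/certs/b5213941.0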
I0127 03:17:06.151276 162215 kubeadm.go:388] StartCluster: {Name:running-upgrade-20220127031538-6703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:running-upgrade-20220127031538-6703 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.48 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false}
I0127 03:17:06.151365 162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0127 03:17:06.151395 162215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0127 03:17:06.167608 162215 cri.go:87] found id: ""
I0127 03:17:06.167655 162215 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0127 03:17:06.207586 162215 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0127 03:17:06.215687 162215 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0127 03:17:06.216403 162215 kubeconfig.go:116] verify returned: extract IP: "running-upgrade-20220127031538-6703" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
I0127 03:17:06.216619 162215 kubeconfig.go:127] "running-upgrade-20220127031538-6703" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig - will repair!
I0127 03:17:06.217195 162215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig: {Name:mk52def711e0760588c8e7c9e046110fe006e484 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 03:17:06.241403 162215 kapi.go:59] client config for running-upgrade-20220127031538-6703: &rest.Config{Host:"https://192.168.59.48:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x15da7e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0127 03:17:06.243260 162215 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0127 03:17:06.251876 162215 kubeadm.go:593] needs reconfigure: configs differ:
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml 2022-01-27 03:16:10.898540450 +0000
+++ /var/tmp/minikube/kubeadm.yaml.new 2022-01-27 03:17:05.423671678 +0000
@@ -65,4 +65,10 @@
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
-metricsBindAddress: 192.168.59.48:10249
+metricsBindAddress: 0.0.0.0:10249
+conntrack:
+ maxPerCore: 0
+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
+ tcpEstablishedTimeout: 0s
+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
+ tcpCloseWaitTimeout: 0s
-- /stdout --
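The diff above is what triggers "needs reconfigure" at kubeadm.go:593: the new binary renders its KubeProxyConfiguration with metricsBindAddress 0.0.0.0:10249 and explicit zeroed conntrack timeouts, while the file written by the v1.16.0 binary still pins the node IP. The two renderings can be compared by hand while the profile container is up (a sketch, not part of the test flow):

    minikube -p running-upgrade-20220127031538-6703 ssh "sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new"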
I0127 03:17:06.251922 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0127 03:17:06.932139 162215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0127 03:17:06.943350 162215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 03:17:06.959832 162215 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
I0127 03:17:06.959882 162215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 03:17:06.968121 162215 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 03:17:06.968171 162215 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
W0127 03:17:07.446011 162215 out.go:241] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.11.0-1028-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.11.0-1028-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-8443]: Port 8443 is in use
[ERROR Port-10259]: Port 10259 is in use
[ERROR Port-10257]: Port 10257 is in use
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
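Every fatal preflight error here is a port conflict: 8443 (apiserver), 10259 (scheduler), 10257 (controller-manager) and 2379/2380 (etcd) are still bound, presumably by remnants of the control plane the old binary started, even after kubeadm reset. A quick way to identify the owners from the host (a sketch; the container name is this run's profile):

    docker exec running-upgrade-20220127031538-6703 sh -c 'ss -lptn | grep -E ":(8443|10257|10259|2379|2380)"'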
I0127 03:17:07.446055 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0127 03:17:07.512319 162215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0127 03:17:07.522160 162215 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
I0127 03:17:07.522213 162215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 03:17:07.529324 162215 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 03:17:07.529370 162215 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0127 03:17:07.690870 162215 kubeadm.go:390] StartCluster complete in 1.539598545s
I0127 03:17:07.690939 162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0127 03:17:07.690986 162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0127 03:17:07.704611 162215 cri.go:87] found id: ""
I0127 03:17:07.704637 162215 logs.go:274] 0 containers: []
W0127 03:17:07.704645 162215 logs.go:276] No container was found matching "kube-apiserver"
I0127 03:17:07.704668 162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0127 03:17:07.704745 162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0127 03:17:07.717933 162215 cri.go:87] found id: ""
I0127 03:17:07.717961 162215 logs.go:274] 0 containers: []
W0127 03:17:07.717971 162215 logs.go:276] No container was found matching "etcd"
I0127 03:17:07.717979 162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0127 03:17:07.718026 162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0127 03:17:07.731059 162215 cri.go:87] found id: ""
I0127 03:17:07.731079 162215 logs.go:274] 0 containers: []
W0127 03:17:07.731085 162215 logs.go:276] No container was found matching "coredns"
I0127 03:17:07.731090 162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0127 03:17:07.731152 162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0127 03:17:07.745381 162215 cri.go:87] found id: ""
I0127 03:17:07.745402 162215 logs.go:274] 0 containers: []
W0127 03:17:07.745408 162215 logs.go:276] No container was found matching "kube-scheduler"
I0127 03:17:07.745417 162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0127 03:17:07.745455 162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0127 03:17:07.762094 162215 cri.go:87] found id: ""
I0127 03:17:07.762125 162215 logs.go:274] 0 containers: []
W0127 03:17:07.762133 162215 logs.go:276] No container was found matching "kube-proxy"
I0127 03:17:07.762142 162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0127 03:17:07.762183 162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0127 03:17:07.775553 162215 cri.go:87] found id: ""
I0127 03:17:07.775580 162215 logs.go:274] 0 containers: []
W0127 03:17:07.775586 162215 logs.go:276] No container was found matching "kubernetes-dashboard"
I0127 03:17:07.775591 162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0127 03:17:07.775638 162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0127 03:17:07.789738 162215 cri.go:87] found id: ""
I0127 03:17:07.789766 162215 logs.go:274] 0 containers: []
W0127 03:17:07.789774 162215 logs.go:276] No container was found matching "storage-provisioner"
I0127 03:17:07.789782 162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0127 03:17:07.789830 162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0127 03:17:07.803044 162215 cri.go:87] found id: ""
I0127 03:17:07.803071 162215 logs.go:274] 0 containers: []
W0127 03:17:07.803078 162215 logs.go:276] No container was found matching "kube-controller-manager"
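Every crictl probe above comes back empty: after the kubeadm reset runs there are no control-plane containers left in containerd's k8s.io namespace, even though the preflight checks still see the ports as occupied. The same sweep can be reproduced by hand (a sketch; crictl must be pointed at the containerd socket):

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a --name kube-apiserver --quiet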
I0127 03:17:07.803086 162215 logs.go:123] Gathering logs for kubelet ...
I0127 03:17:07.803117 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0127 03:17:07.895271 162215 logs.go:123] Gathering logs for dmesg ...
I0127 03:17:07.895305 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0127 03:17:07.915018 162215 logs.go:123] Gathering logs for describe nodes ...
I0127 03:17:07.915058 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0127 03:17:08.197863 162215 logs.go:123] Gathering logs for containerd ...
I0127 03:17:08.197892 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0127 03:17:08.257172 162215 logs.go:123] Gathering logs for container status ...
I0127 03:17:08.257213 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0127 03:17:08.275508 162215 out.go:370] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.11.0-1028-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.11.0-1028-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-8443]: Port 8443 is in use
[ERROR Port-10259]: Port 10259 is in use
[ERROR Port-10257]: Port 10257 is in use
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W0127 03:17:08.275548 162215 out.go:241] *
W0127 03:17:08.275688 162215 out.go:241] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.11.0-1028-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.11.0-1028-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-8443]: Port 8443 is in use
[ERROR Port-10259]: Port 10259 is in use
[ERROR Port-10257]: Port 10257 is in use
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W0127 03:17:08.275702 162215 out.go:241] *
W0127 03:17:08.276469 162215 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0127 03:17:08.396612 162215 out.go:176]
W0127 03:17:08.396806 162215 out.go:241] X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.11.0-1028-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.11.0-1028-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-8443]: Port 8443 is in use
[ERROR Port-10259]: Port 10259 is in use
[ERROR Port-10257]: Port 10257 is in use
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W0127 03:17:08.396919 162215 out.go:241] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
W0127 03:17:08.396990 162215 out.go:241] * Related issue: https://github.com/kubernetes/minikube/issues/5484
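Note that `lsof -p<pid>` filters by process ID, not by port; to find the process holding a given port, the usual spellings are:

    sudo lsof -i :8443
    sudo ss -lptn 'sport = :8443'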
I0127 03:17:08.516534 162215 out.go:176]
** /stderr **
version_upgrade_test.go:139: upgrade from v1.16.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-20220127031538-6703 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd: exit status 81
panic.go:642: *** TestRunningBinaryUpgrade FAILED at 2022-01-27 03:17:08.555545289 +0000 UTC m=+2113.291999322
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======> post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:232: (dbg) Run: docker inspect running-upgrade-20220127031538-6703
helpers_test.go:236: (dbg) docker inspect running-upgrade-20220127031538-6703:
-- stdout --
[
{
"Id": "0cb3caba465d90ba761139b096e38c77136ee387b2b35553313a894675937411",
"Created": "2022-01-27T03:15:46.866746988Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 152601,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-01-27T03:15:47.358745002Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:06db6ca724463f987019154e0475424113315da76733d5b67f90e35719d46c4d",
"ResolvConfPath": "/var/lib/docker/containers/0cb3caba465d90ba761139b096e38c77136ee387b2b35553313a894675937411/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/0cb3caba465d90ba761139b096e38c77136ee387b2b35553313a894675937411/hostname",
"HostsPath": "/var/lib/docker/containers/0cb3caba465d90ba761139b096e38c77136ee387b2b35553313a894675937411/hosts",
"LogPath": "/var/lib/docker/containers/0cb3caba465d90ba761139b096e38c77136ee387b2b35553313a894675937411/0cb3caba465d90ba761139b096e38c77136ee387b2b35553313a894675937411-json.log",
"Name": "/running-upgrade-20220127031538-6703",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"running-upgrade-20220127031538-6703:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "running-upgrade-20220127031538-6703",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 2306867200,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/b7fa8ba6ff656bada769e1e60c3267ede44cf41f11cdb46e8a8c6e3b71f2b6fd-init/diff:/var/lib/docker/overlay2/48a6afa5e0a9516ce4dc1f5459b529e8154283097947fb2da9335c65368c5887/diff:/var/lib/docker/overlay2/6dbceb9cc216ca99567fbf9a5bf1fc96d700503aa51960c28af3924c1efb03c7/diff:/var/lib/docker/overlay2/a86b843001e156f716be5143b37e71ed5928e4e10d99bed21cf3773483ea17c5/diff:/var/lib/docker/overlay2/3eac27006d6f92241d2f42ba10eca71a5d0f90648fa7c4aa9ae73b759f6df770/diff:/var/lib/docker/overlay2/d15ced90c80ff8677732f9d26eb292c2a4e1545c26588d4544070b458501653c/diff:/var/lib/docker/overlay2/45645fdb8a2923b75e5b012368356441bb80262933cb1e2bcb8ffe658b8f9a45/diff:/var/lib/docker/overlay2/ac170811b40c36c35c27809bd81b8c23ce8661ddd82d1947f98485abff72bd4b/diff:/var/lib/docker/overlay2/8efa8472f00aa9bedc29412758a3b398d87ea0dc92476662a2e0344c46c663b9/diff:/var/lib/docker/overlay2/1164683f8b88eae06c95e6b3804f088d2035c727df3e9c456b05044372ee383d/diff:/var/lib/docker/overlay2/740b74
b4b91e2781b4e6d8521c4da1d332c4916d7d79383b77c1a2ddab8ccd2e/diff:/var/lib/docker/overlay2/ea509413497cba005bd19f179302c5f08d095f1c5c9a3bbfbb21850e19e3390c/diff:/var/lib/docker/overlay2/3c66ffb89b0b641c530714389ea38e6de8efeda792c328693cfc2194c3193b60/diff:/var/lib/docker/overlay2/5207a1c75b52f7376eda1627ba66a9240792a3fa96d186014d0d02d9adf57e9c/diff:/var/lib/docker/overlay2/c6eba072681c5d8947855204f792a0030cec1970639e088b12a99d23512cf8e3/diff:/var/lib/docker/overlay2/f1ae6aa616c8e759801078bd2bf4dfff76a2418756948c43bada9f1c0484860c/diff:/var/lib/docker/overlay2/97545af48f6dc52660e45e0dae9d498defbd2c20715fd9dc74c7ce304ba67043/diff:/var/lib/docker/overlay2/1941873d8cc5ec600b1f778c22cda64d688bd48ff81f335f7727860c8e718072/diff:/var/lib/docker/overlay2/b03d5c7215d2284a9c22cca30cdd66f610c8f3842b6bf8c1e4225305acc1eb39/diff:/var/lib/docker/overlay2/a857bd38deffdc9de25ba148039b1a3d4aca58d09ee4c67de63ec29d7a83bb9d/diff:/var/lib/docker/overlay2/c45a1482c32587e155ef7e149ea399b10ab07a265579d73d7730a4c3d4847cc5/diff:/var/lib/d
ocker/overlay2/15565ecef5e596a7aad526fb8d7e925b284014279fcb4226f560c1b8ad45ad35/diff:/var/lib/docker/overlay2/202a1c7df018d3dd5942d52372ccef41da658966eb357ad69f6608f3b027d321/diff:/var/lib/docker/overlay2/59b70058325819e20b0bebbc70846cf1fcbe95ea576cc28cffc14a82a9402ca4/diff:/var/lib/docker/overlay2/1230ef6cb66210a5a62715c30556d5f9251e81d7cee67df68886be81910c7db6/diff:/var/lib/docker/overlay2/46b452e38aae1d4280f874acff6cdacdde65a9d1785a0de0af581b218d3a2b26/diff:/var/lib/docker/overlay2/0a29c1731383192b00674d178bd378566a561c251e78a10f2de04721db311879/diff:/var/lib/docker/overlay2/7758341c3a0ab19235e017f7e88be25f37e1e2a263508513aaccd553cc6fb880/diff:/var/lib/docker/overlay2/42c9967b3df8c320f21f059303cc152fcc0583228cc80106572855ae7fbb87ae/diff:/var/lib/docker/overlay2/a2f0d15380d2fb22943e2441b85052c87e6cae06d9ebd665ecab557dc71e355f/diff:/var/lib/docker/overlay2/71af46fa98e949cffe4368e1331392f4fa3a1ac9bb236678c6ea9ea99ad637aa/diff:/var/lib/docker/overlay2/80be004ea477d9004f0ea34e691d11dcdccdb2e607fdbae42afa4486e72
676db/diff:/var/lib/docker/overlay2/c77a99aeb6fea504fe63df31ba6bcdbba041a5e911f9e53fa3ac5ff6e3656895/diff:/var/lib/docker/overlay2/11124e797e5aaba260680c1fb60457fa47062fb5868fad94b44d706c4a449ab0/diff:/var/lib/docker/overlay2/619cb3f0df642cae9ac698b34b205f59683496e42a3a232e0cc09baada9272d7/diff",
"MergedDir": "/var/lib/docker/overlay2/b7fa8ba6ff656bada769e1e60c3267ede44cf41f11cdb46e8a8c6e3b71f2b6fd/merged",
"UpperDir": "/var/lib/docker/overlay2/b7fa8ba6ff656bada769e1e60c3267ede44cf41f11cdb46e8a8c6e3b71f2b6fd/diff",
"WorkDir": "/var/lib/docker/overlay2/b7fa8ba6ff656bada769e1e60c3267ede44cf41f11cdb46e8a8c6e3b71f2b6fd/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "running-upgrade-20220127031538-6703",
"Source": "/var/lib/docker/volumes/running-upgrade-20220127031538-6703/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "running-upgrade-20220127031538-6703",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "running-upgrade-20220127031538-6703",
"name.minikube.sigs.k8s.io": "running-upgrade-20220127031538-6703",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "501cbd82eb0b223dfe8e2ffb54957946ede035476125e2fb4385688067488a76",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49332"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49331"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49330"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49329"
}
]
},
"SandboxKey": "/var/run/docker/netns/501cbd82eb0b",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"running-upgrade-20220127031538-6703": {
"IPAMConfig": {
"IPv4Address": "192.168.59.48"
},
"Links": null,
"Aliases": [
"0cb3caba465d",
"running-upgrade-20220127031538-6703"
],
"NetworkID": "71ffccd776eccfd25cebeec3d1662202d2e6983928d0b58895669eb96579526e",
"EndpointID": "71bc31de786f2fd0a2f5e2011547b7e79b80286df8b81f4e764b67f0b37b1ac4",
"Gateway": "192.168.59.1",
"IPAddress": "192.168.59.48",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:3b:30",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
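Instead of scanning the whole JSON dump, individual fields can be pulled with docker inspect's Go-template flag (a sketch using this run's container name):

    docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' running-upgrade-20220127031538-6703
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' running-upgrade-20220127031538-6703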
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-20220127031538-6703 -n running-upgrade-20220127031538-6703
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-20220127031538-6703 -n running-upgrade-20220127031538-6703: exit status 2 (425.081478ms)
-- stdout --
Running
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======> post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p running-upgrade-20220127031538-6703 logs -n 25
E0127 03:17:09.641906 6703 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/ingress-addon-legacy-20220127025035-6703/client.crt: no such file or directory
=== CONT TestRunningBinaryUpgrade
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p running-upgrade-20220127031538-6703 logs -n 25: (1.788056316s)
helpers_test.go:253: TestRunningBinaryUpgrade logs:
-- stdout --
*
* ==> Audit <==
* |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
| start | -p | NoKubernetes-20220127031151-6703 | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:13:20 UTC | Thu, 27 Jan 2022 03:13:29 UTC |
| | NoKubernetes-20220127031151-6703 | | | | | |
| | --no-kubernetes --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| profile | list | minikube | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:13:30 UTC | Thu, 27 Jan 2022 03:13:30 UTC |
| profile | list --output=json | minikube | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:13:31 UTC | Thu, 27 Jan 2022 03:13:31 UTC |
| stop | -p | NoKubernetes-20220127031151-6703 | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:13:31 UTC | Thu, 27 Jan 2022 03:13:33 UTC |
| | NoKubernetes-20220127031151-6703 | | | | | |
| start | -p | NoKubernetes-20220127031151-6703 | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:13:33 UTC | Thu, 27 Jan 2022 03:13:38 UTC |
| | NoKubernetes-20220127031151-6703 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | NoKubernetes-20220127031151-6703 | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:13:39 UTC | Thu, 27 Jan 2022 03:13:42 UTC |
| | NoKubernetes-20220127031151-6703 | | | | | |
| start | -p | kubernetes-upgrade-20220127031320-6703 | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:13:20 UTC | Thu, 27 Jan 2022 03:14:26 UTC |
| | kubernetes-upgrade-20220127031320-6703 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| | --alsologtostderr -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| stop | -p | kubernetes-upgrade-20220127031320-6703 | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:14:26 UTC | Thu, 27 Jan 2022 03:14:52 UTC |
| | kubernetes-upgrade-20220127031320-6703 | | | | | |
| start | -p | missing-upgrade-20220127031307-6703 | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:14:26 UTC | Thu, 27 Jan 2022 03:15:33 UTC |
| | missing-upgrade-20220127031307-6703 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p | stopped-upgrade-20220127031342-6703 | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:14:22 UTC | Thu, 27 Jan 2022 03:15:37 UTC |
| | stopped-upgrade-20220127031342-6703 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| logs | -p | stopped-upgrade-20220127031342-6703 | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:15:37 UTC | Thu, 27 Jan 2022 03:15:38 UTC |
| | stopped-upgrade-20220127031342-6703 | | | | | |
| delete | -p | missing-upgrade-20220127031307-6703 | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:15:33 UTC | Thu, 27 Jan 2022 03:15:38 UTC |
| | missing-upgrade-20220127031307-6703 | | | | | |
| delete | -p | stopped-upgrade-20220127031342-6703 | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:15:38 UTC | Thu, 27 Jan 2022 03:15:41 UTC |
| | stopped-upgrade-20220127031342-6703 | | | | | |
| start | -p | kubernetes-upgrade-20220127031320-6703 | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:14:52 UTC | Thu, 27 Jan 2022 03:16:01 UTC |
| | kubernetes-upgrade-20220127031320-6703 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.23.3-rc.0 | | | | | |
| | --alsologtostderr -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p | cert-expiration-20220127031151-6703 | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:16:02 UTC | Thu, 27 Jan 2022 03:16:18 UTC |
| | cert-expiration-20220127031151-6703 | | | | | |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | cert-expiration-20220127031151-6703 | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:16:19 UTC | Thu, 27 Jan 2022 03:16:22 UTC |
| | cert-expiration-20220127031151-6703 | | | | | |
| delete | -p kubenet-20220127031622-6703 | kubenet-20220127031622-6703 | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:16:22 UTC | Thu, 27 Jan 2022 03:16:23 UTC |
| delete | -p flannel-20220127031623-6703 | flannel-20220127031623-6703 | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:16:23 UTC | Thu, 27 Jan 2022 03:16:23 UTC |
| delete | -p false-20220127031623-6703 | false-20220127031623-6703 | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:16:24 UTC | Thu, 27 Jan 2022 03:16:24 UTC |
| start | -p pause-20220127031541-6703 | pause-20220127031541-6703 | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:15:41 UTC | Thu, 27 Jan 2022 03:16:45 UTC |
| | --memory=2048 | | | | | |
| | --install-addons=false | | | | | |
| | --wait=all --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p | kubernetes-upgrade-20220127031320-6703 | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:16:02 UTC | Thu, 27 Jan 2022 03:16:47 UTC |
| | kubernetes-upgrade-20220127031320-6703 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.23.3-rc.0 | | | | | |
| | --alsologtostderr -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | kubernetes-upgrade-20220127031320-6703 | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:16:48 UTC | Thu, 27 Jan 2022 03:16:51 UTC |
| | kubernetes-upgrade-20220127031320-6703 | | | | | |
| start | -p pause-20220127031541-6703 | pause-20220127031541-6703 | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:16:45 UTC | Thu, 27 Jan 2022 03:17:02 UTC |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| pause | -p pause-20220127031541-6703 | pause-20220127031541-6703 | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:17:02 UTC | Thu, 27 Jan 2022 03:17:03 UTC |
| | --alsologtostderr -v=5 | | | | | |
| unpause | -p pause-20220127031541-6703 | pause-20220127031541-6703 | jenkins | v1.25.1 | Thu, 27 Jan 2022 03:17:04 UTC | Thu, 27 Jan 2022 03:17:04 UTC |
| | --alsologtostderr -v=5 | | | | | |
|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
*
* ==> Last Start <==
* Log file created at: 2022/01/27 03:16:55
Running on machine: ubuntu-20-agent-7
Binary: Built with gc go1.17.6 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0127 03:16:55.321171 165385 out.go:297] Setting OutFile to fd 1 ...
I0127 03:16:55.321267 165385 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0127 03:16:55.321270 165385 out.go:310] Setting ErrFile to fd 2...
I0127 03:16:55.321274 165385 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0127 03:16:55.321417 165385 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/bin
I0127 03:16:55.321825 165385 out.go:304] Setting JSON to false
I0127 03:16:55.323662 165385 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3570,"bootTime":1643249846,"procs":600,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1028-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0127 03:16:55.323747 165385 start.go:122] virtualization: kvm guest
I0127 03:16:55.457507 165385 out.go:176] * [cert-options-20220127031655-6703] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
I0127 03:16:55.457731 165385 notify.go:174] Checking for updates...
I0127 03:16:55.557841 165385 out.go:176] - MINIKUBE_LOCATION=13251
I0127 03:16:55.658093 165385 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0127 03:16:55.705301 165385 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
I0127 03:16:52.346824 162215 containerd.go:555] Took 3.018789 seconds to copy over tarball
I0127 03:16:52.346914 162215 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0127 03:16:55.797337 165385 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube
I0127 03:16:55.928582 165385 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64
I0127 03:16:55.929331 165385 config.go:176] Loaded profile config "force-systemd-flag-20220127031624-6703": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2
I0127 03:16:55.929474 165385 config.go:176] Loaded profile config "pause-20220127031541-6703": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2
I0127 03:16:55.929604 165385 config.go:176] Loaded profile config "running-upgrade-20220127031538-6703": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0127 03:16:55.929654 165385 driver.go:344] Setting default libvirt URI to qemu:///system
I0127 03:16:55.978302 165385 docker.go:132] docker version: linux-20.10.12
I0127 03:16:55.978418 165385 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0127 03:16:56.080312 165385 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-01-27 03:16:56.009417453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1028-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668890624 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
I0127 03:16:56.080412 165385 docker.go:237] overlay module found
I0127 03:16:56.231287 165385 out.go:176] * Using the docker driver based on user configuration
I0127 03:16:56.231324 165385 start.go:281] selected driver: docker
I0127 03:16:56.231331 165385 start.go:798] validating driver "docker" against <nil>
I0127 03:16:56.231355 165385 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
W0127 03:16:56.231439 165385 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0127 03:16:56.231462 165385 out.go:241] ! Your cgroup does not allow setting memory.
I0127 03:16:56.330803 165385 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
I0127 03:16:56.331784 165385 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0127 03:16:56.429478 165385 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-01-27 03:16:56.367138586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1028-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668890624 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
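[Editor's note] The two `docker system info --format "{{json .}}"` dumps above are how minikube inspects the host daemon before validating the driver. A minimal Go sketch of the same query follows, decoding only fields this run acts on (MemoryLimit feeds the cgroup warning, CgroupDriver later lands in the kubelet config); the struct is a trimmed illustration, not minikube's internal info.go type.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo is a trimmed view of `docker system info --format "{{json .}}"`;
// field names match the daemon's JSON output shown in the log above.
type dockerInfo struct {
	MemoryLimit  bool
	SwapLimit    bool
	CgroupDriver string
	NCPU         int
	MemTotal     int64
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("cgroup driver=%s memory-limit=%v cpus=%d\n", info.CgroupDriver, info.MemoryLimit, info.NCPU)
}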
I0127 03:16:56.429661 165385 start_flags.go:288] no existing cluster config was found, will generate one from the flags
I0127 03:16:56.429898 165385 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
I0127 03:16:56.429926 165385 start_flags.go:813] Wait components to verify : map[apiserver:true system_pods:true]
I0127 03:16:56.429948 165385 cni.go:93] Creating CNI manager for ""
I0127 03:16:56.429963 165385 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
I0127 03:16:56.429977 165385 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I0127 03:16:56.429984 165385 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I0127 03:16:56.429991 165385 start_flags.go:297] Found "CNI" CNI - setting NetworkPlugin=cni
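[Editor's note] The cni.go lines above pick kindnet purely from the driver/runtime pair and then pin kubelet's cni-conf-dir. A hedged Go paraphrase of that decision, assuming only what the two log lines state; minikube's real cni.go weighs more cases than this.

package main

import "fmt"

// chooseCNI paraphrases the rule logged above: the docker driver with a
// non-docker runtime (containerd here) gets kindnet, and kubelet is
// pointed at /etc/cni/net.mk via extra-config.
func chooseCNI(driver, runtime string) (cni, extraConfig string) {
	if driver == "docker" && runtime != "docker" {
		return "kindnet", "kubelet.cni-conf-dir=/etc/cni/net.mk"
	}
	return "", ""
}

func main() {
	fmt.Println(chooseCNI("docker", "containerd"))
}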
I0127 03:16:56.430002 165385 start_flags.go:302] config:
{Name:cert-options-20220127031655-6703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:cert-options-20220127031655-6703 Namespace:default APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8555 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
I0127 03:16:56.499919 165385 out.go:176] * Starting control plane node cert-options-20220127031655-6703 in cluster cert-options-20220127031655-6703
I0127 03:16:56.499982 165385 cache.go:120] Beginning downloading kic base image for docker with containerd
I0127 03:16:56.560884 165385 out.go:176] * Pulling base image ...
I0127 03:16:56.560931 165385 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime containerd
I0127 03:16:56.560982 165385 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-containerd-overlay2-amd64.tar.lz4
I0127 03:16:56.560990 165385 cache.go:57] Caching tarball of preloaded images
I0127 03:16:56.561024 165385 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
I0127 03:16:56.561247 165385 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0127 03:16:56.561259 165385 cache.go:60] Finished verifying existence of preloaded tar for v1.23.2 on containerd
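[Editor's note] The preload.go lines above amount to an existence check on a versioned tarball in the cache directory; on a hit the download is skipped. A sketch of that check, with the path layout copied from the log; the helper name is made up for illustration.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath mirrors the cache layout visible in the log:
// <home>/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-<k8s>-<runtime>-overlay2-amd64.tar.lz4
func preloadPath(home, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v17-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(home, ".minikube", "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.23.2", "containerd")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload, skipping download:", p)
	}
}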
I0127 03:16:56.561422 165385 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/cert-options-20220127031655-6703/config.json ...
I0127 03:16:56.561445 165385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/cert-options-20220127031655-6703/config.json: {Name:mk46d752be84fefc029f06754d1b0613d9d4a329 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 03:16:56.597158 165385 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
I0127 03:16:56.597178 165385 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
I0127 03:16:56.597186 165385 cache.go:208] Successfully downloaded all kic artifacts
I0127 03:16:56.597215 165385 start.go:313] acquiring machines lock for cert-options-20220127031655-6703: {Name:mk33ad0ba81ca90eb57c18f82e5e773f16dd5558 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 03:16:56.597328 165385 start.go:317] acquired machines lock for "cert-options-20220127031655-6703" in 100.268µs
I0127 03:16:56.597348 165385 start.go:89] Provisioning new machine with config: &{Name:cert-options-20220127031655-6703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:cert-options-20220127031655-6703 Namespace:default APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8555 NodeName:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.23.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8555 KubernetesVersion:v1.23.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0127 03:16:56.597408 165385 start.go:126] createHost starting for "" (driver="docker")
I0127 03:16:59.143205 163929 ssh_runner.go:195] Run: sudo crictl version
I0127 03:16:59.168000 163929 start.go:471] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.4.12
RuntimeApiVersion: v1alpha2
I0127 03:16:59.168055 163929 ssh_runner.go:195] Run: containerd --version
I0127 03:16:59.186157 163929 ssh_runner.go:195] Run: containerd --version
I0127 03:16:59.302578 163929 out.go:176] * Preparing Kubernetes v1.23.2 on containerd 1.4.12 ...
I0127 03:16:59.302681 163929 cli_runner.go:133] Run: docker network inspect pause-20220127031541-6703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0127 03:16:59.348724 163929 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
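[Editor's note] The network inspect call above uses a Go template to emit one compact JSON object per network. A sketch of a struct matching the template's field names follows; note the template leaves a trailing comma inside ContainerIPs, so strict unmarshalling needs the payload cleaned up first. The struct is illustrative, not a minikube type.

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// networkInspect mirrors the fields the --format template above produces.
type networkInspect struct {
	Name         string   `json:"Name"`
	Driver       string   `json:"Driver"`
	Subnet       string   `json:"Subnet"`
	Gateway      string   `json:"Gateway"`
	MTU          int      `json:"MTU"`
	ContainerIPs []string `json:"ContainerIPs"`
}

func main() {
	// Example payload shaped like the template output; the trailing comma
	// inside ContainerIPs is what the {{range}} loop really emits.
	raw := `{"Name":"minikube","Driver":"bridge","Subnet":"192.168.67.0/24","Gateway":"192.168.67.1","MTU":1500,"ContainerIPs":["192.168.67.2/24",]}`
	raw = strings.ReplaceAll(raw, ",]", "]") // tolerate the trailing comma
	var n networkInspect
	if err := json.Unmarshal([]byte(raw), &n); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", n)
}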
I0127 03:16:59.399212 163929 out.go:176] - kubelet.housekeeping-interval=5m
I0127 03:16:59.435211 163929 out.go:176] - kubelet.cni-conf-dir=/etc/cni/net.mk
I0127 03:16:59.435322 163929 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime containerd
I0127 03:16:59.435391 163929 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 03:16:59.462310 163929 containerd.go:612] all images are preloaded for containerd runtime.
I0127 03:16:59.462335 163929 containerd.go:526] Images already preloaded, skipping extraction
I0127 03:16:59.462377 163929 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 03:16:59.486838 163929 containerd.go:612] all images are preloaded for containerd runtime.
I0127 03:16:59.486862 163929 cache_images.go:84] Images are preloaded, skipping loading
I0127 03:16:59.486913 163929 ssh_runner.go:195] Run: sudo crictl info
I0127 03:16:59.513759 163929 cni.go:93] Creating CNI manager for ""
I0127 03:16:59.513785 163929 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
I0127 03:16:59.513813 163929 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0127 03:16:59.513830 163929 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20220127031541-6703 NodeName:pause-20220127031541-6703 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0127 03:16:59.514018 163929 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.67.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "pause-20220127031541-6703"
kubeletExtraArgs:
node-ip: 192.168.67.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.23.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
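[Editor's note] The jump from the kubeadm options struct (kubeadm.go:158) to the rendered YAML above (kubeadm.go:162) is a template fill. A minimal text/template illustration covering two of those fields follows; a toy template, not minikube's real one.

package main

import (
	"os"
	"text/template"
)

func main() {
	// Render two fields from the options struct the same way the full
	// kubeadm config above is produced.
	t := template.Must(template.New("kubeadm").Parse(
		"localAPIEndpoint:\n  advertiseAddress: {{.AdvertiseAddress}}\n  bindPort: {{.APIServerPort}}\n"))
	opts := struct {
		AdvertiseAddress string
		APIServerPort    int
	}{"192.168.67.2", 8443}
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}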
I0127 03:16:59.514122 163929 kubeadm.go:791] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=pause-20220127031541-6703 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.23.2 ClusterName:pause-20220127031541-6703 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0127 03:16:59.514194 163929 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.2
I0127 03:16:59.523550 163929 binaries.go:44] Found k8s binaries, skipping transfer
I0127 03:16:59.523636 163929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0127 03:16:59.532828 163929 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (597 bytes)
I0127 03:16:59.548791 163929 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0127 03:16:59.561414 163929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
I0127 03:16:59.573976 163929 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I0127 03:16:59.576816 163929 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/pause-20220127031541-6703 for IP: 192.168.67.2
I0127 03:16:59.576904 163929 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.key
I0127 03:16:59.576938 163929 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.key
I0127 03:16:59.577011 163929 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/pause-20220127031541-6703/client.key
I0127 03:16:59.577078 163929 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/pause-20220127031541-6703/apiserver.key.c7fa3a9e
I0127 03:16:59.577134 163929 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/pause-20220127031541-6703/proxy-client.key
I0127 03:16:59.577241 163929 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/6703.pem (1338 bytes)
W0127 03:16:59.577277 163929 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/6703_empty.pem, impossibly tiny 0 bytes
I0127 03:16:59.577294 163929 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca-key.pem (1675 bytes)
I0127 03:16:59.577333 163929 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem (1078 bytes)
I0127 03:16:59.577364 163929 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/cert.pem (1123 bytes)
I0127 03:16:59.577403 163929 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/key.pem (1675 bytes)
I0127 03:16:59.577453 163929 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/67032.pem (1708 bytes)
I0127 03:16:59.578391 163929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/pause-20220127031541-6703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0127 03:16:59.594769 163929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/pause-20220127031541-6703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0127 03:16:59.761796 163929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/pause-20220127031541-6703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0127 03:16:59.782293 163929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/pause-20220127031541-6703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0127 03:16:59.800687 163929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0127 03:16:59.826545 163929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0127 03:16:59.851809 163929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0127 03:16:59.870503 163929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0127 03:16:59.887991 163929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/67032.pem --> /usr/share/ca-certificates/67032.pem (1708 bytes)
I0127 03:16:59.905075 163929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0127 03:16:59.927010 163929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/6703.pem --> /usr/share/ca-certificates/6703.pem (1338 bytes)
I0127 03:16:59.949845 163929 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0127 03:16:59.964096 163929 ssh_runner.go:195] Run: openssl version
I0127 03:16:59.968788 163929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67032.pem && ln -fs /usr/share/ca-certificates/67032.pem /etc/ssl/certs/67032.pem"
I0127 03:16:59.976180 163929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67032.pem
I0127 03:16:59.979511 163929 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 27 02:47 /usr/share/ca-certificates/67032.pem
I0127 03:16:59.979561 163929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67032.pem
I0127 03:16:59.984198 163929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67032.pem /etc/ssl/certs/3ec20f2e.0"
I0127 03:16:59.990730 163929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0127 03:16:59.997606 163929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0127 03:17:00.000453 163929 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 27 02:42 /usr/share/ca-certificates/minikubeCA.pem
I0127 03:17:00.000493 163929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0127 03:17:00.005810 163929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0127 03:17:00.014660 163929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6703.pem && ln -fs /usr/share/ca-certificates/6703.pem /etc/ssl/certs/6703.pem"
I0127 03:17:00.024955 163929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6703.pem
I0127 03:17:00.029138 163929 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 27 02:47 /usr/share/ca-certificates/6703.pem
I0127 03:17:00.029184 163929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6703.pem
I0127 03:17:00.036022 163929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6703.pem /etc/ssl/certs/51391683.0"
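[Editor's note] The openssl/ln pairs above install each CA under its OpenSSL subject hash: `openssl x509 -hash -noout` prints the hash, and /etc/ssl/certs/<hash>.0 is symlinked at the PEM so the library can find it by hash. A sketch of the same two steps from Go; the helper is illustrative, not minikube's certs.go.

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA reproduces the hash-and-symlink dance from the log: compute
// the subject hash, then link /etc/ssl/certs/<hash>.0 to the certificate.
func installCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	return exec.Command("sudo", "ln", "-fs", pem, link).Run()
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("install failed:", err)
	}
}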
I0127 03:17:00.045298 163929 kubeadm.go:388] StartCluster: {Name:pause-20220127031541-6703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:pause-20220127031541-6703 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
I0127 03:17:00.045378 163929 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0127 03:17:00.045471 163929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0127 03:17:00.074162 163929 cri.go:87] found id: "62b37530a83efcdbc97267529625fe4196f4fd8226687e4695f9b0299eea2846"
I0127 03:17:00.074191 163929 cri.go:87] found id: "886e22ea0e68c7cc285e2aa75fe090255638e01b7e8d164c4aae08f322528822"
I0127 03:17:00.074197 163929 cri.go:87] found id: "1a2313a515d37a8bf98dff383afc9ca3e24f14cc675df33d8bdff8ddc9db275f"
I0127 03:17:00.074201 163929 cri.go:87] found id: "a009159daa09e5f390fb09e4e8f88530528b0c8630aa629642bac57e67ebd9c6"
I0127 03:17:00.074207 163929 cri.go:87] found id: "0a04e5a0eca444f6a23750b80b5b84302571637fe91ecd03a04b5a511e35c8f7"
I0127 03:17:00.074214 163929 cri.go:87] found id: "48c3e32b20f6e9d49fb080085cee5a973df6945da3263ea7851445fd8ac6060f"
I0127 03:17:00.074220 163929 cri.go:87] found id: "eb36e8411c4da7b98212be67c7eb03b5d2015c1495376038fd0a1b4b2bbb389f"
I0127 03:17:00.074232 163929 cri.go:87] found id: ""
I0127 03:17:00.074279 163929 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I0127 03:17:00.109187 163929 cri.go:114] JSON = [{"ociVersion":"1.0.2-dev","id":"0a04e5a0eca444f6a23750b80b5b84302571637fe91ecd03a04b5a511e35c8f7","pid":1149,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a04e5a0eca444f6a23750b80b5b84302571637fe91ecd03a04b5a511e35c8f7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a04e5a0eca444f6a23750b80b5b84302571637fe91ecd03a04b5a511e35c8f7/rootfs","created":"2022-01-27T03:16:20.43724908Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"6b73cef3f45943fa1d85f5d44fe24cee72c38afa4faf17a7f242ff7fdb367643"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1176ff589fabea8d6dd7fc71637bc92847a742ea004dce5caa8b71d13ef5f465","pid":2042,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1176ff589fabea8d6dd7fc71637bc92847a742ea004dce5caa8b71d13ef5f465","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1176ff589fabea8d6dd7fc71637bc92847a742ea004dce5caa8b71d13ef5f465/rootfs","created":"2022-01-27T03:16:43.277907757Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1176ff589fabea8d6dd7fc71637bc92847a742ea004dce5caa8b71d13ef5f465","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-64897985d-p2l5j_89d03314-65b0-43ef-85a5-898223c9a84b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1a2313a515d37a8bf98dff383afc9ca3e24f14cc675df33d8bdff8ddc9db275f","pid":1783,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a2313a515d37a8bf98dff383afc9ca3e24f14cc675df33d8bdff8ddc9db275f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a2313a515d37a8bf98dff383afc9ca3e24f14cc675df33d8bdff8ddc9db275f/rootfs","created":"2022-01-27T03:16:41.087444511Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a2679d06f09b6aea2e735d247ca7aed40a6ef702e4ddde33c1370a730dae5d67"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2fdea377e1da5d385ae80e959ce719eed69072342752f692d5210ddf1bcfb777","pid":1745,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2fdea377e1da5d385ae80e959ce719eed69072342752f692d5210ddf1bcfb777","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2fdea377e1da5d385ae80e959ce719eed69072342752f692d5210ddf1bcfb777/rootfs","created":"2022-01-27T03:16:41.015841481Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2fdea377e1da5d385ae80e959ce719eed69072342752f692d5210ddf1bcfb777","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-pkggr_767e367d-723c-45b9-bfbb-0cac37e69288"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"48c3e32b20f6e9d49fb080085cee5a973df6945da3263ea7851445fd8ac6060f","pid":1186,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/48c3e32b20f6e9d49fb080085cee5a973df6945da3263ea7851445fd8ac6060f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/48c3e32b20f6e9d49fb080085cee5a973df6945da3263ea7851445fd8ac6060f/rootfs","created":"2022-01-27T03:16:20.507418647Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e2d4d42107b82a0af627ea3a0ce53c71738ac7e8ed7d7d1673089a14919d89f0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"62b37530a83efcdbc97267529625fe4196f4fd8226687e4695f9b0299eea2846","pid":2079,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/62b37530a83efcdbc97267529625fe4196f4fd8226687e4695f9b0299eea2846","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/62b37530a83efcdbc97267529625fe4196f4fd8226687e4695f9b0299eea2846/rootfs","created":"2022-01-27T03:16:43.410724588Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1176ff589fabea8d6dd7fc71637bc92847a742ea004dce5caa8b71d13ef5f465"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6b73cef3f45943fa1d85f5d44fe24cee72c38afa4faf17a7f242ff7fdb367643","pid":980,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6b73cef3f45943fa1d85f5d44fe24cee72c38afa4faf17a7f242ff7fdb367643","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6b73cef3f45943fa1d85f5d44fe24cee72c38afa4faf17a7f242ff7fdb367643/rootfs","created":"2022-01-27T03:16:20.218224243Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"6b73cef3f45943fa1d85f5d44fe24cee72c38afa4faf17a7f242ff7fdb367643","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20220127031541-6703_7d17f37224d53544ee825b6ba1742b7b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"886e22ea0e68c7cc285e2aa75fe090255638e01b7e8d164c4aae08f322528822","pid":1933,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/886e22ea0e68c7cc285e2aa75fe090255638e01b7e8d164c4aae08f322528822","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/886e22ea0e68c7cc285e2aa75fe090255638e01b7e8d164c4aae08f322528822/rootfs","created":"2022-01-27T03:16:41.51152493Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2fdea377e1da5d385ae80e959ce719eed69072342752f692d5210ddf1bcfb777"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8edeaaa7767d3fe8bc23be5c463467acae9a1994931a4ee7a2b729451026e4af","pid":1033,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8edeaaa7767d3fe8bc23be5c463467acae9a1994931a4ee7a2b729451026e4af","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8edeaaa7767d3fe8bc23be5c463467acae9a1994931a4ee7a2b729451026e4af/rootfs","created":"2022-01-27T03:16:20.216006155Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"8edeaaa7767d3fe8bc23be5c463467acae9a1994931a4ee7a2b729451026e4af","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20220127031541-6703_6b94ebcc2e7766018aaf230d0e52b9e7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"92ca6214e1c128aa059fb6af146a333389df9ae21b2adb753aa3d2c42c5f5613","pid":1051,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/92ca6214e1c128aa059fb6af146a333389df9ae21b2adb753aa3d2c42c5f5613","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/92ca6214e1c128aa059fb6af146a333389df9ae21b2adb753aa3d2c42c5f5613/rootfs","created":"2022-01-27T03:16:20.239650389Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"92ca6214e1c128aa059fb6af146a333389df9ae21b2adb753aa3d2c42c5f5613","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20220127031541-6703_cf703987ccbca29b3f499b9bc24e460b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a009159daa09e5f390fb09e4e8f88530528b0c8630aa629642bac57e67ebd9c6","pid":1178,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a009159daa09e5f390fb09e4e8f88530528b0c8630aa629642bac57e67ebd9c6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a009159daa09e5f390fb09e4e8f88530528b0c8630aa629642bac57e67ebd9c6/rootfs","created":"2022-01-27T03:16:20.450881947Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"92ca6214e1c128aa059fb6af146a333389df9ae21b2adb753aa3d2c42c5f5613"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a2679d06f09b6aea2e735d247ca7aed40a6ef702e4ddde33c1370a730dae5d67","pid":1739,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2679d06f09b6aea2e735d247ca7aed40a6ef702e4ddde33c1370a730dae5d67","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2679d06f09b6aea2e735d247ca7aed40a6ef702e4ddde33c1370a730dae5d67/rootfs","created":"2022-01-27T03:16:40.979768837Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a2679d06f09b6aea2e735d247ca7aed40a6ef702e4ddde33c1370a730dae5d67","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-2bzj7_ceb4d44f-4872-4268-89b4-adb4c55e0102"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e2d4d42107b82a0af627ea3a0ce53c71738ac7e8ed7d7d1673089a14919d89f0","pid":944,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2d4d42107b82a0af627ea3a0ce53c71738ac7e8ed7d7d1673089a14919d89f0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2d4d42107b82a0af627ea3a0ce53c71738ac7e8ed7d7d1673089a14919d89f0/rootfs","created":"2022-01-27T03:16:20.215772663Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e2d4d42107b82a0af627ea3a0ce53c71738ac7e8ed7d7d1673089a14919d89f0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20220127031541-6703_686f7b6ed893161c15f363ac43c1128c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"eb36e8411c4da7b98212be67c7eb03b5d2015c1495376038fd0a1b4b2bbb389f","pid":1133,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/eb36e8411c4da7b98212be67c7eb03b5d2015c1495376038fd0a1b4b2bbb389f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/eb36e8411c4da7b98212be67c7eb03b5d2015c1495376038fd0a1b4b2bbb389f/rootfs","created":"2022-01-27T03:16:20.435380134Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"8edeaaa7767d3fe8bc23be5c463467acae9a1994931a4ee7a2b729451026e4af"},"owner":"root"}]
I0127 03:17:00.109414 163929 cri.go:124] list returned 14 containers
I0127 03:17:00.109436 163929 cri.go:127] container: {ID:0a04e5a0eca444f6a23750b80b5b84302571637fe91ecd03a04b5a511e35c8f7 Status:running}
I0127 03:17:00.109453 163929 cri.go:133] skipping {0a04e5a0eca444f6a23750b80b5b84302571637fe91ecd03a04b5a511e35c8f7 running}: state = "running", want "paused"
I0127 03:17:00.109469 163929 cri.go:127] container: {ID:1176ff589fabea8d6dd7fc71637bc92847a742ea004dce5caa8b71d13ef5f465 Status:running}
I0127 03:17:00.109475 163929 cri.go:129] skipping 1176ff589fabea8d6dd7fc71637bc92847a742ea004dce5caa8b71d13ef5f465 - not in ps
I0127 03:17:00.109484 163929 cri.go:127] container: {ID:1a2313a515d37a8bf98dff383afc9ca3e24f14cc675df33d8bdff8ddc9db275f Status:running}
I0127 03:17:00.109489 163929 cri.go:133] skipping {1a2313a515d37a8bf98dff383afc9ca3e24f14cc675df33d8bdff8ddc9db275f running}: state = "running", want "paused"
I0127 03:17:00.109497 163929 cri.go:127] container: {ID:2fdea377e1da5d385ae80e959ce719eed69072342752f692d5210ddf1bcfb777 Status:running}
I0127 03:17:00.109503 163929 cri.go:129] skipping 2fdea377e1da5d385ae80e959ce719eed69072342752f692d5210ddf1bcfb777 - not in ps
I0127 03:17:00.109513 163929 cri.go:127] container: {ID:48c3e32b20f6e9d49fb080085cee5a973df6945da3263ea7851445fd8ac6060f Status:running}
I0127 03:17:00.109519 163929 cri.go:133] skipping {48c3e32b20f6e9d49fb080085cee5a973df6945da3263ea7851445fd8ac6060f running}: state = "running", want "paused"
I0127 03:17:00.109525 163929 cri.go:127] container: {ID:62b37530a83efcdbc97267529625fe4196f4fd8226687e4695f9b0299eea2846 Status:running}
I0127 03:17:00.109531 163929 cri.go:133] skipping {62b37530a83efcdbc97267529625fe4196f4fd8226687e4695f9b0299eea2846 running}: state = "running", want "paused"
I0127 03:17:00.109536 163929 cri.go:127] container: {ID:6b73cef3f45943fa1d85f5d44fe24cee72c38afa4faf17a7f242ff7fdb367643 Status:running}
I0127 03:17:00.109542 163929 cri.go:129] skipping 6b73cef3f45943fa1d85f5d44fe24cee72c38afa4faf17a7f242ff7fdb367643 - not in ps
I0127 03:17:00.109546 163929 cri.go:127] container: {ID:886e22ea0e68c7cc285e2aa75fe090255638e01b7e8d164c4aae08f322528822 Status:running}
I0127 03:17:00.109552 163929 cri.go:133] skipping {886e22ea0e68c7cc285e2aa75fe090255638e01b7e8d164c4aae08f322528822 running}: state = "running", want "paused"
I0127 03:17:00.109557 163929 cri.go:127] container: {ID:8edeaaa7767d3fe8bc23be5c463467acae9a1994931a4ee7a2b729451026e4af Status:running}
I0127 03:17:00.109563 163929 cri.go:129] skipping 8edeaaa7767d3fe8bc23be5c463467acae9a1994931a4ee7a2b729451026e4af - not in ps
I0127 03:17:00.109567 163929 cri.go:127] container: {ID:92ca6214e1c128aa059fb6af146a333389df9ae21b2adb753aa3d2c42c5f5613 Status:running}
I0127 03:17:00.109572 163929 cri.go:129] skipping 92ca6214e1c128aa059fb6af146a333389df9ae21b2adb753aa3d2c42c5f5613 - not in ps
I0127 03:17:00.109575 163929 cri.go:127] container: {ID:a009159daa09e5f390fb09e4e8f88530528b0c8630aa629642bac57e67ebd9c6 Status:running}
I0127 03:17:00.109579 163929 cri.go:133] skipping {a009159daa09e5f390fb09e4e8f88530528b0c8630aa629642bac57e67ebd9c6 running}: state = "running", want "paused"
I0127 03:17:00.109582 163929 cri.go:127] container: {ID:a2679d06f09b6aea2e735d247ca7aed40a6ef702e4ddde33c1370a730dae5d67 Status:running}
I0127 03:17:00.109587 163929 cri.go:129] skipping a2679d06f09b6aea2e735d247ca7aed40a6ef702e4ddde33c1370a730dae5d67 - not in ps
I0127 03:17:00.109592 163929 cri.go:127] container: {ID:e2d4d42107b82a0af627ea3a0ce53c71738ac7e8ed7d7d1673089a14919d89f0 Status:running}
I0127 03:17:00.109598 163929 cri.go:129] skipping e2d4d42107b82a0af627ea3a0ce53c71738ac7e8ed7d7d1673089a14919d89f0 - not in ps
I0127 03:17:00.109602 163929 cri.go:127] container: {ID:eb36e8411c4da7b98212be67c7eb03b5d2015c1495376038fd0a1b4b2bbb389f Status:running}
I0127 03:17:00.109610 163929 cri.go:133] skipping {eb36e8411c4da7b98212be67c7eb03b5d2015c1495376038fd0a1b4b2bbb389f running}: state = "running", want "paused"
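[Editor's note] The cri.go block above cross-references the `crictl ps` IDs against `runc list -f json` and keeps only containers whose runc state matches the wanted one ("paused" here, so every running container is skipped, and sandboxes are dropped as "not in ps"). A sketch of that filter; the struct covers just the JSON fields the decision uses.

package main

import (
	"encoding/json"
	"fmt"
)

// runcContainer holds the two fields of `runc list -f json` the filter
// needs (the log shows many more, e.g. bundle and annotations).
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// filterByState mirrors the cri.go decisions above: drop IDs crictl did
// not report ("not in ps") and drop state mismatches.
func filterByState(listJSON []byte, inPS map[string]bool, want string) ([]string, error) {
	var cs []runcContainer
	if err := json.Unmarshal(listJSON, &cs); err != nil {
		return nil, err
	}
	var ids []string
	for _, c := range cs {
		if !inPS[c.ID] || c.Status != want {
			continue
		}
		ids = append(ids, c.ID)
	}
	return ids, nil
}

func main() {
	ids, err := filterByState([]byte(`[{"id":"abc","status":"running"}]`), map[string]bool{"abc": true}, "paused")
	fmt.Println(ids, err) // [] <nil>: running != paused, as in the log
}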
I0127 03:17:00.109652 163929 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0127 03:17:00.118823 163929 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0127 03:17:00.126080 163929 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0127 03:17:00.127173 163929 kubeconfig.go:92] found "pause-20220127031541-6703" server: "https://192.168.67.2:8443"
I0127 03:17:00.128285 163929 kapi.go:59] client config for pause-20220127031541-6703: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/pause-20220127031541-6703/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/pause-20220127031541-6703/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x15da7e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0127 03:17:00.130207 163929 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0127 03:17:00.139543 163929 api_server.go:165] Checking apiserver status ...
I0127 03:17:00.139593 163929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 03:17:00.160949 163929 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1133/cgroup
I0127 03:17:00.169913 163929 api_server.go:181] apiserver freezer: "8:freezer:/docker/e76adc378ad0c84d6d40b961bf5e80f9896e30608935373326ce9025e8a4ab01/kubepods/burstable/pod6b94ebcc2e7766018aaf230d0e52b9e7/eb36e8411c4da7b98212be67c7eb03b5d2015c1495376038fd0a1b4b2bbb389f"
I0127 03:17:00.169978 163929 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e76adc378ad0c84d6d40b961bf5e80f9896e30608935373326ce9025e8a4ab01/kubepods/burstable/pod6b94ebcc2e7766018aaf230d0e52b9e7/eb36e8411c4da7b98212be67c7eb03b5d2015c1495376038fd0a1b4b2bbb389f/freezer.state
I0127 03:17:00.176622 163929 api_server.go:203] freezer state: "THAWED"
I0127 03:17:00.176654 163929 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0127 03:17:00.181550 163929 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
ok
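[Editor's note] The health check above is a plain HTTPS GET against /healthz that expects a 200 with body "ok". A sketch of such a probe; the real check authenticates with the cluster CA, which this version skips to stay short.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// healthz probes the apiserver the way the api_server.go lines above do.
// TLS verification is disabled here only for brevity; minikube verifies
// against the cluster CA instead.
func healthz(addr string) (int, error) {
	c := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := c.Get("https://" + addr + "/healthz")
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	return resp.StatusCode, nil
}

func main() {
	code, err := healthz("192.168.67.2:8443")
	fmt.Println(code, err)
}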
I0127 03:17:00.199322 163929 system_pods.go:86] 7 kube-system pods found
I0127 03:17:00.199353 163929 system_pods.go:89] "coredns-64897985d-p2l5j" [89d03314-65b0-43ef-85a5-898223c9a84b] Running
I0127 03:17:00.199362 163929 system_pods.go:89] "etcd-pause-20220127031541-6703" [44237bcb-32c0-47b7-959b-d600f9c50922] Running
I0127 03:17:00.199370 163929 system_pods.go:89] "kindnet-pkggr" [767e367d-723c-45b9-bfbb-0cac37e69288] Running
I0127 03:17:00.199377 163929 system_pods.go:89] "kube-apiserver-pause-20220127031541-6703" [38acbadd-7b65-4bf9-b495-0fa85acf147c] Running
I0127 03:17:00.199388 163929 system_pods.go:89] "kube-controller-manager-pause-20220127031541-6703" [08c12ca6-8b0e-4439-9a2d-e804a2950199] Running
I0127 03:17:00.199397 163929 system_pods.go:89] "kube-proxy-2bzj7" [ceb4d44f-4872-4268-89b4-adb4c55e0102] Running
I0127 03:17:00.199403 163929 system_pods.go:89] "kube-scheduler-pause-20220127031541-6703" [20d1efc5-aaf3-4c4a-9e73-6ddad3b56191] Running
I0127 03:17:00.201052 163929 api_server.go:140] control plane version: v1.23.2
I0127 03:17:00.201079 163929 kubeadm.go:618] The running cluster does not require reconfiguration: 192.168.67.2
I0127 03:17:00.201087 163929 kubeadm.go:390] StartCluster complete in 155.792656ms
I0127 03:17:00.201105 163929 settings.go:142] acquiring lock: {Name:mkfac99b88cf5519bc3b0da9d34ba6bc12584830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 03:17:00.201197 163929 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
I0127 03:17:00.201935 163929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig: {Name:mk52def711e0760588c8e7c9e046110fe006e484 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 03:17:00.202698 163929 kapi.go:59] client config for pause-20220127031541-6703: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/pause-20220127031541-6703/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/pause-20220127031541-6703/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x15da7e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0127 03:17:00.207898 163929 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20220127031541-6703" rescaled to 1
I0127 03:17:00.207957 163929 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0127 03:16:56.666432 165385 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
I0127 03:16:56.666735 165385 start.go:160] libmachine.API.Create for "cert-options-20220127031655-6703" (driver="docker")
I0127 03:16:56.666763 165385 client.go:168] LocalClient.Create starting
I0127 03:16:56.666846 165385 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem
I0127 03:16:56.666877 165385 main.go:130] libmachine: Decoding PEM data...
I0127 03:16:56.666889 165385 main.go:130] libmachine: Parsing certificate...
I0127 03:16:56.666950 165385 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/cert.pem
I0127 03:16:56.666965 165385 main.go:130] libmachine: Decoding PEM data...
I0127 03:16:56.666972 165385 main.go:130] libmachine: Parsing certificate...
I0127 03:16:56.667320 165385 cli_runner.go:133] Run: docker network inspect cert-options-20220127031655-6703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0127 03:16:56.698737 165385 cli_runner.go:180] docker network inspect cert-options-20220127031655-6703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0127 03:16:56.698793 165385 network_create.go:254] running [docker network inspect cert-options-20220127031655-6703] to gather additional debugging logs...
I0127 03:16:56.698809 165385 cli_runner.go:133] Run: docker network inspect cert-options-20220127031655-6703
W0127 03:16:56.737360 165385 cli_runner.go:180] docker network inspect cert-options-20220127031655-6703 returned with exit code 1
I0127 03:16:56.737384 165385 network_create.go:257] error running [docker network inspect cert-options-20220127031655-6703]: docker network inspect cert-options-20220127031655-6703: exit status 1
stdout:
[]
stderr:
Error: No such network: cert-options-20220127031655-6703
I0127 03:16:56.737406 165385 network_create.go:259] output of [docker network inspect cert-options-20220127031655-6703]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: cert-options-20220127031655-6703
** /stderr **
I0127 03:16:56.737460 165385 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0127 03:16:56.774995 165385 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000010df0] misses:0}
I0127 03:16:56.775031 165385 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0127 03:16:56.775047 165385 network_create.go:106] attempt to create docker network cert-options-20220127031655-6703 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0127 03:16:56.775085 165385 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220127031655-6703
I0127 03:16:56.957687 165385 network_create.go:90] docker network cert-options-20220127031655-6703 192.168.49.0/24 created
I0127 03:16:56.957713 165385 kic.go:106] calculated static IP "192.168.49.2" for the "cert-options-20220127031655-6703" container
I0127 03:16:56.957774 165385 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
I0127 03:16:56.992057 165385 cli_runner.go:133] Run: docker volume create cert-options-20220127031655-6703 --label name.minikube.sigs.k8s.io=cert-options-20220127031655-6703 --label created_by.minikube.sigs.k8s.io=true
I0127 03:16:57.032349 165385 oci.go:102] Successfully created a docker volume cert-options-20220127031655-6703
I0127 03:16:57.032419 165385 cli_runner.go:133] Run: docker run --rm --name cert-options-20220127031655-6703-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-20220127031655-6703 --entrypoint /usr/bin/test -v cert-options-20220127031655-6703:/var gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -d /var/lib
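[Editor's note] Interleaved with the pause-cluster work, the cert-options profile above creates its network and derives a static IP: the gateway takes .1 and the first machine gets .2 (kic.go's "calculated static IP 192.168.49.2"). A sketch of that arithmetic, assuming the /24 layout shown; illustrative, not minikube's kic.go.

package main

import (
	"fmt"
	"net"
)

// staticIP reproduces the .2 derivation from the log: parse the subnet,
// skip the gateway at .1, hand the first machine .2. Assumes a /24.
func staticIP(cidr string) (net.IP, error) {
	_, n, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	ip := make(net.IP, len(n.IP.To4()))
	copy(ip, n.IP.To4())
	ip[len(ip)-1] += 2
	return ip, nil
}

func main() {
	ip, _ := staticIP("192.168.49.0/24")
	fmt.Println(ip) // 192.168.49.2
}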
I0127 03:17:00.257241 163929 out.go:176] * Verifying Kubernetes components...
I0127 03:17:00.257333 163929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0127 03:17:00.208163 163929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0127 03:17:00.208193 163929 addons.go:415] enableAddons start: toEnable=map[], additional=[]
I0127 03:17:00.208339 163929 config.go:176] Loaded profile config "pause-20220127031541-6703": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2
I0127 03:17:00.257504 163929 addons.go:65] Setting storage-provisioner=true in profile "pause-20220127031541-6703"
I0127 03:17:00.257527 163929 addons.go:153] Setting addon storage-provisioner=true in "pause-20220127031541-6703"
W0127 03:17:00.257535 163929 addons.go:165] addon storage-provisioner should already be in state true
I0127 03:17:00.257540 163929 addons.go:65] Setting default-storageclass=true in profile "pause-20220127031541-6703"
I0127 03:17:00.257561 163929 host.go:66] Checking if "pause-20220127031541-6703" exists ...
I0127 03:17:00.257562 163929 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20220127031541-6703"
I0127 03:17:00.257902 163929 cli_runner.go:133] Run: docker container inspect pause-20220127031541-6703 --format={{.State.Status}}
I0127 03:17:00.258087 163929 cli_runner.go:133] Run: docker container inspect pause-20220127031541-6703 --format={{.State.Status}}
I0127 03:17:00.320738 163929 out.go:176] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0127 03:17:00.320899 163929 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0127 03:17:00.320912 163929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0127 03:17:00.320965 163929 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220127031541-6703
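The "scp memory -->" entries stream an in-memory asset over SSH straight to a path inside the node; no file on the host is involved. A rough sketch of the idea with golang.org/x/crypto/ssh, assuming an already-dialed *ssh.Client; piping through sudo tee is an assumption for the sketch, not minikube's exact transport:

package sketch

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// pushBytes writes data to remotePath on the node by piping it into
// "sudo tee" over an SSH session.
func pushBytes(client *ssh.Client, data []byte, remotePath string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	// tee duplicates stdin to the file; its stdout copy is discarded.
	return sess.Run(fmt.Sprintf("sudo tee %s > /dev/null", remotePath))
}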
I0127 03:17:00.327928 163929 kapi.go:59] client config for pause-20220127031541-6703: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/pause-20220127031541-6703/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/pause-20220127031541-6703/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x15da7e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0127 03:17:00.336634 163929 addons.go:153] Setting addon default-storageclass=true in "pause-20220127031541-6703"
W0127 03:17:00.336659 163929 addons.go:165] addon default-storageclass should already be in state true
I0127 03:17:00.336691 163929 host.go:66] Checking if "pause-20220127031541-6703" exists ...
I0127 03:17:00.337224 163929 cli_runner.go:133] Run: docker container inspect pause-20220127031541-6703 --format={{.State.Status}}
I0127 03:17:00.347472 163929 start.go:757] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0127 03:17:00.347539 163929 node_ready.go:35] waiting up to 6m0s for node "pause-20220127031541-6703" to be "Ready" ...
I0127 03:17:00.354716 163929 node_ready.go:49] node "pause-20220127031541-6703" has status "Ready":"True"
I0127 03:17:00.354738 163929 node_ready.go:38] duration metric: took 7.178279ms waiting for node "pause-20220127031541-6703" to be "Ready" ...
I0127 03:17:00.354748 163929 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 03:17:00.361251 163929 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-p2l5j" in "kube-system" namespace to be "Ready" ...
I0127 03:17:00.381991 163929 pod_ready.go:92] pod "coredns-64897985d-p2l5j" in "kube-system" namespace has status "Ready":"True"
I0127 03:17:00.382013 163929 pod_ready.go:81] duration metric: took 20.7292ms waiting for pod "coredns-64897985d-p2l5j" in "kube-system" namespace to be "Ready" ...
I0127 03:17:00.382026 163929 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20220127031541-6703" in "kube-system" namespace to be "Ready" ...
I0127 03:17:00.391564 163929 pod_ready.go:92] pod "etcd-pause-20220127031541-6703" in "kube-system" namespace has status "Ready":"True"
I0127 03:17:00.391584 163929 pod_ready.go:81] duration metric: took 9.55048ms waiting for pod "etcd-pause-20220127031541-6703" in "kube-system" namespace to be "Ready" ...
I0127 03:17:00.391602 163929 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20220127031541-6703" in "kube-system" namespace to be "Ready" ...
I0127 03:17:00.397028 163929 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
I0127 03:17:00.397048 163929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0127 03:17:00.397101 163929 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220127031541-6703
I0127 03:17:00.398243 163929 pod_ready.go:92] pod "kube-apiserver-pause-20220127031541-6703" in "kube-system" namespace has status "Ready":"True"
I0127 03:17:00.398269 163929 pod_ready.go:81] duration metric: took 6.658839ms waiting for pod "kube-apiserver-pause-20220127031541-6703" in "kube-system" namespace to be "Ready" ...
I0127 03:17:00.398281 163929 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20220127031541-6703" in "kube-system" namespace to be "Ready" ...
I0127 03:17:00.401911 163929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49338 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/pause-20220127031541-6703/id_rsa Username:docker}
I0127 03:17:00.402986 163929 pod_ready.go:92] pod "kube-controller-manager-pause-20220127031541-6703" in "kube-system" namespace has status "Ready":"True"
I0127 03:17:00.403013 163929 pod_ready.go:81] duration metric: took 4.721924ms waiting for pod "kube-controller-manager-pause-20220127031541-6703" in "kube-system" namespace to be "Ready" ...
I0127 03:17:00.403026 163929 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2bzj7" in "kube-system" namespace to be "Ready" ...
I0127 03:17:00.457799 163929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49338 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/machines/pause-20220127031541-6703/id_rsa Username:docker}
I0127 03:17:00.532049 163929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0127 03:17:00.623174 163929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0127 03:17:00.785143 163929 pod_ready.go:92] pod "kube-proxy-2bzj7" in "kube-system" namespace has status "Ready":"True"
I0127 03:17:00.785205 163929 pod_ready.go:81] duration metric: took 382.170785ms waiting for pod "kube-proxy-2bzj7" in "kube-system" namespace to be "Ready" ...
I0127 03:17:00.785224 163929 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20220127031541-6703" in "kube-system" namespace to be "Ready" ...
I0127 03:17:00.906772 163929 out.go:176] * Enabled addons: default-storageclass, storage-provisioner
I0127 03:17:00.906806 163929 addons.go:417] enableAddons completed in 698.622737ms
I0127 03:17:01.184657 163929 pod_ready.go:92] pod "kube-scheduler-pause-20220127031541-6703" in "kube-system" namespace has status "Ready":"True"
I0127 03:17:01.184683 163929 pod_ready.go:81] duration metric: took 399.447468ms waiting for pod "kube-scheduler-pause-20220127031541-6703" in "kube-system" namespace to be "Ready" ...
I0127 03:17:01.184691 163929 pod_ready.go:38] duration metric: took 829.929253ms for extra waiting for all system-critical pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 03:17:01.184708 163929 api_server.go:51] waiting for apiserver process to appear ...
I0127 03:17:01.184740 163929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 03:17:01.209882 163929 api_server.go:71] duration metric: took 1.001891531s to wait for apiserver process to appear ...
I0127 03:17:01.209974 163929 api_server.go:87] waiting for apiserver healthz status ...
I0127 03:17:01.210002 163929 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0127 03:17:01.220672 163929 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
ok
I0127 03:17:01.221639 163929 api_server.go:140] control plane version: v1.23.2
I0127 03:17:01.221661 163929 api_server.go:130] duration metric: took 11.668115ms to wait for apiserver health ...
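The healthz wait above is a plain HTTPS poll of the apiserver until it answers 200 "ok"; certificate verification has to be relaxed because the polling host may not trust the cluster CA. A minimal sketch; the 500ms retry cadence is an assumption:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns HTTP 200 or timeout elapses.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.67.2:8443/healthz", time.Minute))
}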
I0127 03:17:01.221670 163929 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 03:17:01.387956 163929 system_pods.go:59] 8 kube-system pods found
I0127 03:17:01.387993 163929 system_pods.go:61] "coredns-64897985d-p2l5j" [89d03314-65b0-43ef-85a5-898223c9a84b] Running
I0127 03:17:01.388002 163929 system_pods.go:61] "etcd-pause-20220127031541-6703" [44237bcb-32c0-47b7-959b-d600f9c50922] Running
I0127 03:17:01.388008 163929 system_pods.go:61] "kindnet-pkggr" [767e367d-723c-45b9-bfbb-0cac37e69288] Running
I0127 03:17:01.388014 163929 system_pods.go:61] "kube-apiserver-pause-20220127031541-6703" [38acbadd-7b65-4bf9-b495-0fa85acf147c] Running
I0127 03:17:01.388021 163929 system_pods.go:61] "kube-controller-manager-pause-20220127031541-6703" [08c12ca6-8b0e-4439-9a2d-e804a2950199] Running
I0127 03:17:01.388028 163929 system_pods.go:61] "kube-proxy-2bzj7" [ceb4d44f-4872-4268-89b4-adb4c55e0102] Running
I0127 03:17:01.388034 163929 system_pods.go:61] "kube-scheduler-pause-20220127031541-6703" [20d1efc5-aaf3-4c4a-9e73-6ddad3b56191] Running
I0127 03:17:01.388045 163929 system_pods.go:61] "storage-provisioner" [b5e19d29-d637-4733-bc02-57d96df8234e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0127 03:17:01.388061 163929 system_pods.go:74] duration metric: took 166.374613ms to wait for pod list to return data ...
I0127 03:17:01.388069 163929 default_sa.go:34] waiting for default service account to be created ...
I0127 03:17:01.585705 163929 default_sa.go:45] found service account: "default"
I0127 03:17:01.585732 163929 default_sa.go:55] duration metric: took 197.656465ms for default service account to be created ...
I0127 03:17:01.585741 163929 system_pods.go:116] waiting for k8s-apps to be running ...
I0127 03:17:01.787369 163929 system_pods.go:86] 8 kube-system pods found
I0127 03:17:01.787397 163929 system_pods.go:89] "coredns-64897985d-p2l5j" [89d03314-65b0-43ef-85a5-898223c9a84b] Running
I0127 03:17:01.787403 163929 system_pods.go:89] "etcd-pause-20220127031541-6703" [44237bcb-32c0-47b7-959b-d600f9c50922] Running
I0127 03:17:01.787407 163929 system_pods.go:89] "kindnet-pkggr" [767e367d-723c-45b9-bfbb-0cac37e69288] Running
I0127 03:17:01.787411 163929 system_pods.go:89] "kube-apiserver-pause-20220127031541-6703" [38acbadd-7b65-4bf9-b495-0fa85acf147c] Running
I0127 03:17:01.787416 163929 system_pods.go:89] "kube-controller-manager-pause-20220127031541-6703" [08c12ca6-8b0e-4439-9a2d-e804a2950199] Running
I0127 03:17:01.787421 163929 system_pods.go:89] "kube-proxy-2bzj7" [ceb4d44f-4872-4268-89b4-adb4c55e0102] Running
I0127 03:17:01.787428 163929 system_pods.go:89] "kube-scheduler-pause-20220127031541-6703" [20d1efc5-aaf3-4c4a-9e73-6ddad3b56191] Running
I0127 03:17:01.787439 163929 system_pods.go:89] "storage-provisioner" [b5e19d29-d637-4733-bc02-57d96df8234e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0127 03:17:01.787461 163929 system_pods.go:126] duration metric: took 201.716625ms to wait for k8s-apps to be running ...
I0127 03:17:01.787468 163929 system_svc.go:44] waiting for kubelet service to be running ....
I0127 03:17:01.787506 163929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0127 03:17:01.798574 163929 system_svc.go:56] duration metric: took 11.096908ms WaitForService to wait for kubelet.
I0127 03:17:01.798602 163929 kubeadm.go:542] duration metric: took 1.590617981s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0127 03:17:01.798626 163929 node_conditions.go:102] verifying NodePressure condition ...
I0127 03:17:01.986365 163929 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
I0127 03:17:01.986391 163929 node_conditions.go:123] node cpu capacity is 8
I0127 03:17:01.986403 163929 node_conditions.go:105] duration metric: took 187.772865ms to run NodePressure ...
I0127 03:17:01.986412 163929 start.go:213] waiting for startup goroutines ...
I0127 03:17:02.148770 163929 start.go:496] kubectl: 1.23.3, cluster: 1.23.2 (minor skew: 0)
I0127 03:17:02.184464 163929 out.go:176] * Done! kubectl is now configured to use "pause-20220127031541-6703" cluster and "default" namespace by default
I0127 03:17:00.950715 165385 cli_runner.go:186] Completed: docker run --rm --name cert-options-20220127031655-6703-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-20220127031655-6703 --entrypoint /usr/bin/test -v cert-options-20220127031655-6703:/var gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -d /var/lib: (3.918253388s)
I0127 03:17:00.950737 165385 oci.go:106] Successfully prepared a docker volume cert-options-20220127031655-6703
I0127 03:17:00.950779 165385 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime containerd
I0127 03:17:00.950798 165385 kic.go:179] Starting extracting preloaded images to volume ...
I0127 03:17:00.950849 165385 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cert-options-20220127031655-6703:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir
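The line above extracts the preload by running a throwaway kicbase container whose entrypoint is tar: the lz4 tarball is mounted read-only at /preloaded.tar and the named volume at /extractDir. Roughly the same invocation via os/exec; the tarball path and volume name below are placeholders:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the "docker run --rm --entrypoint /usr/bin/tar ..." line above.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro",
		"-v", "some-profile-volume:/extractDir",
		"gcr.io/k8s-minikube/kicbase:v0.0.29",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
	}
}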
I0127 03:17:03.570053 162215 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (11.223111505s)
I0127 03:17:03.570082 162215 containerd.go:562] Took 11.223225 seconds to extract the tarball
I0127 03:17:03.570093 162215 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0127 03:17:03.660335 162215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 03:17:03.871641 162215 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0127 03:17:04.105253 162215 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 03:17:04.124625 162215 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.20.0 k8s.gcr.io/kube-controller-manager:v1.20.0 k8s.gcr.io/kube-scheduler:v1.20.0 k8s.gcr.io/kube-proxy:v1.20.0 k8s.gcr.io/pause:3.2 k8s.gcr.io/etcd:3.4.13-0 k8s.gcr.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.3.1 docker.io/kubernetesui/metrics-scraper:v1.0.7]
I0127 03:17:04.124734 162215 image.go:134] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.7
I0127 03:17:04.124936 162215 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.20.0
I0127 03:17:04.125040 162215 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.20.0
I0127 03:17:04.125150 162215 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.20.0
I0127 03:17:04.125243 162215 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.20.0
I0127 03:17:04.125465 162215 image.go:134] retrieving image: k8s.gcr.io/pause:3.2
I0127 03:17:04.125633 162215 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.13-0
I0127 03:17:04.125741 162215 image.go:134] retrieving image: k8s.gcr.io/coredns:1.7.0
I0127 03:17:04.125832 162215 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0127 03:17:04.125921 162215 image.go:134] retrieving image: docker.io/kubernetesui/dashboard:v2.3.1
I0127 03:17:04.127466 162215 image.go:180] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: reference does not exist
I0127 03:17:04.127977 162215 image.go:180] daemon lookup for docker.io/kubernetesui/dashboard:v2.3.1: Error response from daemon: reference does not exist
I0127 03:17:04.128109 162215 image.go:180] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.7: Error response from daemon: reference does not exist
I0127 03:17:04.128136 162215 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.4.13-0: Error response from daemon: reference does not exist
I0127 03:17:04.128278 162215 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.20.0: Error response from daemon: reference does not exist
I0127 03:17:04.128413 162215 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.20.0: Error response from daemon: reference does not exist
I0127 03:17:04.128440 162215 image.go:180] daemon lookup for k8s.gcr.io/pause:3.2: Error response from daemon: reference does not exist
I0127 03:17:04.128545 162215 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.20.0: Error response from daemon: reference does not exist
I0127 03:17:04.128565 162215 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.20.0: Error response from daemon: reference does not exist
I0127 03:17:04.128669 162215 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.7.0: Error response from daemon: reference does not exist
I0127 03:17:04.418344 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.20.0"
I0127 03:17:04.419139 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.20.0"
I0127 03:17:04.424426 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.4.13-0"
I0127 03:17:04.427553 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns:1.7.0"
I0127 03:17:04.443893 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.20.0"
I0127 03:17:04.444493 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.20.0"
I0127 03:17:04.465910 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
I0127 03:17:04.509961 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.2"
I0127 03:17:05.015592 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/metrics-scraper:v1.0.7"
I0127 03:17:05.022587 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/dashboard:v2.3.1"
I0127 03:17:05.309789 162215 cache_images.go:123] Successfully loaded all cached images
I0127 03:17:05.309814 162215 cache_images.go:92] LoadImages completed in 1.185159822s
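Each ctr probe above asks containerd's k8s.io namespace whether a cached image is already present, using grep's exit status as the verdict; only missing images would then need a transfer. A sketch of one probe, run locally here rather than through ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// imagePresent mirrors the `sudo ctr -n=k8s.io images check | grep <ref>`
// probes above: grep exits non-zero when the reference is absent.
func imagePresent(ref string) bool {
	cmd := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("sudo ctr -n=k8s.io images check | grep %s", ref))
	return cmd.Run() == nil
}

func main() {
	fmt.Println(imagePresent("k8s.gcr.io/pause:3.2"))
}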
I0127 03:17:05.309874 162215 ssh_runner.go:195] Run: sudo crictl info
I0127 03:17:05.330052 162215 cni.go:93] Creating CNI manager for ""
I0127 03:17:05.330072 162215 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
I0127 03:17:05.330083 162215 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0127 03:17:05.330095 162215 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.59.48 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-20220127031538-6703 NodeName:running-upgrade-20220127031538-6703 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.59.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.59.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0127 03:17:05.330263 162215 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.59.48
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "running-upgrade-20220127031538-6703"
  kubeletExtraArgs:
    node-ip: 192.168.59.48
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.59.48"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
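The config above is rendered from the kubeadm options struct logged at kubeadm.go:158 by filling a text template. A much-reduced sketch of that rendering, trimmed to a handful of fields rather than minikube's full template:

package main

import (
	"os"
	"text/template"
)

// kubeadmParams is a trimmed stand-in for minikube's option struct.
type kubeadmParams struct {
	AdvertiseAddress  string
	APIServerPort     int
	KubernetesVersion string
	PodSubnet         string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, kubeadmParams{
		AdvertiseAddress:  "192.168.59.48",
		APIServerPort:     8443,
		KubernetesVersion: "v1.20.0",
		PodSubnet:         "10.244.0.0/16",
	})
}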
I0127 03:17:05.330371 162215 kubeadm.go:791] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=running-upgrade-20220127031538-6703 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.59.48 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:running-upgrade-20220127031538-6703 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0127 03:17:05.330429 162215 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0127 03:17:05.339921 162215 binaries.go:44] Found k8s binaries, skipping transfer
I0127 03:17:05.339997 162215 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0127 03:17:05.347885 162215 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (581 bytes)
I0127 03:17:05.363235 162215 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0127 03:17:05.408320 162215 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2088 bytes)
I0127 03:17:05.432119 162215 ssh_runner.go:195] Run: grep 192.168.59.48 control-plane.minikube.internal$ /etc/hosts
I0127 03:17:05.436673 162215 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703 for IP: 192.168.59.48
I0127 03:17:05.436796 162215 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.key
I0127 03:17:05.436850 162215 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.key
I0127 03:17:05.436973 162215 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/client.key
I0127 03:17:05.437053 162215 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/apiserver.key.fc40ab25
I0127 03:17:05.437109 162215 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/proxy-client.key
I0127 03:17:05.437225 162215 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/6703.pem (1338 bytes)
W0127 03:17:05.437268 162215 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/6703_empty.pem, impossibly tiny 0 bytes
I0127 03:17:05.437284 162215 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca-key.pem (1675 bytes)
I0127 03:17:05.437316 162215 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/ca.pem (1078 bytes)
I0127 03:17:05.437342 162215 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/cert.pem (1123 bytes)
I0127 03:17:05.437364 162215 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/key.pem (1675 bytes)
I0127 03:17:05.437417 162215 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/67032.pem (1708 bytes)
I0127 03:17:05.438513 162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0127 03:17:05.461440 162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0127 03:17:05.516402 162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0127 03:17:05.537977 162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0127 03:17:05.561369 162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0127 03:17:05.629987 162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0127 03:17:05.657572 162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0127 03:17:05.724754 162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0127 03:17:05.808160 162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/files/etc/ssl/certs/67032.pem --> /usr/share/ca-certificates/67032.pem (1708 bytes)
I0127 03:17:05.914815 162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0127 03:17:05.937255 162215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/certs/6703.pem --> /usr/share/ca-certificates/6703.pem (1338 bytes)
I0127 03:17:05.960381 162215 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0127 03:17:06.036900 162215 ssh_runner.go:195] Run: openssl version
I0127 03:17:06.042924 162215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67032.pem && ln -fs /usr/share/ca-certificates/67032.pem /etc/ssl/certs/67032.pem"
I0127 03:17:06.064802 162215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67032.pem
I0127 03:17:06.068353 162215 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 27 02:47 /usr/share/ca-certificates/67032.pem
I0127 03:17:06.068400 162215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67032.pem
I0127 03:17:06.074066 162215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67032.pem /etc/ssl/certs/3ec20f2e.0"
I0127 03:17:06.104715 162215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0127 03:17:06.112573 162215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0127 03:17:06.115981 162215 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 27 02:42 /usr/share/ca-certificates/minikubeCA.pem
I0127 03:17:06.116036 162215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0127 03:17:06.120843 162215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0127 03:17:06.127821 162215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6703.pem && ln -fs /usr/share/ca-certificates/6703.pem /etc/ssl/certs/6703.pem"
I0127 03:17:06.136027 162215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6703.pem
I0127 03:17:06.139199 162215 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 27 02:47 /usr/share/ca-certificates/6703.pem
I0127 03:17:06.139245 162215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6703.pem
I0127 03:17:06.144267 162215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6703.pem /etc/ssl/certs/51391683.0"
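The openssl/ln pairs above install each certificate under /etc/ssl/certs/<subject-hash>.0, the hashed-directory form OpenSSL uses to look up CAs. The same two steps from Go, executed locally for the sketch (minikube drives them through ssh_runner on the node):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of certPath and
// symlinks /etc/ssl/certs/<hash>.0 to it, as in the log lines above.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	return exec.Command("sudo", "ln", "-fs", certPath, link).Run()
}

func main() {
	fmt.Println(linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"))
}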
I0127 03:17:06.151276 162215 kubeadm.go:388] StartCluster: {Name:running-upgrade-20220127031538-6703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:running-upgrade-20220127031538-6703 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.48 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false}
I0127 03:17:06.151365 162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0127 03:17:06.151395 162215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0127 03:17:06.167608 162215 cri.go:87] found id: ""
I0127 03:17:06.167655 162215 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0127 03:17:06.207586 162215 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0127 03:17:06.215687 162215 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0127 03:17:06.216403 162215 kubeconfig.go:116] verify returned: extract IP: "running-upgrade-20220127031538-6703" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig
I0127 03:17:06.216619 162215 kubeconfig.go:127] "running-upgrade-20220127031538-6703" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig - will repair!
I0127 03:17:06.217195 162215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/kubeconfig: {Name:mk52def711e0760588c8e7c9e046110fe006e484 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 03:17:06.241403 162215 kapi.go:59] client config for running-upgrade-20220127031538-6703: &rest.Config{Host:"https://192.168.59.48:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/profiles/running-upgrade-20220127031538-6703/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-3373-c4800a61159ffc3ce43d26d0a2acbbe0889dab73/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x15da7e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0127 03:17:06.243260 162215 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0127 03:17:06.251876 162215 kubeadm.go:593] needs reconfigure: configs differ:
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml 2022-01-27 03:16:10.898540450 +0000
+++ /var/tmp/minikube/kubeadm.yaml.new 2022-01-27 03:17:05.423671678 +0000
@@ -65,4 +65,10 @@
 apiVersion: kubeproxy.config.k8s.io/v1alpha1
 kind: KubeProxyConfiguration
 clusterCIDR: "10.244.0.0/16"
-metricsBindAddress: 192.168.59.48:10249
+metricsBindAddress: 0.0.0.0:10249
+conntrack:
+  maxPerCore: 0
+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
+  tcpEstablishedTimeout: 0s
+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
+  tcpCloseWaitTimeout: 0s
-- /stdout --
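The needs-reconfigure verdict above reduces to diffing the kubeadm.yaml already on the node against the freshly rendered kubeadm.yaml.new: diff exiting 1 means the configs drifted, which triggers the reset-and-reinit below. A minimal sketch of that check:

package main

import (
	"fmt"
	"os/exec"
)

// configsDiffer runs `diff -u old new`; exit status 1 means "differ",
// 0 means identical, anything else is a real error (e.g. missing file).
func configsDiffer(oldPath, newPath string) (bool, error) {
	err := exec.Command("diff", "-u", oldPath, newPath).Run()
	if err == nil {
		return false, nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, nil
	}
	return false, err
}

func main() {
	fmt.Println(configsDiffer("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new"))
}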
I0127 03:17:06.251922 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0127 03:17:06.932139 162215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0127 03:17:06.943350 162215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 03:17:06.959832 162215 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
I0127 03:17:06.959882 162215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 03:17:06.968121 162215 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 03:17:06.968171 162215 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
W0127 03:17:07.446011 162215 out.go:241] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.11.0-1028-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.11.0-1028-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-8443]: Port 8443 is in use
[ERROR Port-10259]: Port 10259 is in use
[ERROR Port-10257]: Port 10257 is in use
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
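The preflight failure above is the heart of this test's breakage: the v1.20.0 control plane started by the old binary is still bound to 8443, 10259, 10257, 2379 and 2380, so kubeadm init refuses to proceed. Its port checks amount to attempting to bind; a sketch of the same probe:

package main

import (
	"fmt"
	"net"
)

// portInUse tries to bind the port; failure to listen is taken to mean
// another process (here, the old control plane) already owns it.
func portInUse(port int) bool {
	ln, err := net.Listen("tcp", fmt.Sprintf(":%d", port))
	if err != nil {
		return true
	}
	ln.Close()
	return false
}

func main() {
	for _, p := range []int{8443, 10259, 10257, 2379, 2380} {
		fmt.Printf("port %d in use: %v\n", p, portInUse(p))
	}
}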
I0127 03:17:07.446055 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0127 03:17:07.512319 162215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0127 03:17:07.522160 162215 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
I0127 03:17:07.522213 162215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 03:17:07.529324 162215 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 03:17:07.529370 162215 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0127 03:17:07.690870 162215 kubeadm.go:390] StartCluster complete in 1.539598545s
I0127 03:17:07.690939 162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0127 03:17:07.690986 162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0127 03:17:07.704611 162215 cri.go:87] found id: ""
I0127 03:17:07.704637 162215 logs.go:274] 0 containers: []
W0127 03:17:07.704645 162215 logs.go:276] No container was found matching "kube-apiserver"
I0127 03:17:07.704668 162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0127 03:17:07.704745 162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0127 03:17:07.717933 162215 cri.go:87] found id: ""
I0127 03:17:07.717961 162215 logs.go:274] 0 containers: []
W0127 03:17:07.717971 162215 logs.go:276] No container was found matching "etcd"
I0127 03:17:07.717979 162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0127 03:17:07.718026 162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0127 03:17:07.731059 162215 cri.go:87] found id: ""
I0127 03:17:07.731079 162215 logs.go:274] 0 containers: []
W0127 03:17:07.731085 162215 logs.go:276] No container was found matching "coredns"
I0127 03:17:07.731090 162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0127 03:17:07.731152 162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0127 03:17:07.745381 162215 cri.go:87] found id: ""
I0127 03:17:07.745402 162215 logs.go:274] 0 containers: []
W0127 03:17:07.745408 162215 logs.go:276] No container was found matching "kube-scheduler"
I0127 03:17:07.745417 162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0127 03:17:07.745455 162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0127 03:17:07.762094 162215 cri.go:87] found id: ""
I0127 03:17:07.762125 162215 logs.go:274] 0 containers: []
W0127 03:17:07.762133 162215 logs.go:276] No container was found matching "kube-proxy"
I0127 03:17:07.762142 162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0127 03:17:07.762183 162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0127 03:17:07.775553 162215 cri.go:87] found id: ""
I0127 03:17:07.775580 162215 logs.go:274] 0 containers: []
W0127 03:17:07.775586 162215 logs.go:276] No container was found matching "kubernetes-dashboard"
I0127 03:17:07.775591 162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0127 03:17:07.775638 162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0127 03:17:07.789738 162215 cri.go:87] found id: ""
I0127 03:17:07.789766 162215 logs.go:274] 0 containers: []
W0127 03:17:07.789774 162215 logs.go:276] No container was found matching "storage-provisioner"
I0127 03:17:07.789782 162215 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0127 03:17:07.789830 162215 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0127 03:17:07.803044 162215 cri.go:87] found id: ""
I0127 03:17:07.803071 162215 logs.go:274] 0 containers: []
W0127 03:17:07.803078 162215 logs.go:276] No container was found matching "kube-controller-manager"
I0127 03:17:07.803086 162215 logs.go:123] Gathering logs for kubelet ...
I0127 03:17:07.803117 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0127 03:17:07.895271 162215 logs.go:123] Gathering logs for dmesg ...
I0127 03:17:07.895305 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0127 03:17:07.915018 162215 logs.go:123] Gathering logs for describe nodes ...
I0127 03:17:07.915058 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0127 03:17:08.197863 162215 logs.go:123] Gathering logs for containerd ...
I0127 03:17:08.197892 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0127 03:17:08.257172 162215 logs.go:123] Gathering logs for container status ...
I0127 03:17:08.257213 162215 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0127 03:17:08.275508 162215 out.go:370] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.11.0-1028-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.11.0-1028-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-8443]: Port 8443 is in use
[ERROR Port-10259]: Port 10259 is in use
[ERROR Port-10257]: Port 10257 is in use
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W0127 03:17:08.275548 162215 out.go:241] *
W0127 03:17:08.275688 162215 out.go:241] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.11.0-1028-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.11.0-1028-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-8443]: Port 8443 is in use
[ERROR Port-10259]: Port 10259 is in use
[ERROR Port-10257]: Port 10257 is in use
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W0127 03:17:08.275702 162215 out.go:241] *
W0127 03:17:08.276469 162215 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0127 03:17:08.396612 162215 out.go:176]
W0127 03:17:08.396806 162215 out.go:241] X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.11.0-1028-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.11.0-1028-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-8443]: Port 8443 is in use
[ERROR Port-10259]: Port 10259 is in use
[ERROR Port-10257]: Port 10257 is in use
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W0127 03:17:08.396919 162215 out.go:241] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
W0127 03:17:08.396990 162215 out.go:241] * Related issue: https://github.com/kubernetes/minikube/issues/5484
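[editor] The five conflicting ports map to control-plane components still running from the old v1.16.0 cluster: 8443 (kube-apiserver), 10259 (kube-scheduler), 10257 (kube-controller-manager), and 2379/2380 (etcd client/peer). A sketch of how one might confirm which processes hold them from inside the node container (note that lsof matches a listening port with -i :<port>; the -p flag in the suggestion above expects a PID, not a port). The container name is taken from this log:

# Open a shell in the node container:
docker exec -it running-upgrade-20220127031538-6703 bash
# List listeners on the conflicting ports:
sudo ss -ltnp | grep -E ':(8443|10259|10257|2379|2380)\b'
# Or per port with lsof, if installed:
sudo lsof -i :8443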
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
*
* ==> containerd <==
* -- Logs begin at Thu 2022-01-27 03:15:48 UTC, end at Thu 2022-01-27 03:17:10 UTC. --
Jan 27 03:17:04 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:04.145767423Z" level=info msg="Start streaming server"
Jan 27 03:17:05 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:05.643542857Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-d9kxz,Uid:7573f936-998f-42ea-834e-ae5675f3e07d,Namespace:kube-system,Attempt:0,}"
Jan 27 03:17:05 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:05.644142734Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-g9t9p,Uid:1488662f-9013-40e3-bbf5-e3fafc03bffc,Namespace:kube-system,Attempt:0,}"
Jan 27 03:17:05 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:05.644297349Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-scheduler-running-upgrade-20220127031538-6703,Uid:3478da2c440ba32fb6c087b3f3b99813,Namespace:kube-system,Attempt:0,}"
Jan 27 03:17:05 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:05.644388013Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-apiserver-running-upgrade-20220127031538-6703,Uid:8d4a75d38cddca902e7c95dda0b36b76,Namespace:kube-system,Attempt:0,}"
Jan 27 03:17:05 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:05.644469219Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-controller-manager-running-upgrade-20220127031538-6703,Uid:a3e7be694ef7cf952503c5d331abc0ac,Namespace:kube-system,Attempt:0,}"
Jan 27 03:17:05 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:05.644662192Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:etcd-running-upgrade-20220127031538-6703,Uid:047b1dabd2a0c8bbc03a956e423aeb4e,Namespace:kube-system,Attempt:0,}"
Jan 27 03:17:05 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:05.952733661Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d9kxz,Uid:7573f936-998f-42ea-834e-ae5675f3e07d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = NotFound desc = failed to get sandbox image \"k8s.gcr.io/pause:3.2\": failed to pull image \"k8s.gcr.io/pause:3.2\": failed to pull and unpack image \"k8s.gcr.io/pause:3.2\": blob sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f expected at /var/lib/containerd/io.containerd.content.v1.content/blobs/sha256/927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f: not found"
Jan 27 03:17:06 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:06.371307697Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-running-upgrade-20220127031538-6703,Uid:8d4a75d38cddca902e7c95dda0b36b76,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = NotFound desc = failed to get sandbox image \"k8s.gcr.io/pause:3.2\": failed to pull image \"k8s.gcr.io/pause:3.2\": failed to pull and unpack image \"k8s.gcr.io/pause:3.2\": blob sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f expected at /var/lib/containerd/io.containerd.content.v1.content/blobs/sha256/927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f: not found"
Jan 27 03:17:06 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:06.422554742Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-running-upgrade-20220127031538-6703,Uid:a3e7be694ef7cf952503c5d331abc0ac,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = NotFound desc = failed to get sandbox image \"k8s.gcr.io/pause:3.2\": failed to pull image \"k8s.gcr.io/pause:3.2\": failed to pull and unpack image \"k8s.gcr.io/pause:3.2\": blob sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f expected at /var/lib/containerd/io.containerd.content.v1.content/blobs/sha256/927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f: not found"
Jan 27 03:17:06 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:06.509206236Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-running-upgrade-20220127031538-6703,Uid:3478da2c440ba32fb6c087b3f3b99813,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = NotFound desc = failed to get sandbox image \"k8s.gcr.io/pause:3.2\": failed to pull image \"k8s.gcr.io/pause:3.2\": failed to pull and unpack image \"k8s.gcr.io/pause:3.2\": blob sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f expected at /var/lib/containerd/io.containerd.content.v1.content/blobs/sha256/927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f: not found"
Jan 27 03:17:06 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:06.528476770Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-running-upgrade-20220127031538-6703,Uid:047b1dabd2a0c8bbc03a956e423aeb4e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = NotFound desc = failed to get sandbox image \"k8s.gcr.io/pause:3.2\": failed to pull image \"k8s.gcr.io/pause:3.2\": failed to pull and unpack image \"k8s.gcr.io/pause:3.2\": blob sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f expected at /var/lib/containerd/io.containerd.content.v1.content/blobs/sha256/927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f: not found"
Jan 27 03:17:06 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:06.530614206Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-g9t9p,Uid:1488662f-9013-40e3-bbf5-e3fafc03bffc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = NotFound desc = failed to get sandbox image \"k8s.gcr.io/pause:3.2\": failed to pull image \"k8s.gcr.io/pause:3.2\": failed to pull and unpack image \"k8s.gcr.io/pause:3.2\": blob sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f expected at /var/lib/containerd/io.containerd.content.v1.content/blobs/sha256/927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f: not found"
Jan 27 03:17:06 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:06.642365289Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-controller-manager-running-upgrade-20220127031538-6703,Uid:a3e7be694ef7cf952503c5d331abc0ac,Namespace:kube-system,Attempt:0,}"
Jan 27 03:17:06 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:06.642422912Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-scheduler-running-upgrade-20220127031538-6703,Uid:3478da2c440ba32fb6c087b3f3b99813,Namespace:kube-system,Attempt:0,}"
Jan 27 03:17:06 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:06.642369399Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-apiserver-running-upgrade-20220127031538-6703,Uid:8d4a75d38cddca902e7c95dda0b36b76,Namespace:kube-system,Attempt:0,}"
Jan 27 03:17:06 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:06.642652848Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-g9t9p,Uid:1488662f-9013-40e3-bbf5-e3fafc03bffc,Namespace:kube-system,Attempt:0,}"
Jan 27 03:17:06 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:06.642828011Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:etcd-running-upgrade-20220127031538-6703,Uid:047b1dabd2a0c8bbc03a956e423aeb4e,Namespace:kube-system,Attempt:0,}"
Jan 27 03:17:06 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:06.642866758Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-d9kxz,Uid:7573f936-998f-42ea-834e-ae5675f3e07d,Namespace:kube-system,Attempt:0,}"
Jan 27 03:17:06 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:06.960099940Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-g9t9p,Uid:1488662f-9013-40e3-bbf5-e3fafc03bffc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Canceled desc = failed to get sandbox image \"k8s.gcr.io/pause:3.2\": failed to pull image \"k8s.gcr.io/pause:3.2\": failed to pull and unpack image \"k8s.gcr.io/pause:3.2\": failed to resolve reference \"k8s.gcr.io/pause:3.2\": failed to do request: context canceled"
Jan 27 03:17:06 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:06.969463097Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-running-upgrade-20220127031538-6703,Uid:3478da2c440ba32fb6c087b3f3b99813,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Canceled desc = failed to get sandbox image \"k8s.gcr.io/pause:3.2\": failed to pull image \"k8s.gcr.io/pause:3.2\": failed to pull and unpack image \"k8s.gcr.io/pause:3.2\": failed to resolve reference \"k8s.gcr.io/pause:3.2\": failed to do request: context canceled"
Jan 27 03:17:06 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:06.992096286Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-running-upgrade-20220127031538-6703,Uid:8d4a75d38cddca902e7c95dda0b36b76,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Canceled desc = failed to get sandbox image \"k8s.gcr.io/pause:3.2\": failed to pull image \"k8s.gcr.io/pause:3.2\": failed to pull and unpack image \"k8s.gcr.io/pause:3.2\": failed to resolve reference \"k8s.gcr.io/pause:3.2\": failed to do request: context canceled"
Jan 27 03:17:07 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:07.049775211Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-running-upgrade-20220127031538-6703,Uid:047b1dabd2a0c8bbc03a956e423aeb4e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Canceled desc = failed to get sandbox image \"k8s.gcr.io/pause:3.2\": failed to pull image \"k8s.gcr.io/pause:3.2\": failed to pull and unpack image \"k8s.gcr.io/pause:3.2\": failed to resolve reference \"k8s.gcr.io/pause:3.2\": failed to do request: context canceled"
Jan 27 03:17:07 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:07.104427763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d9kxz,Uid:7573f936-998f-42ea-834e-ae5675f3e07d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Canceled desc = failed to get sandbox image \"k8s.gcr.io/pause:3.2\": failed to pull image \"k8s.gcr.io/pause:3.2\": failed to pull and unpack image \"k8s.gcr.io/pause:3.2\": failed to resolve reference \"k8s.gcr.io/pause:3.2\": failed to do request: context canceled"
Jan 27 03:17:07 running-upgrade-20220127031538-6703 containerd[2913]: time="2022-01-27T03:17:07.145721347Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-running-upgrade-20220127031538-6703,Uid:a3e7be694ef7cf952503c5d331abc0ac,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = NotFound desc = failed to get sandbox image \"k8s.gcr.io/pause:3.2\": failed to pull image \"k8s.gcr.io/pause:3.2\": failed to pull and unpack image \"k8s.gcr.io/pause:3.2\": blob sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f expected at /var/lib/containerd/io.containerd.content.v1.content/blobs/sha256/927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f: not found"
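[editor] Every sandbox launch above fails on the same missing blob for k8s.gcr.io/pause:3.2: the local containerd content store under /var/lib/containerd is missing a layer, and the fallback network pull was canceled when the kubelet shut down. A sketch of how one might restore the sandbox image by re-pulling it, assuming crictl/ctr are available on the node:

# Re-pull the pause image named in the errors above:
sudo crictl pull k8s.gcr.io/pause:3.2
# Equivalent, going through containerd directly in the namespace CRI uses:
sudo ctr -n k8s.io images pull k8s.gcr.io/pause:3.2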
*
* ==> describe nodes <==
* Name: running-upgrade-20220127031538-6703
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=running-upgrade-20220127031538-6703
kubernetes.io/os=linux
minikube.k8s.io/commit=9f1e482427589ff8451c4723b6ba53bb9742fbb1
minikube.k8s.io/name=running-upgrade-20220127031538-6703
minikube.k8s.io/updated_at=2022_01_27T03_16_34_0700
minikube.k8s.io/version=v1.16.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 27 Jan 2022 03:16:26 +0000
Taints: node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: running-upgrade-20220127031538-6703
AcquireTime: <unset>
RenewTime: Thu, 27 Jan 2022 03:17:01 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 27 Jan 2022 03:17:01 +0000 Thu, 27 Jan 2022 03:16:22 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 27 Jan 2022 03:17:01 +0000 Thu, 27 Jan 2022 03:16:22 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 27 Jan 2022 03:17:01 +0000 Thu, 27 Jan 2022 03:16:22 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Thu, 27 Jan 2022 03:17:01 +0000 Thu, 27 Jan 2022 03:16:41 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Addresses:
InternalIP: 192.168.59.48
Hostname: running-upgrade-20220127031538-6703
Capacity:
cpu: 8
ephemeral-storage: 304695084Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32879776Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304695084Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32879776Ki
pods: 110
System Info:
Machine ID: 8f006d88ab0e4ddfa46e7b7e641ee4b5
System UUID: 290805a4-ff96-4709-8064-a94b26b5c979
Boot ID: 2a5b9f9a-2bf2-4729-9d70-81647bd52771
Kernel Version: 5.11.0-1028-gcp
OS Image: Ubuntu 20.04.1 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.4.3
Kubelet Version: v1.20.0
Kube-Proxy Version: v1.20.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system etcd-running-upgrade-20220127031538-6703 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 39s
kube-system kindnet-g9t9p 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 23s
kube-system kube-apiserver-running-upgrade-20220127031538-6703 250m (3%) 0 (0%) 0 (0%) 0 (0%) 39s
kube-system kube-controller-manager-running-upgrade-20220127031538-6703 200m (2%) 0 (0%) 0 (0%) 0 (0%) 39s
kube-system kube-proxy-d9kxz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 23s
kube-system kube-scheduler-running-upgrade-20220127031538-6703 100m (1%) 0 (0%) 0 (0%) 0 (0%) 39s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (9%) 100m (1%)
memory 150Mi (0%) 50Mi (0%)
ephemeral-storage 100Mi (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 50s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 50s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 49s (x7 over 50s) kubelet Node running-upgrade-20220127031538-6703 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 49s (x7 over 50s) kubelet Node running-upgrade-20220127031538-6703 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 49s (x6 over 50s) kubelet Node running-upgrade-20220127031538-6703 status is now: NodeHasSufficientPID
Normal Starting 39s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 39s kubelet Node running-upgrade-20220127031538-6703 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 39s kubelet Node running-upgrade-20220127031538-6703 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 39s kubelet Node running-upgrade-20220127031538-6703 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 39s kubelet Updated Node Allocatable limit across pods
Normal NodeNotReady 29s kubelet Node running-upgrade-20220127031538-6703 status is now: NodeNotReady
Normal Starting 22s kube-proxy Starting kube-proxy.
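[editor] The Ready=False condition above traces the NodeNotReady event to an uninitialized CNI plugin rather than to the port conflicts. A sketch for pulling just that condition out of the node object, using the context and node name from this run:

kubectl --context running-upgrade-20220127031538-6703 \
  get node running-upgrade-20220127031538-6703 \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'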
*
* ==> dmesg <==
* [ +0.000259] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth77835499
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff 6a 9f 15 52 14 12 08 06
[Jan27 03:02] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth23474242
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 5f 5c 02 63 f5 08 06
[ +0.952463] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth6a565172
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e 93 08 55 00 8f 08 06
[Jan27 03:05] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethf5d45a43
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff 6a a7 5f 08 08 bf 08 06
[ +0.972599] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vethc4d22b01
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 2e de 43 25 aa 3b 08 06
[Jan27 03:08] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth3295a1cb
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 7c 93 33 3b 31 08 06
[Jan27 03:09] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth269009dd
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 96 cf 3c da 5a e9 08 06
[Jan27 03:10] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethd4518a3b
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff 02 92 1c df b7 d3 08 06
[Jan27 03:13] process 'docker/tmp/qemu-check352080006/check' started with executable stack
[ +2.712247] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethfaae05be
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a f9 5f 21 1c bd 08 06
[ +1.484257] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth07e4b604
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 31 3e aa c4 4d 08 06
[Jan27 03:16] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth3c97916f
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff 16 68 a1 00 34 f0 08 06
[ +29.713722] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethab942597
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 0a 29 19 cd 53 cf 08 06
*
* ==> kernel <==
* 03:17:10 up 59 min, 0 users, load average: 7.67, 4.76, 2.48
Linux running-upgrade-20220127031538-6703 5.11.0-1028-gcp #32~20.04.1-Ubuntu SMP Wed Jan 12 20:08:27 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.1 LTS"
*
* ==> kubelet <==
* -- Logs begin at Thu 2022-01-27 03:15:48 UTC, end at Thu 2022-01-27 03:17:10 UTC. --
Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.916350 2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.916536 2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.916703 2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.916851 2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.917007 2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.917161 2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.917319 2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.917551 2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.917719 2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.917878 2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.918030 2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.918184 2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.918343 2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.918496 2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.918661 2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.918816 2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.918986 2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.919679 2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.919965 2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.920185 2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.920373 2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
Jan 27 03:16:50 running-upgrade-20220127031538-6703 kubelet[2119]: W0127 03:16:50.920556 2119 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout". Reconnecting...
Jan 27 03:17:06 running-upgrade-20220127031538-6703 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
Jan 27 03:17:06 running-upgrade-20220127031538-6703 systemd[1]: kubelet.service: Succeeded.
Jan 27 03:17:06 running-upgrade-20220127031538-6703 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
-- /stdout --
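[editor] The kubelet section above shows the old kubelet (pid 2119) repeatedly failing to dial /run/containerd/containerd.sock at 03:16:50, while the restarted containerd (pid 2913) only brought its streaming server up at 03:17:04; systemd then stops the kubelet at 03:17:06. A sketch of how one might verify the socket is reachable on the node (ctr's --address flag points it at the same socket the kubelet dials):

sudo systemctl status containerd kubelet
sudo ctr --address /run/containerd/containerd.sock version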
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p running-upgrade-20220127031538-6703 -n running-upgrade-20220127031538-6703
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p running-upgrade-20220127031538-6703 -n running-upgrade-20220127031538-6703: exit status 2 (604.029969ms)
-- stdout --
Running
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run: kubectl --context running-upgrade-20220127031538-6703 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: coredns-74ff55c5b-9mfqg etcd-running-upgrade-20220127031538-6703 kindnet-g9t9p kube-apiserver-running-upgrade-20220127031538-6703 kube-controller-manager-running-upgrade-20220127031538-6703 kube-proxy-d9kxz kube-scheduler-running-upgrade-20220127031538-6703 storage-provisioner
helpers_test.go:273: ======> post-mortem[TestRunningBinaryUpgrade]: describe non-running pods <======
helpers_test.go:276: (dbg) Run: kubectl --context running-upgrade-20220127031538-6703 describe pod coredns-74ff55c5b-9mfqg etcd-running-upgrade-20220127031538-6703 kindnet-g9t9p kube-apiserver-running-upgrade-20220127031538-6703 kube-controller-manager-running-upgrade-20220127031538-6703 kube-proxy-d9kxz kube-scheduler-running-upgrade-20220127031538-6703 storage-provisioner
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context running-upgrade-20220127031538-6703 describe pod coredns-74ff55c5b-9mfqg etcd-running-upgrade-20220127031538-6703 kindnet-g9t9p kube-apiserver-running-upgrade-20220127031538-6703 kube-controller-manager-running-upgrade-20220127031538-6703 kube-proxy-d9kxz kube-scheduler-running-upgrade-20220127031538-6703 storage-provisioner: exit status 1 (93.801982ms)
** stderr **
Error from server (NotFound): pods "coredns-74ff55c5b-9mfqg" not found
Error from server (NotFound): pods "etcd-running-upgrade-20220127031538-6703" not found
Error from server (NotFound): pods "kindnet-g9t9p" not found
Error from server (NotFound): pods "kube-apiserver-running-upgrade-20220127031538-6703" not found
Error from server (NotFound): pods "kube-controller-manager-running-upgrade-20220127031538-6703" not found
Error from server (NotFound): pods "kube-proxy-d9kxz" not found
Error from server (NotFound): pods "kube-scheduler-running-upgrade-20220127031538-6703" not found
Error from server (NotFound): pods "storage-provisioner" not found
** /stderr **
helpers_test.go:278: kubectl --context running-upgrade-20220127031538-6703 describe pod coredns-74ff55c5b-9mfqg etcd-running-upgrade-20220127031538-6703 kindnet-g9t9p kube-apiserver-running-upgrade-20220127031538-6703 kube-controller-manager-running-upgrade-20220127031538-6703 kube-proxy-d9kxz kube-scheduler-running-upgrade-20220127031538-6703 storage-provisioner: exit status 1
helpers_test.go:176: Cleaning up "running-upgrade-20220127031538-6703" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p running-upgrade-20220127031538-6703
=== CONT TestRunningBinaryUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20220127031538-6703: (2.717611934s)
--- FAIL: TestRunningBinaryUpgrade (95.92s)
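[editor] To iterate on this failure locally, one might re-run just this test from a minikube checkout with standard go test flags (a sketch: the suite also accepts its own flags, and the binary under out/ must be built first, e.g. via make):

go test ./test/integration -run TestRunningBinaryUpgrade -v -timeout 30m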