=== RUN TestOffline
=== PAUSE TestOffline
=== CONT TestOffline
aab_offline_test.go:56: (dbg) Run: out/minikube-linux-amd64 start -p offline-containerd-20220202225402-591014 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker --container-runtime=containerd
=== CONT TestOffline
aab_offline_test.go:56: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p offline-containerd-20220202225402-591014 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker --container-runtime=containerd: exit status 80 (8m46.958410515s)
-- stdout --
* [offline-containerd-20220202225402-591014] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=13251
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Using the docker driver based on user configuration
- More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
* Starting control plane node offline-containerd-20220202225402-591014 in cluster offline-containerd-20220202225402-591014
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2048MB) ...
* docker "offline-containerd-20220202225402-591014" container is missing, will recreate.
* Creating docker container (CPUs=2, Memory=2048MB) ...
* Found network options:
- HTTP_PROXY=172.16.1.1:1
* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
* Preparing Kubernetes v1.23.2 on containerd 1.4.12 ...
- env HTTP_PROXY=172.16.1.1:1
- kubelet.housekeeping-interval=5m
- kubelet.cni-conf-dir=/etc/cni/net.mk
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
-- /stdout --
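
Aside on the harness: the "(dbg) Run:" line at the top is the integration test shelling out to the freshly built minikube binary and asserting on its exit status (here: exit status 80 after 8m46s). A minimal sketch of that pattern in Go, with illustrative names rather than minikube's actual test helpers:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// runMinikube mirrors the "(dbg) Run:" line above: invoke the built binary
// with a per-profile name and return the combined output together with the
// error carrying the exit status the test asserts on.
func runMinikube(ctx context.Context, profile string, args ...string) (string, error) {
	full := append([]string{"start", "-p", profile}, args...)
	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", full...)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()
	out, err := runMinikube(ctx, "offline-containerd-example",
		"--alsologtostderr", "-v=1", "--memory=2048", "--wait=true",
		"--driver=docker", "--container-runtime=containerd")
	fmt.Println(out)
	if err != nil {
		fmt.Println("start failed:", err) // e.g. "exit status 80"
	}
}
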
** stderr **
I0202 22:54:02.950553 708547 out.go:297] Setting OutFile to fd 1 ...
I0202 22:54:02.950666 708547 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0202 22:54:02.950681 708547 out.go:310] Setting ErrFile to fd 2...
I0202 22:54:02.950687 708547 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0202 22:54:02.950836 708547 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/bin
I0202 22:54:02.951259 708547 out.go:304] Setting JSON to false
I0202 22:54:02.952921 708547 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":23794,"bootTime":1643818649,"procs":616,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0202 22:54:02.953007 708547 start.go:122] virtualization: kvm guest
I0202 22:54:02.956259 708547 out.go:176] * [offline-containerd-20220202225402-591014] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
I0202 22:54:02.959752 708547 out.go:176] - MINIKUBE_LOCATION=13251
I0202 22:54:02.956528 708547 notify.go:174] Checking for updates...
I0202 22:54:02.962567 708547 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0202 22:54:02.965613 708547 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
I0202 22:54:02.968229 708547 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube
I0202 22:54:02.970291 708547 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64
I0202 22:54:02.970810 708547 driver.go:344] Setting default libvirt URI to qemu:///system
I0202 22:54:03.026831 708547 docker.go:132] docker version: linux-20.10.12
I0202 22:54:03.026959 708547 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0202 22:54:03.161119 708547 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:38 SystemTime:2022-02-02 22:54:03.070251829 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
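
The docker system info --format "{{json .}}" run above is how minikube checks that the daemon is healthy and which cgroup capabilities it offers (note the MemoryLimit/SwapLimit fields in the dump). A minimal sketch of that probe, decoding only a hand-picked subset of the fields shown in the log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo holds just a few of the fields visible in the info.go:263
// dump above; the real payload is much larger.
type dockerInfo struct {
	ServerVersion string `json:"ServerVersion"`
	MemTotal      int64  `json:"MemTotal"`
	MemoryLimit   bool   `json:"MemoryLimit"`
	SwapLimit     bool   `json:"SwapLimit"`
	CgroupDriver  string `json:"CgroupDriver"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("docker %s, cgroup driver %s, memory limit support: %v\n",
		info.ServerVersion, info.CgroupDriver, info.MemoryLimit)
}
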
I0202 22:54:03.161299 708547 docker.go:237] overlay module found
I0202 22:54:03.165576 708547 out.go:176] * Using the docker driver based on user configuration
I0202 22:54:03.165627 708547 start.go:281] selected driver: docker
I0202 22:54:03.165636 708547 start.go:798] validating driver "docker" against <nil>
I0202 22:54:03.165666 708547 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
W0202 22:54:03.165745 708547 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0202 22:54:03.165779 708547 out.go:241] ! Your cgroup does not allow setting memory.
! Your cgroup does not allow setting memory.
I0202 22:54:03.168041 708547 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
I0202 22:54:03.168846 708547 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0202 22:54:03.297772 708547 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:36 SystemTime:2022-02-02 22:54:03.20749307 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
I0202 22:54:03.297952 708547 start_flags.go:288] no existing cluster config was found, will generate one from the flags
I0202 22:54:03.298245 708547 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
I0202 22:54:03.298291 708547 start_flags.go:831] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0202 22:54:03.298329 708547 cni.go:93] Creating CNI manager for ""
I0202 22:54:03.298347 708547 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
I0202 22:54:03.298370 708547 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I0202 22:54:03.298383 708547 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I0202 22:54:03.298396 708547 start_flags.go:297] Found "CNI" CNI - setting NetworkPlugin=cni
I0202 22:54:03.298414 708547 start_flags.go:302] config:
{Name:offline-containerd-20220202225402-591014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:offline-containerd-20220202225402-591014 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
I0202 22:54:03.301251 708547 out.go:176] * Starting control plane node offline-containerd-20220202225402-591014 in cluster offline-containerd-20220202225402-591014
I0202 22:54:03.301317 708547 cache.go:120] Beginning downloading kic base image for docker with containerd
I0202 22:54:03.303211 708547 out.go:176] * Pulling base image ...
I0202 22:54:03.303258 708547 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime containerd
I0202 22:54:03.303299 708547 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-containerd-overlay2-amd64.tar.lz4
I0202 22:54:03.303313 708547 cache.go:57] Caching tarball of preloaded images
I0202 22:54:03.303352 708547 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
I0202 22:54:03.303535 708547 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0202 22:54:03.303552 708547 cache.go:60] Finished verifying existence of preloaded tar for v1.23.2 on containerd
I0202 22:54:03.303910 708547 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/config.json ...
I0202 22:54:03.303940 708547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/config.json: {Name:mkdc45ead08995ed5ac67432f5542bd8bf08248f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0202 22:54:03.352397 708547 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
I0202 22:54:03.352437 708547 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
I0202 22:54:03.352459 708547 cache.go:208] Successfully downloaded all kic artifacts
I0202 22:54:03.352514 708547 start.go:313] acquiring machines lock for offline-containerd-20220202225402-591014: {Name:mk0dcd7eb144d7d0cb44959855f779d6eeabf424 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0202 22:54:03.352691 708547 start.go:317] acquired machines lock for "offline-containerd-20220202225402-591014" in 148.686µs
I0202 22:54:03.352729 708547 start.go:89] Provisioning new machine with config: &{Name:offline-containerd-20220202225402-591014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:offline-containerd-20220202225402-591014 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0202 22:54:03.352854 708547 start.go:126] createHost starting for "" (driver="docker")
I0202 22:54:03.355796 708547 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
I0202 22:54:03.356122 708547 start.go:160] libmachine.API.Create for "offline-containerd-20220202225402-591014" (driver="docker")
I0202 22:54:03.356164 708547 client.go:168] LocalClient.Create starting
I0202 22:54:03.356256 708547 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem
I0202 22:54:03.356308 708547 main.go:130] libmachine: Decoding PEM data...
I0202 22:54:03.356333 708547 main.go:130] libmachine: Parsing certificate...
I0202 22:54:03.356425 708547 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem
I0202 22:54:03.356452 708547 main.go:130] libmachine: Decoding PEM data...
I0202 22:54:03.356485 708547 main.go:130] libmachine: Parsing certificate...
I0202 22:54:03.356923 708547 cli_runner.go:133] Run: docker network inspect offline-containerd-20220202225402-591014 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0202 22:54:03.394247 708547 cli_runner.go:180] docker network inspect offline-containerd-20220202225402-591014 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0202 22:54:03.394368 708547 network_create.go:254] running [docker network inspect offline-containerd-20220202225402-591014] to gather additional debugging logs...
I0202 22:54:03.394413 708547 cli_runner.go:133] Run: docker network inspect offline-containerd-20220202225402-591014
W0202 22:54:03.433997 708547 cli_runner.go:180] docker network inspect offline-containerd-20220202225402-591014 returned with exit code 1
I0202 22:54:03.434041 708547 network_create.go:257] error running [docker network inspect offline-containerd-20220202225402-591014]: docker network inspect offline-containerd-20220202225402-591014: exit status 1
stdout:
[]
stderr:
Error: No such network: offline-containerd-20220202225402-591014
I0202 22:54:03.434058 708547 network_create.go:259] output of [docker network inspect offline-containerd-20220202225402-591014]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: offline-containerd-20220202225402-591014
** /stderr **
I0202 22:54:03.434133 708547 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0202 22:54:03.475979 708547 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000132ae8] misses:0}
I0202 22:54:03.476047 708547 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0202 22:54:03.476069 708547 network_create.go:106] attempt to create docker network offline-containerd-20220202225402-591014 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0202 22:54:03.476123 708547 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-containerd-20220202225402-591014
W0202 22:54:03.520338 708547 cli_runner.go:180] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-containerd-20220202225402-591014 returned with exit code 1
W0202 22:54:03.520400 708547 network_create.go:98] failed to create docker network offline-containerd-20220202225402-591014 192.168.49.0/24, will retry: subnet is taken
I0202 22:54:03.521591 708547 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-a8c9a3a82325 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:ad:b4:e0:65}}
I0202 22:54:03.522439 708547 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000132ae8 192.168.58.0:0xc000010680] misses:0}
I0202 22:54:03.522480 708547 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0202 22:54:03.522493 708547 network_create.go:106] attempt to create docker network offline-containerd-20220202225402-591014 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0202 22:54:03.522544 708547 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-containerd-20220202225402-591014
I0202 22:54:03.623084 708547 network_create.go:90] docker network offline-containerd-20220202225402-591014 192.168.58.0/24 created
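
The network_create.go lines above show the subnet search: 192.168.49.0/24 is tried first, "docker network create" fails because the subnet is taken, and the code steps the third octet by 9 to 192.168.58.0/24 (and later to 192.168.67.0/24). A rough sketch of that loop, reusing the flag set from the log's docker invocation; the helper name and the error-string matching are assumptions:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// createMinikubeNetwork walks candidate /24 subnets the way the log above
// does: start at 192.168.49.0/24 and advance the third octet by 9 whenever
// the daemon reports an overlapping pool.
func createMinikubeNetwork(name string) (string, error) {
	for octet := 49; octet <= 247; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			name).CombinedOutput()
		if err == nil {
			return subnet, nil
		}
		// "Pool overlaps with other one on this address space" means the
		// subnet is taken; move on to the next candidate.
		if strings.Contains(string(out), "Pool overlaps") {
			continue
		}
		return "", fmt.Errorf("network create failed: %v: %s", err, out)
	}
	return "", fmt.Errorf("no free subnet found for %s", name)
}

func main() {
	subnet, err := createMinikubeNetwork("example-network")
	if err != nil {
		panic(err)
	}
	fmt.Println("created network on", subnet)
}
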
I0202 22:54:03.623136 708547 kic.go:106] calculated static IP "192.168.58.2" for the "offline-containerd-20220202225402-591014" container
I0202 22:54:03.623205 708547 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
I0202 22:54:03.665183 708547 cli_runner.go:133] Run: docker volume create offline-containerd-20220202225402-591014 --label name.minikube.sigs.k8s.io=offline-containerd-20220202225402-591014 --label created_by.minikube.sigs.k8s.io=true
I0202 22:54:03.705880 708547 oci.go:102] Successfully created a docker volume offline-containerd-20220202225402-591014
I0202 22:54:03.706033 708547 cli_runner.go:133] Run: docker run --rm --name offline-containerd-20220202225402-591014-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-containerd-20220202225402-591014 --entrypoint /usr/bin/test -v offline-containerd-20220202225402-591014:/var gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -d /var/lib
I0202 22:54:04.741915 708547 cli_runner.go:186] Completed: docker run --rm --name offline-containerd-20220202225402-591014-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-containerd-20220202225402-591014 --entrypoint /usr/bin/test -v offline-containerd-20220202225402-591014:/var gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -d /var/lib: (1.035823325s)
I0202 22:54:04.741960 708547 oci.go:106] Successfully prepared a docker volume offline-containerd-20220202225402-591014
I0202 22:54:04.741999 708547 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime containerd
I0202 22:54:04.742034 708547 kic.go:179] Starting extracting preloaded images to volume ...
I0202 22:54:04.742106 708547 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-containerd-20220202225402-591014:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir
I0202 22:54:27.017179 708547 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-containerd-20220202225402-591014:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir: (22.275023853s)
I0202 22:54:27.017209 708547 kic.go:188] duration metric: took 22.275173 seconds to extract preloaded images to volume
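
The 22-second docker run above is the preload step: the lz4 tarball of v1.23.2/containerd images is bind-mounted read-only into a throwaway kicbase container whose entrypoint is tar, which unpacks it into the cluster's named volume. A sketch of that invocation; the paths and image reference passed in main are placeholders:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload mirrors the docker run at 22:54:04 above: mount the
// tarball read-only, mount the named volume, and let tar inside the
// kicbase image unpack into it.
func extractPreload(tarball, volume, kicbaseRef string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		kicbaseRef,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	start := time.Now()
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract failed: %v: %s", err, out)
	}
	fmt.Printf("extracted preload in %s\n", time.Since(start)) // ~22s in the log
	return nil
}

func main() {
	if err := extractPreload(
		"/path/to/preloaded-images-k8s-v17-v1.23.2-containerd-overlay2-amd64.tar.lz4",
		"example-volume",
		"gcr.io/k8s-minikube/kicbase:v0.0.29",
	); err != nil {
		panic(err)
	}
}
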
W0202 22:54:27.017263 708547 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0202 22:54:27.017277 708547 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I0202 22:54:27.017348 708547 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
I0202 22:54:27.158911 708547 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname offline-containerd-20220202225402-591014 --name offline-containerd-20220202225402-591014 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-containerd-20220202225402-591014 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=offline-containerd-20220202225402-591014 --network offline-containerd-20220202225402-591014 --ip 192.168.58.2 --volume offline-containerd-20220202225402-591014:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b
W0202 22:54:27.254910 708547 cli_runner.go:180] docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname offline-containerd-20220202225402-591014 --name offline-containerd-20220202225402-591014 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-containerd-20220202225402-591014 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=offline-containerd-20220202225402-591014 --network offline-containerd-20220202225402-591014 --ip 192.168.58.2 --volume offline-containerd-20220202225402-591014:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b returned with exit code 125
I0202 22:54:27.254971 708547 client.go:171] LocalClient.Create took 23.898795069s
I0202 22:54:29.255769 708547 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0202 22:54:29.255854 708547 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220202225402-591014
W0202 22:54:29.297522 708547 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220202225402-591014 returned with exit code 1
I0202 22:54:29.297638 708547 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
I0202 22:54:29.574914 708547 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220202225402-591014
W0202 22:54:29.618850 708547 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220202225402-591014 returned with exit code 1
I0202 22:54:29.618945 708547 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
I0202 22:54:30.159556 708547 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220202225402-591014
W0202 22:54:30.198999 708547 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220202225402-591014 returned with exit code 1
I0202 22:54:30.199060 708547 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
I0202 22:54:30.854922 708547 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220202225402-591014
W0202 22:54:30.902959 708547 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220202225402-591014 returned with exit code 1
W0202 22:54:30.903090 708547 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
W0202 22:54:30.903108 708547 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
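
The retry.go:31 lines show the client-side retry: each failed inspect of the 22/tcp port mapping schedules another attempt after a growing, jittered delay (276ms, 540ms, 655ms here) until the surrounding operation gives up. A generic sketch of that pattern; the schedule below is illustrative, not minikube's actual backoff:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryAfter runs fn and, on failure, sleeps a randomized, growing delay
// before the next attempt, logging "will retry after ..." like the lines
// above.
func retryAfter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retryAfter(4, 250*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("unable to inspect a not running container to get SSH port")
		}
		return nil
	})
	fmt.Println("final:", err)
}
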
I0202 22:54:30.903118 708547 start.go:129] duration metric: createHost completed in 27.550255514s
I0202 22:54:30.903127 708547 start.go:80] releasing machines lock for "offline-containerd-20220202225402-591014", held for 27.550417069s
W0202 22:54:30.903167 708547 start.go:570] error starting host: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname offline-containerd-20220202225402-591014 --name offline-containerd-20220202225402-591014 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-containerd-20220202225402-591014 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=offline-containerd-20220202225402-591014 --network offline-containerd-20220202225402-591014 --ip 192.168.58.2 --volume offline-containerd-20220202225402-591014:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b: exit status 125
stdout:
dfc3dab0dc5627598c67b245cca353fe5fdde5613e2e147852525fa38222bf66
stderr:
docker: Error response from daemon: network offline-containerd-20220202225402-591014 not found.
I0202 22:54:30.903734 708547 cli_runner.go:133] Run: docker container inspect offline-containerd-20220202225402-591014 --format={{.State.Status}}
W0202 22:54:30.946367 708547 start.go:575] delete host: Docker machine "offline-containerd-20220202225402-591014" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
W0202 22:54:30.946629 708547 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname offline-containerd-20220202225402-591014 --name offline-containerd-20220202225402-591014 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-containerd-20220202225402-591014 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=offline-containerd-20220202225402-591014 --network offline-containerd-20220202225402-591014 --ip 192.168.58.2 --volume offline-containerd-20220202225402-591014:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b: exit status 125
stdout:
dfc3dab0dc5627598c67b245cca353fe5fdde5613e2e147852525fa38222bf66
stderr:
docker: Error response from daemon: network offline-containerd-20220202225402-591014 not found.
! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname offline-containerd-20220202225402-591014 --name offline-containerd-20220202225402-591014 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-containerd-20220202225402-591014 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=offline-containerd-20220202225402-591014 --network offline-containerd-20220202225402-591014 --ip 192.168.58.2 --volume offline-containerd-20220202225402-591014:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b: exit status 125
stdout:
dfc3dab0dc5627598c67b245cca353fe5fdde5613e2e147852525fa38222bf66
stderr:
docker: Error response from daemon: network offline-containerd-20220202225402-591014 not found.
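
The stderr above is the actual failure: docker run printed a container ID on stdout and then exited 125 because the network it was told to join no longer existed. That network was created at 22:54:03.62 but was gone by 22:54:27, consistent with a concurrent job tearing it down (note that 192.168.49.0/24 is backed by br-a8c9a3a82325 at 22:54:03 but by br-bff1edb0cc2e at 22:57:41). The shape of the error is easy to reproduce deliberately; a small sketch, with a hypothetical network name:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Create a bridge network, remove it again, then ask docker run to join
	// it: the daemon refuses with "network demo-net not found" and docker
	// run exits 125, the same status as in the log above.
	exec.Command("docker", "network", "create", "demo-net").Run()
	exec.Command("docker", "network", "rm", "demo-net").Run()
	out, err := exec.Command("docker", "run", "-d", "--network", "demo-net",
		"busybox", "sleep", "60").CombinedOutput()
	fmt.Printf("err=%v\noutput=%s\n", err, out) // err=exit status 125
}
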
I0202 22:54:30.946651 708547 start.go:585] Will try again in 5 seconds ...
I0202 22:54:35.948620 708547 start.go:313] acquiring machines lock for offline-containerd-20220202225402-591014: {Name:mk0dcd7eb144d7d0cb44959855f779d6eeabf424 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0202 22:54:35.948800 708547 start.go:317] acquired machines lock for "offline-containerd-20220202225402-591014" in 128.025µs
I0202 22:54:35.948834 708547 start.go:93] Skipping create...Using existing machine configuration
I0202 22:54:35.948847 708547 fix.go:55] fixHost starting:
I0202 22:54:35.949213 708547 cli_runner.go:133] Run: docker container inspect offline-containerd-20220202225402-591014 --format={{.State.Status}}
I0202 22:54:35.987124 708547 fix.go:108] recreateIfNeeded on offline-containerd-20220202225402-591014: state= err=<nil>
I0202 22:54:35.987158 708547 fix.go:113] machineExists: false. err=machine does not exist
I0202 22:54:36.249963 708547 out.go:176] * docker "offline-containerd-20220202225402-591014" container is missing, will recreate.
I0202 22:54:36.250014 708547 delete.go:124] DEMOLISHING offline-containerd-20220202225402-591014 ...
I0202 22:54:36.250108 708547 cli_runner.go:133] Run: docker container inspect offline-containerd-20220202225402-591014 --format={{.State.Status}}
I0202 22:54:36.287930 708547 stop.go:79] host is in state
I0202 22:54:36.288026 708547 main.go:130] libmachine: Stopping "offline-containerd-20220202225402-591014"...
I0202 22:54:36.288147 708547 cli_runner.go:133] Run: docker container inspect offline-containerd-20220202225402-591014 --format={{.State.Status}}
I0202 22:54:36.328491 708547 kic_runner.go:93] Run: systemctl --version
I0202 22:54:36.328526 708547 kic_runner.go:114] Args: [docker exec --privileged offline-containerd-20220202225402-591014 systemctl --version]
I0202 22:54:36.366993 708547 kic_runner.go:93] Run: sudo service kubelet stop
I0202 22:54:36.367039 708547 kic_runner.go:114] Args: [docker exec --privileged offline-containerd-20220202225402-591014 sudo service kubelet stop]
I0202 22:54:36.404198 708547 openrc.go:165] stop output:
** stderr **
Error response from daemon: Container dfc3dab0dc5627598c67b245cca353fe5fdde5613e2e147852525fa38222bf66 is not running
** /stderr **
W0202 22:54:36.404232 708547 kic.go:443] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
stdout:
stderr:
Error response from daemon: Container dfc3dab0dc5627598c67b245cca353fe5fdde5613e2e147852525fa38222bf66 is not running
I0202 22:54:36.404302 708547 kic_runner.go:93] Run: sudo service kubelet stop
I0202 22:54:36.404312 708547 kic_runner.go:114] Args: [docker exec --privileged offline-containerd-20220202225402-591014 sudo service kubelet stop]
I0202 22:54:36.446724 708547 openrc.go:165] stop output:
** stderr **
Error response from daemon: Container dfc3dab0dc5627598c67b245cca353fe5fdde5613e2e147852525fa38222bf66 is not running
** /stderr **
W0202 22:54:36.446764 708547 kic.go:445] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
stdout:
stderr:
Error response from daemon: Container dfc3dab0dc5627598c67b245cca353fe5fdde5613e2e147852525fa38222bf66 is not running
I0202 22:54:36.446809 708547 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
I0202 22:54:36.446901 708547 kic_runner.go:93] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
I0202 22:54:36.446920 708547 kic_runner.go:114] Args: [docker exec --privileged offline-containerd-20220202225402-591014 sudo -s eval crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator]
I0202 22:54:36.485469 708547 kic.go:456] unable list containers : crictl list: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator": exit status 1
stdout:
stderr:
Error response from daemon: Container dfc3dab0dc5627598c67b245cca353fe5fdde5613e2e147852525fa38222bf66 is not running
I0202 22:54:36.485496 708547 kic.go:466] successfully stopped kubernetes!
I0202 22:54:36.485551 708547 kic_runner.go:93] Run: pgrep kube-apiserver
I0202 22:54:36.485564 708547 kic_runner.go:114] Args: [docker exec --privileged offline-containerd-20220202225402-591014 pgrep kube-apiserver]
I0202 22:54:36.554557 708547 cli_runner.go:133] Run: docker container inspect offline-containerd-20220202225402-591014 --format={{.State.Status}}
I0202 22:54:39.589719 708547 cli_runner.go:133] Run: docker container inspect offline-containerd-20220202225402-591014 --format={{.State.Status}}
[... 57 further identical "docker container inspect offline-containerd-20220202225402-591014 --format={{.State.Status}}" polls at roughly 3-second intervals (22:54:42.632 through 22:57:32.696) elided ...]
I0202 22:57:35.735990 708547 cli_runner.go:133] Run: docker container inspect offline-containerd-20220202225402-591014 --format={{.State.Status}}
I0202 22:57:38.772221 708547 stop.go:59] stop err: Maximum number of retries (60) exceeded
I0202 22:57:38.772304 708547 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
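
The three-minute block of inspect calls above is the stop/demolish path polling the container's .State.Status roughly every 3 seconds; because the container never actually started, no stopped state ever appears, and the loop exhausts its budget, yielding "Maximum number of retries (60) exceeded". A sketch of such a bounded poll, with illustrative names:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForStopped polls the container status like the loop above and gives
// up after maxRetries attempts.
func waitForStopped(name string, maxRetries int) error {
	for i := 0; i < maxRetries; i++ {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format={{.State.Status}}").Output()
		status := strings.TrimSpace(string(out))
		if err == nil && (status == "exited" || status == "dead") {
			return nil
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("maximum number of retries (%d) exceeded", maxRetries)
}

func main() {
	if err := waitForStopped("offline-containerd-example", 60); err != nil {
		fmt.Println("stop err:", err)
	}
}
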
I0202 22:57:38.772895 708547 cli_runner.go:133] Run: docker container inspect offline-containerd-20220202225402-591014 --format={{.State.Status}}
W0202 22:57:38.808713 708547 delete.go:135] deletehost failed: Docker machine "offline-containerd-20220202225402-591014" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
I0202 22:57:38.808791 708547 cli_runner.go:133] Run: docker container inspect -f {{.Id}} offline-containerd-20220202225402-591014
I0202 22:57:38.842499 708547 cli_runner.go:133] Run: docker container inspect offline-containerd-20220202225402-591014 --format={{.State.Status}}
I0202 22:57:38.877615 708547 cli_runner.go:133] Run: docker exec --privileged -t offline-containerd-20220202225402-591014 /bin/bash -c "sudo init 0"
W0202 22:57:38.913684 708547 cli_runner.go:180] docker exec --privileged -t offline-containerd-20220202225402-591014 /bin/bash -c "sudo init 0" returned with exit code 1
I0202 22:57:38.913724 708547 oci.go:659] error shutdown offline-containerd-20220202225402-591014: docker exec --privileged -t offline-containerd-20220202225402-591014 /bin/bash -c "sudo init 0": exit status 1
stdout:
stderr:
Error response from daemon: Container dfc3dab0dc5627598c67b245cca353fe5fdde5613e2e147852525fa38222bf66 is not running
I0202 22:57:39.913912 708547 cli_runner.go:133] Run: docker container inspect offline-containerd-20220202225402-591014 --format={{.State.Status}}
I0202 22:57:39.950533 708547 oci.go:673] temporary error: container offline-containerd-20220202225402-591014 status is but expect it to be exited
I0202 22:57:39.950565 708547 oci.go:679] Successfully shutdown container offline-containerd-20220202225402-591014
I0202 22:57:39.950609 708547 cli_runner.go:133] Run: docker rm -f -v offline-containerd-20220202225402-591014
I0202 22:57:39.994030 708547 cli_runner.go:133] Run: docker container inspect -f {{.Id}} offline-containerd-20220202225402-591014
W0202 22:57:40.028683 708547 cli_runner.go:180] docker container inspect -f {{.Id}} offline-containerd-20220202225402-591014 returned with exit code 1
I0202 22:57:40.028761 708547 cli_runner.go:133] Run: docker network inspect offline-containerd-20220202225402-591014 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0202 22:57:40.067224 708547 cli_runner.go:180] docker network inspect offline-containerd-20220202225402-591014 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0202 22:57:40.067298 708547 network_create.go:254] running [docker network inspect offline-containerd-20220202225402-591014] to gather additional debugging logs...
I0202 22:57:40.067338 708547 cli_runner.go:133] Run: docker network inspect offline-containerd-20220202225402-591014
W0202 22:57:40.104010 708547 cli_runner.go:180] docker network inspect offline-containerd-20220202225402-591014 returned with exit code 1
I0202 22:57:40.104043 708547 network_create.go:257] error running [docker network inspect offline-containerd-20220202225402-591014]: docker network inspect offline-containerd-20220202225402-591014: exit status 1
stdout:
[]
stderr:
Error: No such network: offline-containerd-20220202225402-591014
I0202 22:57:40.104059 708547 network_create.go:259] output of [docker network inspect offline-containerd-20220202225402-591014]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: offline-containerd-20220202225402-591014
** /stderr **
W0202 22:57:40.104191 708547 delete.go:139] delete failed (probably ok) <nil>
I0202 22:57:40.104205 708547 fix.go:120] Sleeping 1 second for extra luck!
I0202 22:57:41.104342 708547 start.go:126] createHost starting for "" (driver="docker")
I0202 22:57:41.108522 708547 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
I0202 22:57:41.108695 708547 start.go:160] libmachine.API.Create for "offline-containerd-20220202225402-591014" (driver="docker")
I0202 22:57:41.108747 708547 client.go:168] LocalClient.Create starting
I0202 22:57:41.108843 708547 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem
I0202 22:57:41.108886 708547 main.go:130] libmachine: Decoding PEM data...
I0202 22:57:41.108909 708547 main.go:130] libmachine: Parsing certificate...
I0202 22:57:41.108985 708547 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem
I0202 22:57:41.109013 708547 main.go:130] libmachine: Decoding PEM data...
I0202 22:57:41.109026 708547 main.go:130] libmachine: Parsing certificate...
I0202 22:57:41.109327 708547 cli_runner.go:133] Run: docker network inspect offline-containerd-20220202225402-591014 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0202 22:57:41.146924 708547 cli_runner.go:180] docker network inspect offline-containerd-20220202225402-591014 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0202 22:57:41.146997 708547 network_create.go:254] running [docker network inspect offline-containerd-20220202225402-591014] to gather additional debugging logs...
I0202 22:57:41.147019 708547 cli_runner.go:133] Run: docker network inspect offline-containerd-20220202225402-591014
W0202 22:57:41.184695 708547 cli_runner.go:180] docker network inspect offline-containerd-20220202225402-591014 returned with exit code 1
I0202 22:57:41.184735 708547 network_create.go:257] error running [docker network inspect offline-containerd-20220202225402-591014]: docker network inspect offline-containerd-20220202225402-591014: exit status 1
stdout:
[]
stderr:
Error: No such network: offline-containerd-20220202225402-591014
I0202 22:57:41.184751 708547 network_create.go:259] output of [docker network inspect offline-containerd-20220202225402-591014]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: offline-containerd-20220202225402-591014
** /stderr **
I0202 22:57:41.184810 708547 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0202 22:57:41.231606 708547 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-bff1edb0cc2e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:59:04:78:d4}}
I0202 22:57:41.232673 708547 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-376c229d80e5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:49:90:84:68}}
I0202 22:57:41.233625 708547 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000132ae8 192.168.58.0:0xc000010680 192.168.67.0:0xc000010230] misses:0}
I0202 22:57:41.233675 708547 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0202 22:57:41.233692 708547 network_create.go:106] attempt to create docker network offline-containerd-20220202225402-591014 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I0202 22:57:41.233771 708547 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-containerd-20220202225402-591014
I0202 22:57:41.319088 708547 network_create.go:90] docker network offline-containerd-20220202225402-591014 192.168.67.0/24 created
I0202 22:57:41.319143 708547 kic.go:106] calculated static IP "192.168.67.2" for the "offline-containerd-20220202225402-591014" container
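
The subnet walk above (49, then 58, then 67 in the third octet) suggests candidates are tried in steps of 9 until one is not claimed by an existing bridge, and the node's static IP is the gateway plus one. A runnable sketch of that selection; the step size and starting octet are inferred from this log, not quoted from minikube's network.go.

package main

import "fmt"

// firstFreeSubnet walks candidate /24 subnets and returns the first one
// not present in the taken set, plus the derived gateway and node IPs.
func firstFreeSubnet(taken map[string]bool) (subnet, gateway, nodeIP string) {
	for octet := 49; octet < 255; octet += 9 { // 49 -> 58 -> 67, as in the log
		subnet = fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[subnet] {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		gateway = fmt.Sprintf("192.168.%d.1", octet)
		nodeIP = fmt.Sprintf("192.168.%d.2", octet) // gateway + 1
		return subnet, gateway, nodeIP
	}
	return "", "", ""
}

func main() {
	taken := map[string]bool{"192.168.49.0/24": true, "192.168.58.0/24": true}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.67.0/24 192.168.67.1 192.168.67.2
}
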
I0202 22:57:41.319224 708547 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
I0202 22:57:41.360092 708547 cli_runner.go:133] Run: docker volume create offline-containerd-20220202225402-591014 --label name.minikube.sigs.k8s.io=offline-containerd-20220202225402-591014 --label created_by.minikube.sigs.k8s.io=true
I0202 22:57:41.400788 708547 oci.go:102] Successfully created a docker volume offline-containerd-20220202225402-591014
I0202 22:57:41.400876 708547 cli_runner.go:133] Run: docker run --rm --name offline-containerd-20220202225402-591014-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-containerd-20220202225402-591014 --entrypoint /usr/bin/test -v offline-containerd-20220202225402-591014:/var gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -d /var/lib
I0202 22:57:42.103125 708547 oci.go:106] Successfully prepared a docker volume offline-containerd-20220202225402-591014
I0202 22:57:42.103205 708547 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime containerd
I0202 22:57:42.103233 708547 kic.go:179] Starting extracting preloaded images to volume ...
I0202 22:57:42.103326 708547 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-containerd-20220202225402-591014:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir
I0202 22:58:06.894536 708547 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-containerd-20220202225402-591014:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir: (24.791151848s)
I0202 22:58:06.894573 708547 kic.go:188] duration metric: took 24.791336 seconds to extract preloaded images to volume
W0202 22:58:06.894616 708547 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0202 22:58:06.894624 708547 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I0202 22:58:06.894674 708547 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
I0202 22:58:07.000184 708547 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname offline-containerd-20220202225402-591014 --name offline-containerd-20220202225402-591014 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-containerd-20220202225402-591014 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=offline-containerd-20220202225402-591014 --network offline-containerd-20220202225402-591014 --ip 192.168.67.2 --volume offline-containerd-20220202225402-591014:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b
I0202 22:58:07.596805 708547 cli_runner.go:133] Run: docker container inspect offline-containerd-20220202225402-591014 --format={{.State.Running}}
I0202 22:58:07.644647 708547 cli_runner.go:133] Run: docker container inspect offline-containerd-20220202225402-591014 --format={{.State.Status}}
I0202 22:58:07.685968 708547 cli_runner.go:133] Run: docker exec offline-containerd-20220202225402-591014 stat /var/lib/dpkg/alternatives/iptables
I0202 22:58:07.772976 708547 oci.go:281] the created container "offline-containerd-20220202225402-591014" has a running status.
I0202 22:58:07.773017 708547 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/offline-containerd-20220202225402-591014/id_rsa...
I0202 22:58:07.854693 708547 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/offline-containerd-20220202225402-591014/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0202 22:58:07.985672 708547 cli_runner.go:133] Run: docker container inspect offline-containerd-20220202225402-591014 --format={{.State.Status}}
I0202 22:58:08.071060 708547 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0202 22:58:08.071087 708547 kic_runner.go:114] Args: [docker exec --privileged offline-containerd-20220202225402-591014 chown docker:docker /home/docker/.ssh/authorized_keys]
I0202 22:58:08.207894 708547 cli_runner.go:133] Run: docker container inspect offline-containerd-20220202225402-591014 --format={{.State.Status}}
I0202 22:58:08.289129 708547 machine.go:88] provisioning docker machine ...
I0202 22:58:08.289181 708547 ubuntu.go:169] provisioning hostname "offline-containerd-20220202225402-591014"
I0202 22:58:08.289244 708547 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220202225402-591014
I0202 22:58:08.351676 708547 main.go:130] libmachine: Using SSH client type: native
I0202 22:58:08.352043 708547 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil> [] 0s} 127.0.0.1 49843 <nil> <nil>}
I0202 22:58:08.352070 708547 main.go:130] libmachine: About to run SSH command:
sudo hostname offline-containerd-20220202225402-591014 && echo "offline-containerd-20220202225402-591014" | sudo tee /etc/hostname
I0202 22:58:08.524886 708547 main.go:130] libmachine: SSH cmd err, output: <nil>: offline-containerd-20220202225402-591014
I0202 22:58:08.524975 708547 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220202225402-591014
I0202 22:58:08.579056 708547 main.go:130] libmachine: Using SSH client type: native
I0202 22:58:08.579239 708547 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil> [] 0s} 127.0.0.1 49843 <nil> <nil>}
I0202 22:58:08.579256 708547 main.go:130] libmachine: About to run SSH command:
if ! grep -xq '.*\soffline-containerd-20220202225402-591014' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 offline-containerd-20220202225402-591014/g' /etc/hosts;
else
echo '127.0.1.1 offline-containerd-20220202225402-591014' | sudo tee -a /etc/hosts;
fi
fi
I0202 22:58:08.724846 708547 main.go:130] libmachine: SSH cmd err, output: <nil>:
I0202 22:58:08.724892 708547 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube}
I0202 22:58:08.724918 708547 ubuntu.go:177] setting up certificates
I0202 22:58:08.724931 708547 provision.go:83] configureAuth start
I0202 22:58:08.724992 708547 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" offline-containerd-20220202225402-591014
I0202 22:58:08.769681 708547 provision.go:138] copyHostCerts
I0202 22:58:08.769757 708547 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.pem, removing ...
I0202 22:58:08.769768 708547 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.pem
I0202 22:58:08.769824 708547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.pem (1078 bytes)
I0202 22:58:08.769926 708547 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cert.pem, removing ...
I0202 22:58:08.769939 708547 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cert.pem
I0202 22:58:08.769974 708547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cert.pem (1123 bytes)
I0202 22:58:08.770055 708547 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/key.pem, removing ...
I0202 22:58:08.770060 708547 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/key.pem
I0202 22:58:08.770081 708547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/key.pem (1675 bytes)
I0202 22:58:08.770133 708547 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca-key.pem org=jenkins.offline-containerd-20220202225402-591014 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube offline-containerd-20220202225402-591014]
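
For readers unfamiliar with provision.go's cert step: the server certificate must carry every IP and DNS name the machine will be reached by, which is exactly the san=[...] list in the line above. A hedged crypto/x509 sketch of that generation; key size, lifetime and field choices here are assumptions for illustration, not minikube's exact parameters.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	// CA key stands in for ca-key.pem; error handling elided for key generation.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.offline-containerd-20220202225402-591014"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		// SANs from the log line: container IP, loopback, and the hostnames.
		IPAddresses: []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "offline-containerd-20220202225402-591014"},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}

	// Sign the server cert with the CA key, so clients holding ca.pem trust it.
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caTmpl, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("server cert: %d DER bytes", len(der))
}
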
I0202 22:58:08.944721 708547 provision.go:172] copyRemoteCerts
I0202 22:58:08.944789 708547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0202 22:58:08.944832 708547 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220202225402-591014
I0202 22:58:08.983553 708547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49843 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/offline-containerd-20220202225402-591014/id_rsa Username:docker}
I0202 22:58:09.081011 708547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0202 22:58:09.102329 708547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
I0202 22:58:09.121687 708547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0202 22:58:09.141537 708547 provision.go:86] duration metric: configureAuth took 416.585583ms
I0202 22:58:09.141574 708547 ubuntu.go:193] setting minikube options for container-runtime
I0202 22:58:09.141777 708547 config.go:176] Loaded profile config "offline-containerd-20220202225402-591014": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2
I0202 22:58:09.141793 708547 machine.go:91] provisioned docker machine in 852.636954ms
I0202 22:58:09.141800 708547 client.go:171] LocalClient.Create took 28.033042826s
I0202 22:58:09.141823 708547 start.go:168] duration metric: libmachine.API.Create for "offline-containerd-20220202225402-591014" took 28.033128288s
I0202 22:58:09.141837 708547 start.go:267] post-start starting for "offline-containerd-20220202225402-591014" (driver="docker")
I0202 22:58:09.141844 708547 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0202 22:58:09.141891 708547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0202 22:58:09.141938 708547 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220202225402-591014
I0202 22:58:09.180714 708547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49843 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/offline-containerd-20220202225402-591014/id_rsa Username:docker}
I0202 22:58:09.281260 708547 ssh_runner.go:195] Run: cat /etc/os-release
I0202 22:58:09.284243 708547 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0202 22:58:09.284274 708547 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0202 22:58:09.284286 708547 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0202 22:58:09.284294 708547 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0202 22:58:09.284305 708547 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/addons for local assets ...
I0202 22:58:09.284356 708547 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files for local assets ...
I0202 22:58:09.284439 708547 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/5910142.pem -> 5910142.pem in /etc/ssl/certs
I0202 22:58:09.284572 708547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0202 22:58:09.291913 708547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/5910142.pem --> /etc/ssl/certs/5910142.pem (1708 bytes)
I0202 22:58:09.311928 708547 start.go:270] post-start completed in 170.075038ms
I0202 22:58:09.312369 708547 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" offline-containerd-20220202225402-591014
I0202 22:58:09.359400 708547 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/config.json ...
I0202 22:58:09.359682 708547 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0202 22:58:09.359729 708547 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220202225402-591014
I0202 22:58:09.410505 708547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49843 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/offline-containerd-20220202225402-591014/id_rsa Username:docker}
I0202 22:58:09.510039 708547 start.go:129] duration metric: createHost completed in 28.405654592s
I0202 22:58:09.510112 708547 cli_runner.go:133] Run: docker container inspect offline-containerd-20220202225402-591014 --format={{.State.Status}}
W0202 22:58:09.547152 708547 fix.go:134] unexpected machine state, will restart: <nil>
I0202 22:58:09.547200 708547 machine.go:88] provisioning docker machine ...
I0202 22:58:09.547226 708547 ubuntu.go:169] provisioning hostname "offline-containerd-20220202225402-591014"
I0202 22:58:09.547302 708547 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220202225402-591014
I0202 22:58:09.586676 708547 main.go:130] libmachine: Using SSH client type: native
I0202 22:58:09.586877 708547 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil> [] 0s} 127.0.0.1 49843 <nil> <nil>}
I0202 22:58:09.586899 708547 main.go:130] libmachine: About to run SSH command:
sudo hostname offline-containerd-20220202225402-591014 && echo "offline-containerd-20220202225402-591014" | sudo tee /etc/hostname
I0202 22:58:09.738373 708547 main.go:130] libmachine: SSH cmd err, output: <nil>: offline-containerd-20220202225402-591014
I0202 22:58:09.738459 708547 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220202225402-591014
I0202 22:58:09.778767 708547 main.go:130] libmachine: Using SSH client type: native
I0202 22:58:09.778964 708547 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil> [] 0s} 127.0.0.1 49843 <nil> <nil>}
I0202 22:58:09.778996 708547 main.go:130] libmachine: About to run SSH command:
if ! grep -xq '.*\soffline-containerd-20220202225402-591014' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 offline-containerd-20220202225402-591014/g' /etc/hosts;
else
echo '127.0.1.1 offline-containerd-20220202225402-591014' | sudo tee -a /etc/hosts;
fi
fi
I0202 22:58:09.921304 708547 main.go:130] libmachine: SSH cmd err, output: <nil>:
I0202 22:58:09.921347 708547 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube}
I0202 22:58:09.921386 708547 ubuntu.go:177] setting up certificates
I0202 22:58:09.921405 708547 provision.go:83] configureAuth start
I0202 22:58:09.921468 708547 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" offline-containerd-20220202225402-591014
I0202 22:58:09.956314 708547 provision.go:138] copyHostCerts
I0202 22:58:09.956381 708547 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.pem, removing ...
I0202 22:58:09.956397 708547 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.pem
I0202 22:58:09.956452 708547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.pem (1078 bytes)
I0202 22:58:09.956574 708547 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cert.pem, removing ...
I0202 22:58:09.956591 708547 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cert.pem
I0202 22:58:09.956622 708547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cert.pem (1123 bytes)
I0202 22:58:09.956710 708547 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/key.pem, removing ...
I0202 22:58:09.956729 708547 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/key.pem
I0202 22:58:09.956749 708547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/key.pem (1675 bytes)
I0202 22:58:09.956791 708547 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca-key.pem org=jenkins.offline-containerd-20220202225402-591014 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube offline-containerd-20220202225402-591014]
I0202 22:58:10.103884 708547 provision.go:172] copyRemoteCerts
I0202 22:58:10.103944 708547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0202 22:58:10.103979 708547 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220202225402-591014
I0202 22:58:10.138114 708547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49843 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/offline-containerd-20220202225402-591014/id_rsa Username:docker}
I0202 22:58:10.241509 708547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0202 22:58:10.262355 708547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0202 22:58:10.284675 708547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
I0202 22:58:10.305360 708547 provision.go:86] duration metric: configureAuth took 383.923664ms
I0202 22:58:10.305391 708547 ubuntu.go:193] setting minikube options for container-runtime
I0202 22:58:10.305603 708547 config.go:176] Loaded profile config "offline-containerd-20220202225402-591014": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2
I0202 22:58:10.305611 708547 machine.go:91] provisioned docker machine in 758.404395ms
I0202 22:58:10.305620 708547 start.go:267] post-start starting for "offline-containerd-20220202225402-591014" (driver="docker")
I0202 22:58:10.305627 708547 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0202 22:58:10.305668 708547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0202 22:58:10.305707 708547 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220202225402-591014
I0202 22:58:10.343170 708547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49843 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/offline-containerd-20220202225402-591014/id_rsa Username:docker}
I0202 22:58:10.441178 708547 ssh_runner.go:195] Run: cat /etc/os-release
I0202 22:58:10.444541 708547 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0202 22:58:10.444578 708547 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0202 22:58:10.444590 708547 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0202 22:58:10.444597 708547 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0202 22:58:10.444610 708547 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/addons for local assets ...
I0202 22:58:10.444666 708547 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files for local assets ...
I0202 22:58:10.444840 708547 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/5910142.pem -> 5910142.pem in /etc/ssl/certs
I0202 22:58:10.444932 708547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0202 22:58:10.452752 708547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/5910142.pem --> /etc/ssl/certs/5910142.pem (1708 bytes)
I0202 22:58:10.474645 708547 start.go:270] post-start completed in 169.00889ms
I0202 22:58:10.474727 708547 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0202 22:58:10.474783 708547 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220202225402-591014
I0202 22:58:10.517190 708547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49843 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/offline-containerd-20220202225402-591014/id_rsa Username:docker}
I0202 22:58:10.613600 708547 fix.go:57] fixHost completed within 3m34.664746405s
I0202 22:58:10.613637 708547 start.go:80] releasing machines lock for "offline-containerd-20220202225402-591014", held for 3m34.664816673s
I0202 22:58:10.613739 708547 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" offline-containerd-20220202225402-591014
I0202 22:58:10.657882 708547 out.go:176] * Found network options:
I0202 22:58:10.659726 708547 out.go:176] - HTTP_PROXY=172.16.1.1:1
W0202 22:58:10.659878 708547 out.go:241] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.67.2).
! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.67.2).
I0202 22:58:10.661943 708547 out.go:176] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
I0202 22:58:10.662094 708547 ssh_runner.go:195] Run: sudo service crio stop
I0202 22:58:10.662156 708547 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220202225402-591014
I0202 22:58:10.662222 708547 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0202 22:58:10.662289 708547 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220202225402-591014
I0202 22:58:10.700359 708547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49843 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/offline-containerd-20220202225402-591014/id_rsa Username:docker}
I0202 22:58:10.704304 708547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49843 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/offline-containerd-20220202225402-591014/id_rsa Username:docker}
I0202 22:58:11.350863 708547 openrc.go:165] stop output:
I0202 22:58:11.350927 708547 ssh_runner.go:195] Run: sudo service crio status
I0202 22:58:11.377810 708547 docker.go:183] disabling docker service ...
I0202 22:58:11.377877 708547 ssh_runner.go:195] Run: sudo service docker.socket stop
I0202 22:58:11.800064 708547 openrc.go:165] stop output:
** stderr **
Failed to stop docker.socket.service: Unit docker.socket.service not loaded.
** /stderr **
E0202 22:58:11.800094 708547 docker.go:186] "Failed to stop" err=<
sudo service docker.socket stop: Process exited with status 5
stdout:
stderr:
Failed to stop docker.socket.service: Unit docker.socket.service not loaded.
> service="docker.socket"
I0202 22:58:11.800151 708547 ssh_runner.go:195] Run: sudo service docker.service stop
I0202 22:58:12.184665 708547 openrc.go:165] stop output:
** stderr **
Failed to stop docker.service.service: Unit docker.service.service not loaded.
** /stderr **
E0202 22:58:12.184697 708547 docker.go:189] "Failed to stop" err=<
sudo service docker.service stop: Process exited with status 5
stdout:
stderr:
Failed to stop docker.service.service: Unit docker.service.service not loaded.
> service="docker.service"
W0202 22:58:12.184712 708547 cruntime.go:283] disable failed: sudo service docker.service stop: Process exited with status 5
stdout:
stderr:
Failed to stop docker.service.service: Unit docker.service.service not loaded.
I0202 22:58:12.184763 708547 ssh_runner.go:195] Run: sudo service docker status
W0202 22:58:12.202214 708547 containerd.go:244] disableOthers: Docker is still active
I0202 22:58:12.202395 708547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0202 22:58:12.216976 708547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
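
The long argument above is the containerd config.toml, base64-encoded so it survives shell quoting and is decoded on the node by "base64 -d" before being written with sudo tee; the decoded file begins with version = 2 and root = "/var/lib/containerd". A small Go sketch of producing such a payload, using a shortened stand-in config rather than the full one from the log:

package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// Stand-in for the real config.toml; only its first two lines are shown here.
	cfg := "version = 2\nroot = \"/var/lib/containerd\"\n"
	enc := base64.StdEncoding.EncodeToString([]byte(cfg))
	// Emit the same shape of shell command the log shows.
	fmt.Printf("sudo mkdir -p /etc/containerd && printf %%s %q | base64 -d | sudo tee /etc/containerd/config.toml\n", enc)
}
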
I0202 22:58:12.233276 708547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0202 22:58:12.240546 708547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0202 22:58:12.248067 708547 ssh_runner.go:195] Run: sudo service containerd restart
I0202 22:58:12.334153 708547 openrc.go:152] restart output:
I0202 22:58:12.334192 708547 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
I0202 22:58:12.334243 708547 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0202 22:58:12.338566 708547 start.go:462] Will wait 60s for crictl version
I0202 22:58:12.338707 708547 ssh_runner.go:195] Run: sudo crictl version
I0202 22:58:12.372613 708547 retry.go:31] will retry after 9.246374971s: Temporary Error: sudo crictl version: Process exited with status 1
stdout:
stderr:
time="2022-02-02T22:58:12Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
I0202 22:58:21.622606 708547 ssh_runner.go:195] Run: sudo crictl version
I0202 22:58:21.646822 708547 start.go:471] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.4.12
RuntimeApiVersion: v1alpha2
I0202 22:58:21.646901 708547 ssh_runner.go:195] Run: containerd --version
I0202 22:58:21.667243 708547 ssh_runner.go:195] Run: containerd --version
I0202 22:58:21.689841 708547 out.go:176] * Preparing Kubernetes v1.23.2 on containerd 1.4.12 ...
I0202 22:58:21.691795 708547 out.go:176] - env HTTP_PROXY=172.16.1.1:1
I0202 22:58:21.691886 708547 cli_runner.go:133] Run: docker network inspect offline-containerd-20220202225402-591014 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0202 22:58:21.724808 708547 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
I0202 22:58:21.728508 708547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0202 22:58:21.741451 708547 out.go:176] - kubelet.housekeeping-interval=5m
I0202 22:58:21.743667 708547 out.go:176] - kubelet.cni-conf-dir=/etc/cni/net.mk
I0202 22:58:21.743767 708547 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime containerd
I0202 22:58:21.743843 708547 ssh_runner.go:195] Run: sudo crictl images --output json
I0202 22:58:21.770461 708547 containerd.go:612] all images are preloaded for containerd runtime.
I0202 22:58:21.770488 708547 containerd.go:526] Images already preloaded, skipping extraction
I0202 22:58:21.770530 708547 ssh_runner.go:195] Run: sudo crictl images --output json
I0202 22:58:21.796310 708547 containerd.go:612] all images are preloaded for containerd runtime.
I0202 22:58:21.796338 708547 cache_images.go:84] Images are preloaded, skipping loading
I0202 22:58:21.796398 708547 ssh_runner.go:195] Run: sudo crictl info
I0202 22:58:21.821982 708547 cni.go:93] Creating CNI manager for ""
I0202 22:58:21.822059 708547 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
I0202 22:58:21.822126 708547 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0202 22:58:21.822172 708547 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:offline-containerd-20220202225402-591014 NodeName:offline-containerd-20220202225402-591014 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0202 22:58:21.822339 708547 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.67.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "offline-containerd-20220202225402-591014"
kubeletExtraArgs:
node-ip: 192.168.67.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.23.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
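
The rendered kubeadm config above is one file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". A stdlib-only sketch that recovers each document's kind from such a file, using an abbreviated stand-in for the real content:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Abbreviated stand-in for the four-document kubeadm.yaml shown above.
	combined := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
	for i, doc := range strings.Split(combined, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Printf("document %d: %s\n", i+1, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}
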
I0202 22:58:21.822453 708547 kubeadm.go:931] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=offline-containerd-20220202225402-591014 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.23.2 ClusterName:offline-containerd-20220202225402-591014 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0202 22:58:21.822542 708547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.2
I0202 22:58:21.830499 708547 binaries.go:44] Found k8s binaries, skipping transfer
I0202 22:58:21.830660 708547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /var/lib/minikube /etc/init.d
I0202 22:58:21.838411 708547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (612 bytes)
I0202 22:58:21.852613 708547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0202 22:58:21.867307 708547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
I0202 22:58:21.881927 708547 ssh_runner.go:362] scp memory --> /var/lib/minikube/openrc-restart-wrapper.sh (233 bytes)
I0202 22:58:21.896509 708547 ssh_runner.go:362] scp memory --> /etc/init.d/kubelet (839 bytes)
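
"scp memory --> path" in the lines above means each payload was rendered in-process and streamed to the node with no local temp file. An illustrative equivalent using plain ssh with stdin; the user, port and key path are taken from the "new ssh client" lines of this run, with the key path shortened, and the unit contents abbreviated:

package main

import (
	"bytes"
	"log"
	"os/exec"
)

func main() {
	// Generated in memory, mirroring "scp memory --> /lib/systemd/system/kubelet.service (352 bytes)".
	unit := []byte("[Unit]\nWants=containerd.service\n")

	// Hypothetical shortened key path; the real one lives under the Jenkins workspace shown in the log.
	key := "/home/jenkins/.minikube/machines/offline-containerd-20220202225402-591014/id_rsa"
	cmd := exec.Command("ssh", "-i", key, "-p", "49843", "docker@127.0.0.1",
		"sudo tee /lib/systemd/system/kubelet.service >/dev/null")
	cmd.Stdin = bytes.NewReader(unit) // stream the in-memory file as the remote command's stdin
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
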
I0202 22:58:21.910848 708547 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I0202 22:58:21.914351 708547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0202 22:58:21.925052 708547 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014 for IP: 192.168.67.2
I0202 22:58:21.925202 708547 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.key
I0202 22:58:21.925252 708547 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.key
I0202 22:58:21.925372 708547 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/client.key
I0202 22:58:21.925393 708547 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/client.crt with IP's: []
I0202 22:58:22.040289 708547 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/client.crt ...
I0202 22:58:22.040330 708547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/client.crt: {Name:mkba39a0d10ac091372224bb3446bd004ebdd0a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0202 22:58:22.040578 708547 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/client.key ...
I0202 22:58:22.040606 708547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/client.key: {Name:mkfc834398bf46ddc8475fb1941abeab97f6d751 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0202 22:58:22.040712 708547 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/apiserver.key.c7fa3a9e
I0202 22:58:22.040728 708547 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0202 22:58:22.188810 708547 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/apiserver.crt.c7fa3a9e ...
I0202 22:58:22.188841 708547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/apiserver.crt.c7fa3a9e: {Name:mkc084db94e1b20d81f85d30f74d9015e9d5967d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0202 22:58:22.189026 708547 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/apiserver.key.c7fa3a9e ...
I0202 22:58:22.189045 708547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/apiserver.key.c7fa3a9e: {Name:mk77d240d195334100dd453c9554dc5d2aaec49b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0202 22:58:22.189178 708547 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/apiserver.crt
I0202 22:58:22.189265 708547 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/apiserver.key
I0202 22:58:22.189336 708547 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/proxy-client.key
I0202 22:58:22.189358 708547 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/proxy-client.crt with IP's: []
I0202 22:58:22.254149 708547 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/proxy-client.crt ...
I0202 22:58:22.254183 708547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/proxy-client.crt: {Name:mk2f965efdb06705732765ae19b2ff4be687d263 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0202 22:58:22.254391 708547 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/proxy-client.key ...
I0202 22:58:22.254412 708547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/proxy-client.key: {Name:mkb84b4c9c81de7e15783cde2949c4a3f4395fc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0202 22:58:22.254611 708547 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/591014.pem (1338 bytes)
W0202 22:58:22.254657 708547 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/591014_empty.pem, impossibly tiny 0 bytes
I0202 22:58:22.254665 708547 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca-key.pem (1675 bytes)
I0202 22:58:22.254687 708547 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem (1078 bytes)
I0202 22:58:22.254719 708547 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem (1123 bytes)
I0202 22:58:22.254742 708547 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/key.pem (1675 bytes)
I0202 22:58:22.254780 708547 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/5910142.pem (1708 bytes)
I0202 22:58:22.255873 708547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0202 22:58:22.278000 708547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0202 22:58:22.298663 708547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0202 22:58:22.320160 708547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0202 22:58:22.339488 708547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0202 22:58:22.362492 708547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0202 22:58:22.383735 708547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0202 22:58:22.403675 708547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0202 22:58:22.424497 708547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/5910142.pem --> /usr/share/ca-certificates/5910142.pem (1708 bytes)
I0202 22:58:22.444441 708547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0202 22:58:22.464826 708547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/591014.pem --> /usr/share/ca-certificates/591014.pem (1338 bytes)
I0202 22:58:22.485848 708547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0202 22:58:22.499522 708547 ssh_runner.go:195] Run: openssl version
I0202 22:58:22.505092 708547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5910142.pem && ln -fs /usr/share/ca-certificates/5910142.pem /etc/ssl/certs/5910142.pem"
I0202 22:58:22.513297 708547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5910142.pem
I0202 22:58:22.516815 708547 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb 2 21:50 /usr/share/ca-certificates/5910142.pem
I0202 22:58:22.516872 708547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5910142.pem
I0202 22:58:22.521928 708547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5910142.pem /etc/ssl/certs/3ec20f2e.0"
I0202 22:58:22.529894 708547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0202 22:58:22.538216 708547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0202 22:58:22.541643 708547 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 2 21:42 /usr/share/ca-certificates/minikubeCA.pem
I0202 22:58:22.541702 708547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0202 22:58:22.546943 708547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0202 22:58:22.555102 708547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/591014.pem && ln -fs /usr/share/ca-certificates/591014.pem /etc/ssl/certs/591014.pem"
I0202 22:58:22.563175 708547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/591014.pem
I0202 22:58:22.566725 708547 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb 2 21:50 /usr/share/ca-certificates/591014.pem
I0202 22:58:22.566786 708547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/591014.pem
I0202 22:58:22.572035 708547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/591014.pem /etc/ssl/certs/51391683.0"
I0202 22:58:22.580275 708547 kubeadm.go:390] StartCluster: {Name:offline-containerd-20220202225402-591014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:offline-containerd-20220202225402-591014 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
I0202 22:58:22.580382 708547 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0202 22:58:22.580430 708547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0202 22:58:22.610393 708547 cri.go:87] found id: ""
I0202 22:58:22.610469 708547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0202 22:58:22.618535 708547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0202 22:58:22.625871 708547 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0202 22:58:22.625934 708547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0202 22:58:22.633368 708547 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0202 22:58:22.633414 708547 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0202 22:58:22.955891 708547 out.go:203] - Generating certificates and keys ...
I0202 22:58:26.107700 708547 out.go:203] - Booting up control plane ...
I0202 22:58:34.162811 708547 out.go:203] - Configuring RBAC rules ...
I0202 22:58:34.579655 708547 cni.go:93] Creating CNI manager for ""
I0202 22:58:34.579687 708547 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
I0202 22:58:34.581825 708547 out.go:176] * Configuring CNI (Container Networking Interface) ...
I0202 22:58:34.581907 708547 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0202 22:58:34.589372 708547 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.2/kubectl ...
I0202 22:58:34.589397 708547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I0202 22:58:34.604925 708547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0202 22:58:40.553594 708547 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (5.948613406s)
I0202 22:58:40.553658 708547 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0202 22:58:40.554098 708547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 22:58:40.554176 708547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=e7ecaa98a6d1dab5935ea4b7778c6e187f5bde82 minikube.k8s.io/name=offline-containerd-20220202225402-591014 minikube.k8s.io/updated_at=2022_02_02T22_58_40_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0202 22:58:40.585780 708547 ops.go:34] apiserver oom_adj: -16
I0202 22:58:40.679099 708547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 22:58:41.294059 708547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 22:58:41.793161 708547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 22:58:42.293279 708547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 22:58:42.793574 708547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 22:58:43.293987 708547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 22:58:43.793099 708547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 22:58:44.293712 708547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 22:58:44.793360 708547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 22:58:45.294265 708547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 22:58:45.793318 708547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 22:58:46.293748 708547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 22:58:46.793677 708547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 22:58:47.293704 708547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 22:58:47.793889 708547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 22:58:48.293726 708547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 22:58:48.612250 708547 kubeadm.go:1007] duration metric: took 8.058227424s to wait for elevateKubeSystemPrivileges.
I0202 22:58:48.612290 708547 kubeadm.go:392] StartCluster complete in 26.032029474s
I0202 22:58:48.612315 708547 settings.go:142] acquiring lock: {Name:mk7b7d70ff6f69fc29c2978f2ac26aca3df1260d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0202 22:58:48.612434 708547 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
I0202 22:58:48.614257 708547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig: {Name:mk517363bda8f9dbd36a7a8d18db65eef4735455 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0202 22:58:48.615389 708547 kapi.go:59] client config for offline-containerd-20220202225402-591014: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x15dae40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0202 22:58:49.442983 708547 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "offline-containerd-20220202225402-591014" rescaled to 1
I0202 22:58:49.443060 708547 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0202 22:58:49.443127 708547 addons.go:415] enableAddons start: toEnable=map[], additional=[]
I0202 22:58:49.627550 708547 out.go:176] * Verifying Kubernetes components...
I0202 22:58:49.627624 708547 addons.go:65] Setting storage-provisioner=true in profile "offline-containerd-20220202225402-591014"
I0202 22:58:49.627645 708547 ssh_runner.go:195] Run: sudo service kubelet status
I0202 22:58:49.627648 708547 addons.go:153] Setting addon storage-provisioner=true in "offline-containerd-20220202225402-591014"
I0202 22:58:49.443365 708547 config.go:176] Loaded profile config "offline-containerd-20220202225402-591014": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2
I0202 22:58:49.627661 708547 addons.go:65] Setting default-storageclass=true in profile "offline-containerd-20220202225402-591014"
I0202 22:58:49.627677 708547 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "offline-containerd-20220202225402-591014"
I0202 22:58:49.443398 708547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
W0202 22:58:49.627653 708547 addons.go:165] addon storage-provisioner should already be in state true
I0202 22:58:49.627757 708547 host.go:66] Checking if "offline-containerd-20220202225402-591014" exists ...
I0202 22:58:49.628241 708547 cli_runner.go:133] Run: docker container inspect offline-containerd-20220202225402-591014 --format={{.State.Status}}
I0202 22:58:49.663035 708547 cli_runner.go:133] Run: docker container inspect offline-containerd-20220202225402-591014 --format={{.State.Status}}
I0202 22:58:49.690861 708547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.67.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0202 22:58:49.691348 708547 kapi.go:59] client config for offline-containerd-20220202225402-591014: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x15dae40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0202 22:58:49.702804 708547 kapi.go:59] client config for offline-containerd-20220202225402-591014: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/offline-containerd-20220202225402-591014/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x15dae40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0202 22:58:49.798888 708547 out.go:176] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0202 22:58:49.799427 708547 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0202 22:58:49.799449 708547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0202 22:58:49.799510 708547 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220202225402-591014
I0202 22:58:49.800843 708547 node_ready.go:35] waiting up to 6m0s for node "offline-containerd-20220202225402-591014" to be "Ready" ...
I0202 22:58:49.805765 708547 addons.go:153] Setting addon default-storageclass=true in "offline-containerd-20220202225402-591014"
W0202 22:58:49.805799 708547 addons.go:165] addon default-storageclass should already be in state true
I0202 22:58:49.805837 708547 host.go:66] Checking if "offline-containerd-20220202225402-591014" exists ...
I0202 22:58:49.806415 708547 cli_runner.go:133] Run: docker container inspect offline-containerd-20220202225402-591014 --format={{.State.Status}}
I0202 22:58:49.851047 708547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49843 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/offline-containerd-20220202225402-591014/id_rsa Username:docker}
I0202 22:58:49.858662 708547 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
I0202 22:58:49.858700 708547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0202 22:58:49.858762 708547 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220202225402-591014
I0202 22:58:49.900149 708547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49843 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/offline-containerd-20220202225402-591014/id_rsa Username:docker}
I0202 22:58:49.986386 708547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0202 22:58:50.029014 708547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0202 22:58:50.134738 708547 start.go:777] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
I0202 22:58:51.030142 708547 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
I0202 22:58:51.030206 708547 addons.go:417] enableAddons completed in 1.587089875s
I0202 22:58:51.808709 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:58:53.808891 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:58:55.809453 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:58:57.809661 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:59:00.309507 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:59:02.310092 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:59:04.810084 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:59:07.310715 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:59:09.811118 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:59:12.309614 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:59:14.309868 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:59:16.809233 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:59:18.809659 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:59:21.310193 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:59:23.811606 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:59:26.310283 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:59:28.809692 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:59:30.811215 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:59:33.308616 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:59:35.309069 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:59:37.809989 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:59:40.309652 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:59:42.810058 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:59:45.308923 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:59:47.309366 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:59:49.309580 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:59:51.809452 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:59:53.809561 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:59:55.809887 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 22:59:58.309253 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:00:00.309641 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:00:02.317296 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:00:04.809183 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:00:06.809821 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:00:09.308869 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:00:11.308943 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:00:13.310329 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:00:15.809523 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:00:18.309012 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:00:20.309629 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:00:22.809812 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:00:25.308935 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:00:27.309359 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:00:29.309581 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:00:31.810133 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:00:34.309976 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:00:36.310027 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:00:38.377898 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:00:40.810083 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:00:42.810261 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:00:45.308817 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:00:47.808940 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:00:49.809061 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:00:51.809384 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:00:54.309970 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:00:56.809758 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:00:59.308809 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:01:01.311589 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:01:03.810277 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:01:06.309586 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:01:08.809395 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:01:10.809700 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:01:12.809887 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:01:15.309878 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:01:17.809233 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:01:19.810235 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:01:22.309238 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:01:24.310371 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:01:26.810306 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:01:29.309760 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:01:31.808748 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:01:33.810098 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:01:36.309093 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:01:38.309933 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:01:40.808908 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:01:43.765025 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:01:45.809005 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:01:47.810299 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:01:50.310244 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:01:52.810286 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:01:55.309168 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:01:57.809361 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:00.314674 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:02.808673 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:04.810479 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:07.311518 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:09.312680 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:11.809659 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:14.309899 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:16.809189 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:18.809524 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:21.309409 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:23.310006 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:25.809619 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:27.810529 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:30.310097 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:32.310780 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:34.810139 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:36.813347 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:39.316620 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:41.808866 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:43.810202 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:46.309940 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:48.808980 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:49.812870 708547 node_ready.go:38] duration metric: took 4m0.01198768s waiting for node "offline-containerd-20220202225402-591014" to be "Ready" ...
I0202 23:02:49.816319 708547 out.go:176]
W0202 23:02:49.816567 708547 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
W0202 23:02:49.816584 708547 out.go:241] *
W0202 23:02:49.817580 708547 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0202 23:02:49.819837 708547 out.go:176]
** /stderr **
aab_offline_test.go:59: out/minikube-linux-amd64 start -p offline-containerd-20220202225402-591014 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker --container-runtime=containerd failed: exit status 80
panic.go:642: *** TestOffline FAILED at 2022-02-02 23:02:49.868598443 +0000 UTC m=+4853.778378964
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======> post-mortem[TestOffline]: docker inspect <======
helpers_test.go:232: (dbg) Run: docker inspect offline-containerd-20220202225402-591014
helpers_test.go:236: (dbg) docker inspect offline-containerd-20220202225402-591014:
-- stdout --
[
{
"Id": "a2d1fe9864f2509c06675ec3440db0cfc74fc472fa164acd8a67d95a58609386",
"Created": "2022-02-02T22:58:07.039249519Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 717443,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-02-02T22:58:07.585498441Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
"ResolvConfPath": "/var/lib/docker/containers/a2d1fe9864f2509c06675ec3440db0cfc74fc472fa164acd8a67d95a58609386/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/a2d1fe9864f2509c06675ec3440db0cfc74fc472fa164acd8a67d95a58609386/hostname",
"HostsPath": "/var/lib/docker/containers/a2d1fe9864f2509c06675ec3440db0cfc74fc472fa164acd8a67d95a58609386/hosts",
"LogPath": "/var/lib/docker/containers/a2d1fe9864f2509c06675ec3440db0cfc74fc472fa164acd8a67d95a58609386/a2d1fe9864f2509c06675ec3440db0cfc74fc472fa164acd8a67d95a58609386-json.log",
"Name": "/offline-containerd-20220202225402-591014",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"offline-containerd-20220202225402-591014:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "offline-containerd-20220202225402-591014",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [
{
"PathOnHost": "/dev/fuse",
"PathInContainer": "/dev/fuse",
"CgroupPermissions": "rwm"
}
],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/5d9c9390c524069471512f994f9407986066a772f343630c86b1b98f355aa38d-init/diff:/var/lib/docker/overlay2/c534a67c1f4c5b9f1d9e8fda7874029a0b48633454e02a0b6c810aa9cf70f36b/diff:/var/lib/docker/overlay2/de7a9fc119b7ade7c9f3b24bea5108d56a2268af67d230db0cccbbf8190cc502/diff:/var/lib/docker/overlay2/7dde3b10db81d42c3216201bd8676123e0e230bfa173b31e053af8fa206a800e/diff:/var/lib/docker/overlay2/97454ec819da74666dcc72f8e5b4653886c44f3699cecf13a018ecf95d935273/diff:/var/lib/docker/overlay2/f63f64cf22a9591cbcafeb284a265882ea890544f2b083b747047bcf47a45f11/diff:/var/lib/docker/overlay2/85a2123491b64d14f7493b3bc6a519d9df28cc3680976b9e5a7ecf783be5a304/diff:/var/lib/docker/overlay2/2ec15009b968007c3be12325edca4c12821b52fc616ace66d456038c7f9b8b26/diff:/var/lib/docker/overlay2/c45e9285ae7064da0be5f07d29b035ad54f64e529db00ffe6f2dce6ad543dccf/diff:/var/lib/docker/overlay2/d95000893ca52351a7804b110df43c1656b4bc7fbe4940c9e0ff2744233b1f93/diff:/var/lib/docker/overlay2/38e3bf
07bc49f88e1e7214644d31095a6178f5156c6a5ff3fbcaf8676336a52d/diff:/var/lib/docker/overlay2/fe45a9a5036ea98542c5fd7808e66f8b6ffe9d0c545de700f7cbe39d6280e0e3/diff:/var/lib/docker/overlay2/8cd90b043b30c66cbeacbe4ca21881d85717c6e16f1fe627e2eecdeaf461da6c/diff:/var/lib/docker/overlay2/7221b66a068ae65d9a1c089fe5c60e7230aa950b3d47eb7e90a1776c2a45931b/diff:/var/lib/docker/overlay2/86ce10ecfb90cfd8e1f7d722715353b823ddfa58b93eefa2c1efffff899916ee/diff:/var/lib/docker/overlay2/12ea1316fade88c6e1d2272d6dbeb629875ff96f4dd6f6ead0ff6a4e7e3a3067/diff:/var/lib/docker/overlay2/0fe276873166ac11302901f1bb2c547c71e23fac2197e717e789b6ce7d1eb3ce/diff:/var/lib/docker/overlay2/81bd863acb94ff576a1f1f046ee87693930a12209e915a09f2a720c77d1357f3/diff:/var/lib/docker/overlay2/8875d87537d51e137e2fbff41fe61dadeefa46057e0c93d257ec4754b5ba95f3/diff:/var/lib/docker/overlay2/86e75ae4c708ad2e78d21326ca3b7f552b7903d8c7a0a66e086feaacd3a8c002/diff:/var/lib/docker/overlay2/c688ba4459332409ab260296b96d566d186d307b81308fb3b59724fe6fed3c7d/diff:/var/lib/d
ocker/overlay2/45c4b946a887c9de4f592591b3e65041083baba75d5bed9b0995339de1cafd02/diff:/var/lib/docker/overlay2/3927af3e4dc5e4af94ac76cbe5a3b9759b80e10646aabc655f5c9b56ae13e650/diff:/var/lib/docker/overlay2/11dd3efafe062edcbd4dd837545d570514492786b08abf77f57fe300d225eaa5/diff:/var/lib/docker/overlay2/b8e0f4d5369eb0c633092d9e21501d99ccb27fafaad6667209e1d988c337ae9b/diff:/var/lib/docker/overlay2/c605476b22b48b266163ca6ee5fcb52b15458f1d7a77458390109b6e795f72f3/diff:/var/lib/docker/overlay2/19b31873fbc8858f6b143ea8dc98572f32110c70c29732285bdea89dd6163287/diff:/var/lib/docker/overlay2/171afcaaccc199dc3a653d1bf42453959e7c6bd7bc1e8340f94454b5f5fb6d2d/diff:/var/lib/docker/overlay2/2000b0fa8d671b6e34ec28228440eb7e6a5ddcd6b2cb657836d8522e3fb4454e/diff:/var/lib/docker/overlay2/1294bf56384681b45b611fe07d7b32c6c87a67eee568452d1f8123bf319f7a4c/diff:/var/lib/docker/overlay2/3199bb762e90968c1998250099f781c0e8954f4d62df39a80a384b9208149421/diff:/var/lib/docker/overlay2/036a54cd701c84ee783285b30f6cf33b92858f1a11ebd5db15cffffea12
adcd9/diff:/var/lib/docker/overlay2/552a166f9d1f0d79a7c45c08db35c6db9366e5d8e1321b62ffc3d8a9ea458797/diff:/var/lib/docker/overlay2/98f34e7af675026bd6dd9aaf43b996b5a992e04fe33f238d16d38e7ccad2fb91/diff:/var/lib/docker/overlay2/c1ed97ca4514bd37713c82daa6ba805285f12b5aa51599e9f915b8755fbf55a7/diff:/var/lib/docker/overlay2/86b8ba00550442fc509be298ff07205e99a171d56a70a2e1735cdfd4a9ab745e/diff:/var/lib/docker/overlay2/109092c1424dfee38a4eb63e39a9bb5d3de4b011f84e30bd021bf73ee2f47aa1/diff:/var/lib/docker/overlay2/87fd60ac9e5dd5a701def2c0a9e1c64463bd4779be06a990c86e5561a4349312/diff:/var/lib/docker/overlay2/145545da7fd578da8a41adec4cec3a9b347b4e609c900ee9365ae7735c7859c6/diff:/var/lib/docker/overlay2/e6a20d33de71269404b7fdc1fa6f915dcbc5a337e0625ef4536cb942bb55c19b/diff:/var/lib/docker/overlay2/38829872ccac03d0dae45b93d1f34e61fd179b6fc19f22fd27d4431ee773090f/diff:/var/lib/docker/overlay2/3c7fa7cc3994b36d6b984ebe7c46b535829f02814130486c4ecf5a0976f4f7dc/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
"MergedDir": "/var/lib/docker/overlay2/5d9c9390c524069471512f994f9407986066a772f343630c86b1b98f355aa38d/merged",
"UpperDir": "/var/lib/docker/overlay2/5d9c9390c524069471512f994f9407986066a772f343630c86b1b98f355aa38d/diff",
"WorkDir": "/var/lib/docker/overlay2/5d9c9390c524069471512f994f9407986066a772f343630c86b1b98f355aa38d/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "offline-containerd-20220202225402-591014",
"Source": "/var/lib/docker/volumes/offline-containerd-20220202225402-591014/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "offline-containerd-20220202225402-591014",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "offline-containerd-20220202225402-591014",
"name.minikube.sigs.k8s.io": "offline-containerd-20220202225402-591014",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "ddb1adad2b029b638556f00c0cc95235d52fcc68e333c468b290f91ee9d2f4e7",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49843"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49842"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49839"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49841"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49840"
}
]
},
"SandboxKey": "/var/run/docker/netns/ddb1adad2b02",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"offline-containerd-20220202225402-591014": {
"IPAMConfig": {
"IPv4Address": "192.168.67.2"
},
"Links": null,
"Aliases": [
"a2d1fe9864f2",
"offline-containerd-20220202225402-591014"
],
"NetworkID": "aa70d7cdfafdd614759e18cac3876de152c0b5419be8fe1de10240124584c8e8",
"EndpointID": "ff7e8e25452d48ade3f1fd52783ae3ad56744d9db4c2cdb9fd333ec9be9b8e86",
"Gateway": "192.168.67.1",
"IPAddress": "192.168.67.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:43:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p offline-containerd-20220202225402-591014 -n offline-containerd-20220202225402-591014
helpers_test.go:245: <<< TestOffline FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======> post-mortem[TestOffline]: minikube logs <======
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p offline-containerd-20220202225402-591014 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p offline-containerd-20220202225402-591014 logs -n 25: (1.132272254s)
helpers_test.go:253: TestOffline logs:
-- stdout --
*
* ==> Audit <==
* |---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
| delete | -p false-20220202225403-591014 | false-20220202225403-591014 | jenkins | v1.25.1 | Wed, 02 Feb 2022 22:54:03 UTC | Wed, 02 Feb 2022 22:54:04 UTC |
| start | -p | cert-expiration-20220202225404-591014 | jenkins | v1.25.1 | Wed, 02 Feb 2022 22:54:04 UTC | Wed, 02 Feb 2022 22:55:09 UTC |
| | cert-expiration-20220202225404-591014 | | | | | |
| | --memory=2048 --cert-expiration=3m | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p | cert-expiration-20220202225404-591014 | jenkins | v1.25.1 | Wed, 02 Feb 2022 22:58:09 UTC | Wed, 02 Feb 2022 22:58:25 UTC |
| | cert-expiration-20220202225404-591014 | | | | | |
| | --memory=2048 --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | cert-expiration-20220202225404-591014 | jenkins | v1.25.1 | Wed, 02 Feb 2022 22:58:25 UTC | Wed, 02 Feb 2022 22:58:28 UTC |
| | cert-expiration-20220202225404-591014 | | | | | |
| start | -p | force-systemd-env-20220202225402-591014 | jenkins | v1.25.1 | Wed, 02 Feb 2022 22:54:02 UTC | Wed, 02 Feb 2022 22:58:42 UTC |
| | force-systemd-env-20220202225402-591014 | | | | | |
| | --memory=2048 --alsologtostderr | | | | | |
| | -v=5 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p | NoKubernetes-20220202225402-591014 | jenkins | v1.25.1 | Wed, 02 Feb 2022 22:54:03 UTC | Wed, 02 Feb 2022 22:58:42 UTC |
| | NoKubernetes-20220202225402-591014 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| -p | force-systemd-env-20220202225402-591014 | force-systemd-env-20220202225402-591014 | jenkins | v1.25.1 | Wed, 02 Feb 2022 22:58:42 UTC | Wed, 02 Feb 2022 22:58:42 UTC |
| | ssh cat /etc/containerd/config.toml | | | | | |
| delete | -p | force-systemd-env-20220202225402-591014 | jenkins | v1.25.1 | Wed, 02 Feb 2022 22:58:42 UTC | Wed, 02 Feb 2022 22:58:45 UTC |
| | force-systemd-env-20220202225402-591014 | | | | | |
| start | -p | NoKubernetes-20220202225402-591014 | jenkins | v1.25.1 | Wed, 02 Feb 2022 22:58:42 UTC | Wed, 02 Feb 2022 22:58:56 UTC |
| | NoKubernetes-20220202225402-591014 | | | | | |
| | --no-kubernetes --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | NoKubernetes-20220202225402-591014 | jenkins | v1.25.1 | Wed, 02 Feb 2022 22:58:57 UTC | Wed, 02 Feb 2022 22:58:59 UTC |
| | NoKubernetes-20220202225402-591014 | | | | | |
| delete | -p | NoKubernetes-20220202225402-591014 | jenkins | v1.25.1 | Wed, 02 Feb 2022 22:59:03 UTC | Wed, 02 Feb 2022 22:59:09 UTC |
| | NoKubernetes-20220202225402-591014 | | | | | |
| start | -p | force-systemd-flag-20220202225828-591014 | jenkins | v1.25.1 | Wed, 02 Feb 2022 22:58:28 UTC | Wed, 02 Feb 2022 22:59:17 UTC |
| | force-systemd-flag-20220202225828-591014 | | | | | |
| | --memory=2048 --force-systemd | | | | | |
| | --alsologtostderr -v=5 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| -p | force-systemd-flag-20220202225828-591014 | force-systemd-flag-20220202225828-591014 | jenkins | v1.25.1 | Wed, 02 Feb 2022 22:59:17 UTC | Wed, 02 Feb 2022 22:59:17 UTC |
| | ssh cat /etc/containerd/config.toml | | | | | |
| delete | -p | force-systemd-flag-20220202225828-591014 | jenkins | v1.25.1 | Wed, 02 Feb 2022 22:59:17 UTC | Wed, 02 Feb 2022 22:59:20 UTC |
| | force-systemd-flag-20220202225828-591014 | | | | | |
| start | -p | cert-options-20220202225845-591014 | jenkins | v1.25.1 | Wed, 02 Feb 2022 22:58:45 UTC | Wed, 02 Feb 2022 22:59:22 UTC |
| | cert-options-20220202225845-591014 | | | | | |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| -p | cert-options-20220202225845-591014 | cert-options-20220202225845-591014 | jenkins | v1.25.1 | Wed, 02 Feb 2022 22:59:22 UTC | Wed, 02 Feb 2022 22:59:22 UTC |
| | ssh openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p | cert-options-20220202225845-591014 | jenkins | v1.25.1 | Wed, 02 Feb 2022 22:59:22 UTC | Wed, 02 Feb 2022 22:59:23 UTC |
| | cert-options-20220202225845-591014 | | | | | |
| | -- sudo cat | | | | | |
| | /etc/kubernetes/admin.conf | | | | | |
| delete | -p | cert-options-20220202225845-591014 | jenkins | v1.25.1 | Wed, 02 Feb 2022 22:59:23 UTC | Wed, 02 Feb 2022 22:59:27 UTC |
| | cert-options-20220202225845-591014 | | | | | |
| start | -p | kubernetes-upgrade-20220202225927-591014 | jenkins | v1.25.1 | Wed, 02 Feb 2022 22:59:27 UTC | Wed, 02 Feb 2022 23:00:35 UTC |
| | kubernetes-upgrade-20220202225927-591014 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| | --alsologtostderr -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| stop | -p | kubernetes-upgrade-20220202225927-591014 | jenkins | v1.25.1 | Wed, 02 Feb 2022 23:00:35 UTC | Wed, 02 Feb 2022 23:01:00 UTC |
| | kubernetes-upgrade-20220202225927-591014 | | | | | |
| start | -p | stopped-upgrade-20220202225909-591014 | jenkins | v1.25.1 | Wed, 02 Feb 2022 23:00:06 UTC | Wed, 02 Feb 2022 23:01:01 UTC |
| | stopped-upgrade-20220202225909-591014 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| logs | -p | stopped-upgrade-20220202225909-591014 | jenkins | v1.25.1 | Wed, 02 Feb 2022 23:01:01 UTC | Wed, 02 Feb 2022 23:01:02 UTC |
| | stopped-upgrade-20220202225909-591014 | | | | | |
| delete | -p | stopped-upgrade-20220202225909-591014 | jenkins | v1.25.1 | Wed, 02 Feb 2022 23:01:02 UTC | Wed, 02 Feb 2022 23:01:05 UTC |
| | stopped-upgrade-20220202225909-591014 | | | | | |
| start | -p | missing-upgrade-20220202225920-591014 | jenkins | v1.25.1 | Wed, 02 Feb 2022 23:00:40 UTC | Wed, 02 Feb 2022 23:01:47 UTC |
| | missing-upgrade-20220202225920-591014 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | missing-upgrade-20220202225920-591014 | jenkins | v1.25.1 | Wed, 02 Feb 2022 23:01:48 UTC | Wed, 02 Feb 2022 23:01:51 UTC |
| | missing-upgrade-20220202225920-591014 | | | | | |
|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
*
* ==> Last Start <==
* Log file created at: 2022/02/02 23:02:08
Running on machine: ubuntu-20-agent-4
Binary: Built with gc go1.17.6 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0202 23:02:08.882369 758020 out.go:297] Setting OutFile to fd 1 ...
I0202 23:02:08.882886 758020 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0202 23:02:08.882907 758020 out.go:310] Setting ErrFile to fd 2...
I0202 23:02:08.882967 758020 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0202 23:02:08.883228 758020 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/bin
I0202 23:02:08.883684 758020 out.go:304] Setting JSON to false
I0202 23:02:08.886033 758020 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":24280,"bootTime":1643818649,"procs":687,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0202 23:02:08.886136 758020 start.go:122] virtualization: kvm guest
I0202 23:02:08.888899 758020 out.go:176] * [running-upgrade-20220202230105-591014] minikube v1.25.1 on Ubuntu 20.04 (kvm/amd64)
I0202 23:02:08.891011 758020 out.go:176] - MINIKUBE_LOCATION=13251
I0202 23:02:08.889103 758020 notify.go:174] Checking for updates...
I0202 23:02:08.892796 758020 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0202 23:02:08.894584 758020 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
I0202 23:02:08.896429 758020 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube
I0202 23:02:08.898452 758020 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64
I0202 23:02:08.899080 758020 config.go:176] Loaded profile config "running-upgrade-20220202230105-591014": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0202 23:02:08.901826 758020 out.go:176] * Kubernetes 1.23.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.2
I0202 23:02:08.901888 758020 driver.go:344] Setting default libvirt URI to qemu:///system
I0202 23:02:08.948642 758020 docker.go:132] docker version: linux-20.10.12
I0202 23:02:08.948778 758020 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0202 23:02:09.053439 758020 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:true NGoroutines:64 SystemTime:2022-02-02 23:02:08.982960839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
I0202 23:02:09.053553 758020 docker.go:237] overlay module found
I0202 23:02:09.056511 758020 out.go:176] * Using the docker driver based on existing profile
I0202 23:02:09.056547 758020 start.go:281] selected driver: docker
I0202 23:02:09.056555 758020 start.go:798] validating driver "docker" against &{Name:running-upgrade-20220202230105-591014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:running-upgrade-20220202230105-591014 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.167 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false}
I0202 23:02:09.056694 758020 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
W0202 23:02:09.056741 758020 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0202 23:02:09.056771 758020 out.go:241] ! Your cgroup does not allow setting memory.
I0202 23:02:09.058888 758020 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
I0202 23:02:09.059617 758020 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0202 23:02:09.169592 758020 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:true NGoroutines:64 SystemTime:2022-02-02 23:02:09.096268369 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1029-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33663639552 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
W0202 23:02:09.169790 758020 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0202 23:02:09.169823 758020 out.go:241] ! Your cgroup does not allow setting memory.
I0202 23:02:09.172522 758020 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
I0202 23:02:09.172655 758020 cni.go:93] Creating CNI manager for ""
I0202 23:02:09.172674 758020 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
I0202 23:02:09.172688 758020 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I0202 23:02:09.172699 758020 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I0202 23:02:09.172712 758020 start_flags.go:302] config:
{Name:running-upgrade-20220202230105-591014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:running-upgrade-20220202230105-591014 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.167 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false}
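[editor's note] The cni.go lines above record a decision, not just a message: with the docker driver and a non-docker runtime, minikube recommends kindnet and auto-sets kubelet.cni-conf-dir=/etc/cni/net.mk. A minimal reconstruction of that decision under those assumptions; this is not the real cni.go implementation:

    // recommend.go: mirror the "docker driver + containerd runtime found,
    // recommending kindnet" choice visible in the log. Illustrative only.
    package main

    import "fmt"

    func recommendCNI(driver, runtime string) (cni string, extraConfig map[string]string) {
        extraConfig = map[string]string{}
        if driver == "docker" && runtime != "docker" {
            // containerd/cri-o inside the kic container need an explicit CNI,
            // and the kubelet must read its conf from /etc/cni/net.mk.
            cni = "kindnet"
            extraConfig["kubelet.cni-conf-dir"] = "/etc/cni/net.mk"
        }
        return cni, extraConfig
    }

    func main() {
        cni, extra := recommendCNI("docker", "containerd")
        fmt.Println("recommended CNI:", cni, "extra-config:", extra)
    }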
I0202 23:02:09.174909 758020 out.go:176] * Starting control plane node running-upgrade-20220202230105-591014 in cluster running-upgrade-20220202230105-591014
I0202 23:02:09.174965 758020 cache.go:120] Beginning downloading kic base image for docker with containerd
I0202 23:02:09.176750 758020 out.go:176] * Pulling base image ...
I0202 23:02:09.176899 758020 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0202 23:02:09.176992 758020 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.20.0-containerd-overlay2-amd64.tar.lz4
I0202 23:02:09.177030 758020 cache.go:57] Caching tarball of preloaded images
I0202 23:02:09.177024 758020 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 in local docker daemon
I0202 23:02:09.177484 758020 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.20.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0202 23:02:09.177515 758020 cache.go:60] Finished verifying existence of preloaded tar for v1.20.0 on containerd
I0202 23:02:09.177716 758020 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/running-upgrade-20220202230105-591014/config.json ...
I0202 23:02:09.232764 758020 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 in local docker daemon, skipping pull
I0202 23:02:09.232799 758020 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 exists in daemon, skipping load
I0202 23:02:09.232820 758020 cache.go:208] Successfully downloaded all kic artifacts
I0202 23:02:09.232860 758020 start.go:313] acquiring machines lock for running-upgrade-20220202230105-591014: {Name:mk76c77930c71a231408f0996666e214d8e4296c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0202 23:02:09.232981 758020 start.go:317] acquired machines lock for "running-upgrade-20220202230105-591014" in 91.38µs
I0202 23:02:09.233009 758020 start.go:93] Skipping create...Using existing machine configuration
I0202 23:02:09.233016 758020 fix.go:55] fixHost starting:
I0202 23:02:09.233316 758020 cli_runner.go:133] Run: docker container inspect running-upgrade-20220202230105-591014 --format={{.State.Status}}
I0202 23:02:09.273961 758020 fix.go:108] recreateIfNeeded on running-upgrade-20220202230105-591014: state=Running err=<nil>
W0202 23:02:09.274003 758020 fix.go:134] unexpected machine state, will restart: <nil>
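[editor's note] fixHost above inspects the existing container's state (state=Running here) before deciding whether to reuse, restart, or recreate it. A sketch of that check; the exec call and branch structure are assumptions for illustration, not minikube's fix.go:

    // fixhost.go: read a container's state and branch on it the way the
    // "Skipping create...Using existing machine configuration" path does.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            "--format", "{{.State.Status}}", name).Output()
        if err != nil {
            return "", err // e.g. "No such container": it must be recreated
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        state, err := containerState("running-upgrade-20220202230105-591014")
        switch {
        case err != nil:
            fmt.Println("container missing, will recreate:", err)
        case state == "running":
            fmt.Println("reusing running container") // the path taken in this log
        default:
            fmt.Println("container is", state, "- restarting")
        }
    }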
I0202 23:02:06.198016 748668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0202 23:02:06.697799 748668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0202 23:02:07.198607 748668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0202 23:02:07.698427 748668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0202 23:02:08.197825 748668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0202 23:02:08.697926 748668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0202 23:02:09.198112 748668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0202 23:02:09.698373 748668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0202 23:02:10.198039 748668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0202 23:02:10.698058 748668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0202 23:02:09.312680 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:11.809659 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:09.277069 758020 out.go:176] * Updating the running docker "running-upgrade-20220202230105-591014" container ...
I0202 23:02:09.277114 758020 machine.go:88] provisioning docker machine ...
I0202 23:02:09.277141 758020 ubuntu.go:169] provisioning hostname "running-upgrade-20220202230105-591014"
I0202 23:02:09.277202 758020 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220202230105-591014
I0202 23:02:09.318544 758020 main.go:130] libmachine: Using SSH client type: native
I0202 23:02:09.318806 758020 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil> [] 0s} 127.0.0.1 49898 <nil> <nil>}
I0202 23:02:09.318831 758020 main.go:130] libmachine: About to run SSH command:
sudo hostname running-upgrade-20220202230105-591014 && echo "running-upgrade-20220202230105-591014" | sudo tee /etc/hostname
I0202 23:02:09.459871 758020 main.go:130] libmachine: SSH cmd err, output: <nil>: running-upgrade-20220202230105-591014
I0202 23:02:09.459953 758020 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220202230105-591014
I0202 23:02:09.495387 758020 main.go:130] libmachine: Using SSH client type: native
I0202 23:02:09.495586 758020 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0d20] 0x7a3e00 <nil> [] 0s} 127.0.0.1 49898 <nil> <nil>}
I0202 23:02:09.495612 758020 main.go:130] libmachine: About to run SSH command:
if ! grep -xq '.*\srunning-upgrade-20220202230105-591014' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-20220202230105-591014/g' /etc/hosts;
else
echo '127.0.1.1 running-upgrade-20220202230105-591014' | sudo tee -a /etc/hosts;
fi
fi
I0202 23:02:09.629803 758020 main.go:130] libmachine: SSH cmd err, output: <nil>:
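[editor's note] The two SSH commands above first set the hostname, then keep /etc/hosts consistent: rewrite the 127.0.1.1 entry if one exists, otherwise append one. A sketch that merely composes those commands for a given machine name (running them over SSH is out of scope; names and structure are illustrative):

    // hostnamecmds.go: build the hostname-provisioning shell commands seen
    // in the log for an arbitrary machine name.
    package main

    import "fmt"

    func hostnameCmds(name string) (setHostname, fixHosts string) {
        // %q quotes the name exactly as the logged echo command does.
        setHostname = fmt.Sprintf(
            "sudo hostname %[1]s && echo %[1]q | sudo tee /etc/hostname", name)
        // Same if/else as the second SSH command above.
        fixHosts = fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
      fi
    fi`, name)
        return setHostname, fixHosts
    }

    func main() {
        a, b := hostnameCmds("running-upgrade-20220202230105-591014")
        fmt.Println(a)
        fmt.Println(b)
    }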
I0202 23:02:09.629843 758020 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube}
I0202 23:02:09.629881 758020 ubuntu.go:177] setting up certificates
I0202 23:02:09.629895 758020 provision.go:83] configureAuth start
I0202 23:02:09.629969 758020 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-20220202230105-591014
I0202 23:02:09.670237 758020 provision.go:138] copyHostCerts
I0202 23:02:09.670312 758020 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.pem, removing ...
I0202 23:02:09.670320 758020 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.pem
I0202 23:02:09.670391 758020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.pem (1078 bytes)
I0202 23:02:09.670472 758020 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cert.pem, removing ...
I0202 23:02:09.670487 758020 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cert.pem
I0202 23:02:09.670510 758020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cert.pem (1123 bytes)
I0202 23:02:09.670559 758020 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/key.pem, removing ...
I0202 23:02:09.670568 758020 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/key.pem
I0202 23:02:09.670587 758020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/key.pem (1675 bytes)
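[editor's note] copyHostCerts above follows a remove-then-copy pattern ("found ..., removing ..." then "cp:") so a stale PEM never survives a re-provision. A minimal standalone version of that idiom; the paths and 0644 mode are assumptions:

    // replacefile.go: idempotently replace dst with the contents of src,
    // mirroring exec_runner.go's rm-then-cp sequence in the log.
    package main

    import (
        "fmt"
        "os"
    )

    func replaceFile(src, dst string) error {
        data, err := os.ReadFile(src)
        if err != nil {
            return err
        }
        // Remove any existing copy first; a missing file is not an error.
        if err := os.Remove(dst); err != nil && !os.IsNotExist(err) {
            return err
        }
        return os.WriteFile(dst, data, 0o644)
    }

    func main() {
        if err := replaceFile("certs/ca.pem", "ca.pem"); err != nil {
            fmt.Println("copy failed:", err)
        }
    }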
I0202 23:02:09.670640 758020 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-20220202230105-591014 san=[192.168.59.167 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-20220202230105-591014]
I0202 23:02:09.771216 758020 provision.go:172] copyRemoteCerts
I0202 23:02:09.771294 758020 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0202 23:02:09.771353 758020 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220202230105-591014
I0202 23:02:09.804442 758020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49898 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/running-upgrade-20220202230105-591014/id_rsa Username:docker}
I0202 23:02:09.900177 758020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0202 23:02:09.920762 758020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
I0202 23:02:09.942305 758020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0202 23:02:09.963622 758020 provision.go:86] duration metric: configureAuth took 333.698157ms
I0202 23:02:09.963666 758020 ubuntu.go:193] setting minikube options for container-runtime
I0202 23:02:09.963895 758020 config.go:176] Loaded profile config "running-upgrade-20220202230105-591014": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0202 23:02:09.963916 758020 machine.go:91] provisioned docker machine in 686.795402ms
I0202 23:02:09.963926 758020 start.go:267] post-start starting for "running-upgrade-20220202230105-591014" (driver="docker")
I0202 23:02:09.963939 758020 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0202 23:02:09.963998 758020 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0202 23:02:09.964046 758020 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220202230105-591014
I0202 23:02:09.999520 758020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49898 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/running-upgrade-20220202230105-591014/id_rsa Username:docker}
I0202 23:02:10.092804 758020 ssh_runner.go:195] Run: cat /etc/os-release
I0202 23:02:10.095825 758020 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0202 23:02:10.095851 758020 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0202 23:02:10.095859 758020 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0202 23:02:10.095865 758020 info.go:137] Remote host: Ubuntu 20.04.1 LTS
I0202 23:02:10.095876 758020 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/addons for local assets ...
I0202 23:02:10.095931 758020 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files for local assets ...
I0202 23:02:10.095997 758020 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/5910142.pem -> 5910142.pem in /etc/ssl/certs
I0202 23:02:10.096069 758020 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0202 23:02:10.103307 758020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/5910142.pem --> /etc/ssl/certs/5910142.pem (1708 bytes)
I0202 23:02:10.125710 758020 start.go:270] post-start completed in 161.760007ms
I0202 23:02:10.125790 758020 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0202 23:02:10.125840 758020 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220202230105-591014
I0202 23:02:10.164595 758020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49898 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/running-upgrade-20220202230105-591014/id_rsa Username:docker}
I0202 23:02:10.254886 758020 fix.go:57] fixHost completed within 1.021862466s
I0202 23:02:10.254922 758020 start.go:80] releasing machines lock for "running-upgrade-20220202230105-591014", held for 1.021925972s
I0202 23:02:10.255020 758020 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-20220202230105-591014
I0202 23:02:10.291357 758020 ssh_runner.go:195] Run: systemctl --version
I0202 23:02:10.291410 758020 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0202 23:02:10.291415 758020 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220202230105-591014
I0202 23:02:10.291483 758020 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220202230105-591014
I0202 23:02:10.333505 758020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49898 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/running-upgrade-20220202230105-591014/id_rsa Username:docker}
I0202 23:02:10.337661 758020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49898 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/machines/running-upgrade-20220202230105-591014/id_rsa Username:docker}
I0202 23:02:10.452457 758020 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0202 23:02:10.465985 758020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0202 23:02:10.478505 758020 docker.go:183] disabling docker service ...
I0202 23:02:10.478573 758020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0202 23:02:10.500306 758020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0202 23:02:10.510993 758020 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0202 23:02:10.605188 758020 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0202 23:02:10.699692 758020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0202 23:02:10.710436 758020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0202 23:02:10.741836 758020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My4yIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBz2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
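[editor's note] The containerd config is shipped as one base64 blob and decoded on the node via `base64 -d | sudo tee /etc/containerd/config.toml` (the blob's first bytes, "dmVyc2lvbiA9IDIK", decode to `version = 2`). A sketch of the same decode step in Go; the truncated blob below is a placeholder for the full string in the log line above:

    // decodeconfig.go: decode a base64-encoded config and write it to disk,
    // the local equivalent of the base64 -d | sudo tee pipeline above.
    package main

    import (
        "encoding/base64"
        "fmt"
        "os"
    )

    func main() {
        blob := "dmVyc2lvbiA9IDIK" // decodes to "version = 2\n"; full blob omitted
        data, err := base64.StdEncoding.DecodeString(blob)
        if err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        // minikube writes this to /etc/containerd/config.toml via sudo tee;
        // here we just write a local file.
        if err := os.WriteFile("config.toml", data, 0o644); err != nil {
            fmt.Println("write failed:", err)
        }
    }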
I0202 23:02:10.759335 758020 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0202 23:02:10.766819 758020 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0202 23:02:10.774059 758020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0202 23:02:10.868430 758020 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0202 23:02:10.979844 758020 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
I0202 23:02:10.979906 758020 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0202 23:02:10.984288 758020 start.go:462] Will wait 60s for crictl version
I0202 23:02:10.984454 758020 ssh_runner.go:195] Run: sudo crictl version
I0202 23:02:11.020443 758020 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
stdout:
stderr:
time="2022-02-02T23:02:11Z" level=fatal msg="getting the runtime version failed: rpc error: code = Unknown desc = server is not initialized yet"
I0202 23:02:11.197805 748668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0202 23:02:11.218960 748668 api_server.go:71] duration metric: took 15.536660696s to wait for apiserver process to appear ...
I0202 23:02:11.218996 748668 api_server.go:87] waiting for apiserver healthz status ...
I0202 23:02:11.219017 748668 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0202 23:02:13.346491 748668 api_server.go:266] https://192.168.49.2:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0202 23:02:13.346528 748668 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0202 23:02:13.847269 748668 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0202 23:02:13.853431 748668 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0202 23:02:13.853460 748668 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0202 23:02:14.346988 748668 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0202 23:02:14.352712 748668 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0202 23:02:14.352738 748668 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0202 23:02:14.847039 748668 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0202 23:02:14.857705 748668 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
ok
I0202 23:02:14.865831 748668 api_server.go:140] control plane version: v1.23.3-rc.0
I0202 23:02:14.865864 748668 api_server.go:130] duration metric: took 3.646859214s to wait for apiserver health ...
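[editor's note] The healthz loop above treats 403 (anonymous access denied before RBAC bootstraps) and 500 (post-start hooks still failing) as "not ready" and polls roughly twice per second until /healthz returns 200 "ok". A sketch of that loop; the InsecureSkipVerify transport is an assumption so the example can reach a self-signed apiserver cert:

    // waithealthz.go: poll an apiserver /healthz endpoint until it is healthy.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                ok := resp.StatusCode == http.StatusOK
                resp.Body.Close()
                if ok {
                    return nil // body is "ok"
                }
                fmt.Println("healthz returned", resp.StatusCode, "- retrying")
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        if err := waitHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }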
I0202 23:02:14.865875 748668 cni.go:93] Creating CNI manager for ""
I0202 23:02:14.865883 748668 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
I0202 23:02:14.869063 748668 out.go:176] * Configuring CNI (Container Networking Interface) ...
I0202 23:02:14.869141 748668 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0202 23:02:14.874470 748668 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.3-rc.0/kubectl ...
I0202 23:02:14.874498 748668 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I0202 23:02:14.892847 748668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0202 23:02:14.309899 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:16.809189 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:18.898404 754752 ssh_runner.go:195] Run: sudo crictl version
I0202 23:02:18.938894 754752 start.go:471] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.4.12
RuntimeApiVersion: v1alpha2
I0202 23:02:18.938949 754752 ssh_runner.go:195] Run: containerd --version
I0202 23:02:18.962962 754752 ssh_runner.go:195] Run: containerd --version
I0202 23:02:18.990991 754752 out.go:176] * Preparing Kubernetes v1.23.2 on containerd 1.4.12 ...
I0202 23:02:18.991099 754752 cli_runner.go:133] Run: docker network inspect pause-20220202230153-591014 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0202 23:02:19.034744 754752 ssh_runner.go:195] Run: grep 192.168.58.1 host.minikube.internal$ /etc/hosts
I0202 23:02:19.039103 754752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0202 23:02:19.054494 754752 out.go:176] - kubelet.housekeeping-interval=5m
I0202 23:02:15.785853 748668 system_pods.go:43] waiting for kube-system pods to appear ...
I0202 23:02:15.793863 748668 system_pods.go:59] 4 kube-system pods found
I0202 23:02:15.793903 748668 system_pods.go:61] "coredns-5644d7b6d9-cdg4d" [2ed5c0f9-22b0-42e8-a44d-dfc7784c81c0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
I0202 23:02:15.793912 748668 system_pods.go:61] "kindnet-p7ph2" [bfe669bb-6882-4c82-8ea8-96ba064cb8d2] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0202 23:02:15.793919 748668 system_pods.go:61] "kube-proxy-sxl4m" [56e24ec5-4d06-4e97-ae3b-239d3f15ea61] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0202 23:02:15.793926 748668 system_pods.go:61] "storage-provisioner" [8be03f4f-aa76-4147-a805-0dd670c52df9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
I0202 23:02:15.793934 748668 system_pods.go:74] duration metric: took 8.054225ms to wait for pod list to return data ...
I0202 23:02:15.793945 748668 node_conditions.go:102] verifying NodePressure condition ...
I0202 23:02:15.797391 748668 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
I0202 23:02:15.797480 748668 node_conditions.go:123] node cpu capacity is 8
I0202 23:02:15.797503 748668 node_conditions.go:105] duration metric: took 3.553346ms to run NodePressure ...
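[editor's note] system_pods.go above lists kube-system pods and reports each pod's phase; everything is Pending here because the node still carries a NotReady taint. A compact client-go equivalent of that listing, assuming client-go is available in the module and using a placeholder kubeconfig path:

    // listpods.go: list kube-system pods and print name + phase.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("%q %s\n", p.Name, p.Status.Phase) // e.g. Pending until the node is Ready
        }
    }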
I0202 23:02:15.797522 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0202 23:02:15.931034 748668 retry.go:31] will retry after 116.456µs: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
[addons] Migrating CoreDNS Corefile
stderr:
W0202 23:02:15.881149 1688 warnings.go:70] spec.template.spec.nodeSelector[beta.kubernetes.io/os]: deprecated since v1.14; use "kubernetes.io/os" instead
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:15.932243 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0202 23:02:16.044557 748668 retry.go:31] will retry after 140.657µs: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:16.045764 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0202 23:02:16.154935 748668 retry.go:31] will retry after 208.043µs: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:16.156136 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0202 23:02:16.264268 748668 retry.go:31] will retry after 400.553µs: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:16.265444 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0202 23:02:16.375833 748668 retry.go:31] will retry after 286.353µs: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:16.376978 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0202 23:02:16.485151 748668 retry.go:31] will retry after 498.544µs: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:16.486327 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0202 23:02:16.593213 748668 retry.go:31] will retry after 679.985µs: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:16.594403 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0202 23:02:16.702867 748668 retry.go:31] will retry after 1.368432ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:16.705144 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0202 23:02:16.811342 748668 retry.go:31] will retry after 2.601877ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:16.814579 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0202 23:02:16.928305 748668 retry.go:31] will retry after 5.05007ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:16.933503 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0202 23:02:17.043245 748668 retry.go:31] will retry after 4.118802ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:17.048428 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0202 23:02:17.154194 748668 retry.go:31] will retry after 7.617463ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:17.162470 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0202 23:02:17.271387 748668 retry.go:31] will retry after 10.613995ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:17.282579 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0202 23:02:17.387150 748668 retry.go:31] will retry after 18.856469ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:17.406390 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0202 23:02:17.515419 748668 retry.go:31] will retry after 22.859037ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:17.538618 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0202 23:02:17.644889 748668 retry.go:31] will retry after 34.729413ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:17.680138 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0202 23:02:17.786490 748668 retry.go:31] will retry after 77.447024ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:17.864793 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0202 23:02:17.971180 748668 retry.go:31] will retry after 70.796181ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:18.042414 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0202 23:02:18.150344 748668 retry.go:31] will retry after 103.923319ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:18.254622 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0202 23:02:18.366998 748668 retry.go:31] will retry after 190.841051ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:18.558249 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0202 23:02:18.662920 748668 retry.go:31] will retry after 356.026016ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:19.019123 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0202 23:02:19.135922 748668 retry.go:31] will retry after 679.594431ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:19.815698 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0202 23:02:19.933200 748668 retry.go:31] will retry after 593.393847ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:20.526868 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0202 23:02:20.638173 748668 retry.go:31] will retry after 894.544307ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
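The retry.go:31 lines above show the kubeadm addon phase being retried under an exponential, jittered backoff: the delay roughly doubles from ~400µs upward while the kube-dns Service creation keeps failing validation. A minimal Go sketch of that backoff loop (illustrative only; the function names below are not minikube's actual retry API):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs fn until it succeeds or attempts run out,
// sleeping a jittered, roughly doubling delay between failures.
func retryWithBackoff(fn func() error, base time.Duration, maxAttempts int) error {
	delay := base
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		// Jitter the sleep into [delay/2, delay) so parallel retries spread out.
		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 5 {
			return errors.New("Process exited with status 1")
		}
		return nil
	}, 400*time.Microsecond, 20)
	fmt.Println("final result:", err)
}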
I0202 23:02:22.068915 758020 ssh_runner.go:195] Run: sudo crictl version
I0202 23:02:22.130903 758020 start.go:471] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.4.3
RuntimeApiVersion: v1alpha2
I0202 23:02:22.130974 758020 ssh_runner.go:195] Run: containerd --version
I0202 23:02:22.231800 758020 ssh_runner.go:195] Run: containerd --version
I0202 23:02:22.330943 758020 out.go:176] * Preparing Kubernetes v1.20.0 on containerd 1.4.3 ...
I0202 23:02:22.331033 758020 cli_runner.go:133] Run: docker network inspect running-upgrade-20220202230105-591014 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0202 23:02:22.372172 758020 ssh_runner.go:195] Run: grep 192.168.59.1 host.minikube.internal$ /etc/hosts
I0202 23:02:18.809524 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:21.309409 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:19.056611 754752 out.go:176] - kubelet.cni-conf-dir=/etc/cni/net.mk
I0202 23:02:19.056733 754752 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime containerd
I0202 23:02:19.056806 754752 ssh_runner.go:195] Run: sudo crictl images --output json
I0202 23:02:19.090663 754752 containerd.go:612] all images are preloaded for containerd runtime.
I0202 23:02:19.090679 754752 containerd.go:526] Images already preloaded, skipping extraction
I0202 23:02:19.090731 754752 ssh_runner.go:195] Run: sudo crictl images --output json
I0202 23:02:19.125871 754752 containerd.go:612] all images are preloaded for containerd runtime.
I0202 23:02:19.125888 754752 cache_images.go:84] Images are preloaded, skipping loading
I0202 23:02:19.125963 754752 ssh_runner.go:195] Run: sudo crictl info
I0202 23:02:19.157377 754752 cni.go:93] Creating CNI manager for ""
I0202 23:02:19.157392 754752 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
I0202 23:02:19.157404 754752 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0202 23:02:19.157415 754752 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20220202230153-591014 NodeName:pause-20220202230153-591014 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0202 23:02:19.157531 754752 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.58.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "pause-20220202230153-591014"
kubeletExtraArgs:
node-ip: 192.168.58.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.23.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
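The config block above is what kubeadm.go:162 renders from the options struct logged a few lines earlier; the evictionHard thresholds of "0%" deliberately disable disk-pressure eviction inside the minikube node. A minimal, hypothetical sketch of such a render step using text/template (the struct and template fragment here are invented for illustration, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// opts is a stand-in for the kubeadm options struct logged at kubeadm.go:158.
type opts struct {
	PodSubnet    string
	CgroupDriver string
}

const fragment = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: {{.CgroupDriver}}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("cfg").Parse(fragment))
	// Values copied from the log above.
	if err := t.Execute(os.Stdout, opts{
		PodSubnet:    "10.244.0.0/16",
		CgroupDriver: "cgroupfs",
	}); err != nil {
		panic(err)
	}
}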
I0202 23:02:19.157610 754752 kubeadm.go:931] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=pause-20220202230153-591014 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.23.2 ClusterName:pause-20220202230153-591014 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0202 23:02:19.157652 754752 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.2
I0202 23:02:19.167028 754752 binaries.go:44] Found k8s binaries, skipping transfer
I0202 23:02:19.167089 754752 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0202 23:02:19.176065 754752 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (599 bytes)
I0202 23:02:19.191852 754752 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0202 23:02:19.208119 754752 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2056 bytes)
I0202 23:02:19.224845 754752 ssh_runner.go:195] Run: grep 192.168.58.2 control-plane.minikube.internal$ /etc/hosts
I0202 23:02:19.228862 754752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
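The bash one-liner above updates /etc/hosts idempotently: strip any stale control-plane.minikube.internal line, append the fresh mapping, stage the result in a temp file, and only then copy it into place so the file is never left truncated mid-write. The same idea in plain Go (paths and the entry are hard-coded for illustration):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.58.2\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any stale mapping for the control-plane alias.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	out := strings.Join(kept, "\n") + "\n" + entry + "\n"
	// Stage in a temp file first so a crash never leaves /etc/hosts empty.
	tmp := fmt.Sprintf("/tmp/h.%d", os.Getpid())
	if err := os.WriteFile(tmp, []byte(out), 0644); err != nil {
		panic(err)
	}
	fmt.Println("now copy", tmp, "over /etc/hosts with elevated privileges")
}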
I0202 23:02:19.240319 754752 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/pause-20220202230153-591014 for IP: 192.168.58.2
I0202 23:02:19.240443 754752 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.key
I0202 23:02:19.240560 754752 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.key
I0202 23:02:19.240635 754752 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/pause-20220202230153-591014/client.key
I0202 23:02:19.240647 754752 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/pause-20220202230153-591014/client.crt with IP's: []
I0202 23:02:19.404078 754752 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/pause-20220202230153-591014/client.crt ...
I0202 23:02:19.404100 754752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/pause-20220202230153-591014/client.crt: {Name:mk52183aec1ca6a5c89267d93eb333578918a3d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0202 23:02:19.404316 754752 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/pause-20220202230153-591014/client.key ...
I0202 23:02:19.404330 754752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/pause-20220202230153-591014/client.key: {Name:mk0d6c67aaed64ea19fe26f594f795d040d24221 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0202 23:02:19.404509 754752 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/pause-20220202230153-591014/apiserver.key.cee25041
I0202 23:02:19.404526 754752 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/pause-20220202230153-591014/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0202 23:02:19.792016 754752 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/pause-20220202230153-591014/apiserver.crt.cee25041 ...
I0202 23:02:19.792035 754752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/pause-20220202230153-591014/apiserver.crt.cee25041: {Name:mk58bac3c6442dfa0e21d84178178a7a50b3a4fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0202 23:02:19.792248 754752 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/pause-20220202230153-591014/apiserver.key.cee25041 ...
I0202 23:02:19.792256 754752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/pause-20220202230153-591014/apiserver.key.cee25041: {Name:mkc4397525e69973ce07bd0837b4ab346e6795cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0202 23:02:19.792343 754752 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/pause-20220202230153-591014/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/pause-20220202230153-591014/apiserver.crt
I0202 23:02:19.792392 754752 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/pause-20220202230153-591014/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/pause-20220202230153-591014/apiserver.key
I0202 23:02:19.792444 754752 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/pause-20220202230153-591014/proxy-client.key
I0202 23:02:19.792457 754752 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/pause-20220202230153-591014/proxy-client.crt with IP's: []
I0202 23:02:19.960002 754752 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/pause-20220202230153-591014/proxy-client.crt ...
I0202 23:02:19.960031 754752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/pause-20220202230153-591014/proxy-client.crt: {Name:mkec38b4e82dcb441e25826ac6f0091b78804981 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0202 23:02:19.960267 754752 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/pause-20220202230153-591014/proxy-client.key ...
I0202 23:02:19.960277 754752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/pause-20220202230153-591014/proxy-client.key: {Name:mk6fe15b740d6142614a8e226dedfcd50603c0db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
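certs.go:302/crypto.go:68 mint certificates for specific IP SANs; the apiserver cert above covers [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]. A compact crypto/x509 sketch of the same mechanics (self-signed here for brevity, whereas minikube signs with its CA key):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the new certificate (minikube writes this as apiserver.key).
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SANs logged above for the apiserver certificate.
		IPAddresses: []net.IP{
			net.ParseIP("192.168.58.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	// Self-signed: the template doubles as parent. minikube instead passes
	// its CA certificate and CA key here so the cert chains to minikubeCA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}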
I0202 23:02:19.960492 754752 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/591014.pem (1338 bytes)
W0202 23:02:19.960539 754752 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/591014_empty.pem, impossibly tiny 0 bytes
I0202 23:02:19.960547 754752 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca-key.pem (1675 bytes)
I0202 23:02:19.960570 754752 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem (1078 bytes)
I0202 23:02:19.960600 754752 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem (1123 bytes)
I0202 23:02:19.960627 754752 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/key.pem (1675 bytes)
I0202 23:02:19.960676 754752 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/5910142.pem (1708 bytes)
I0202 23:02:19.961613 754752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/pause-20220202230153-591014/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0202 23:02:19.987164 754752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/pause-20220202230153-591014/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0202 23:02:20.009940 754752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/pause-20220202230153-591014/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0202 23:02:20.033246 754752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/pause-20220202230153-591014/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0202 23:02:20.056176 754752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0202 23:02:20.080009 754752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0202 23:02:20.100509 754752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0202 23:02:20.125193 754752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0202 23:02:20.148340 754752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/591014.pem --> /usr/share/ca-certificates/591014.pem (1338 bytes)
I0202 23:02:20.169465 754752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/5910142.pem --> /usr/share/ca-certificates/5910142.pem (1708 bytes)
I0202 23:02:20.191085 754752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0202 23:02:20.211905 754752 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0202 23:02:20.230042 754752 ssh_runner.go:195] Run: openssl version
I0202 23:02:20.236542 754752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/591014.pem && ln -fs /usr/share/ca-certificates/591014.pem /etc/ssl/certs/591014.pem"
I0202 23:02:20.245371 754752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/591014.pem
I0202 23:02:20.249150 754752 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb 2 21:50 /usr/share/ca-certificates/591014.pem
I0202 23:02:20.249207 754752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/591014.pem
I0202 23:02:20.255523 754752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/591014.pem /etc/ssl/certs/51391683.0"
I0202 23:02:20.264329 754752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5910142.pem && ln -fs /usr/share/ca-certificates/5910142.pem /etc/ssl/certs/5910142.pem"
I0202 23:02:20.274168 754752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5910142.pem
I0202 23:02:20.278228 754752 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb 2 21:50 /usr/share/ca-certificates/5910142.pem
I0202 23:02:20.278284 754752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5910142.pem
I0202 23:02:20.283509 754752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5910142.pem /etc/ssl/certs/3ec20f2e.0"
I0202 23:02:20.291704 754752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0202 23:02:20.300231 754752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0202 23:02:20.304400 754752 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 2 21:42 /usr/share/ca-certificates/minikubeCA.pem
I0202 23:02:20.304495 754752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0202 23:02:20.311783 754752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
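The openssl x509 -hash -noout calls above compute the subject-name hash that OpenSSL's CA lookup uses, and each following ln -fs installs the certificate as /etc/ssl/certs/<hash>.0. A small Go equivalent of that pair, shelling out to the same openssl binary (cert path copied from the log; needs root to write /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashLink installs certPath under /etc/ssl/certs/<subject-hash>.0,
// mirroring the openssl/ln pair in the log above.
func hashLink(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // equivalent to ln -fs (force replace)
	return os.Symlink(certPath, link)
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}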
I0202 23:02:20.323592 754752 kubeadm.go:390] StartCluster: {Name:pause-20220202230153-591014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:pause-20220202230153-591014 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
I0202 23:02:20.323706 754752 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0202 23:02:20.323748 754752 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0202 23:02:20.360716 754752 cri.go:87] found id: ""
I0202 23:02:20.360774 754752 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0202 23:02:20.370174 754752 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0202 23:02:20.380327 754752 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0202 23:02:20.380381 754752 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0202 23:02:20.389027 754752 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
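kubeadm.go:151 runs the ls probe above and treats its non-zero exit as proof that no kubeconfigs survive from an earlier run, so stale-config cleanup is skipped and kubeadm init proceeds on a clean slate. The check reduces to the following (a sketch, not minikube's exact code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	args := []string{"-la",
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	// ls exits with status 2 when any of the paths is missing; minikube
	// reads that as "no stale config from an earlier run" and skips cleanup.
	if err := exec.Command("ls", args...).Run(); err != nil {
		fmt.Println("config check failed, skipping stale config cleanup:", err)
		return
	}
	fmt.Println("existing kubeconfigs found; clean them up before kubeadm init")
}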
I0202 23:02:20.389087 754752 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0202 23:02:22.410517 758020 out.go:176] - kubelet.cni-conf-dir=/etc/cni/net.mk
I0202 23:02:22.410619 758020 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0202 23:02:22.410693 758020 ssh_runner.go:195] Run: sudo crictl images --output json
I0202 23:02:22.429737 758020 containerd.go:608] couldn't find preloaded image for "gcr.io/k8s-minikube/storage-provisioner:v5". assuming images are not preloaded.
I0202 23:02:22.429806 758020 ssh_runner.go:195] Run: which lz4
I0202 23:02:22.434158 758020 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0202 23:02:22.438850 758020 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/preloaded.tar.lz4': No such file or directory
I0202 23:02:22.438892 758020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.20.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (582465074 bytes)
I0202 23:02:21.533660 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0202 23:02:21.649194 748668 retry.go:31] will retry after 2.108593507s: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:23.759443 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0202 23:02:23.873390 748668 retry.go:31] will retry after 1.784202082s: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:25.658567 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0202 23:02:23.310006 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:25.809619 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:27.810529 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:25.926032 758020 containerd.go:555] Took 3.491912 seconds to copy over tarball
I0202 23:02:25.926130 758020 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0202 23:02:27.137001 748668 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.478386655s)
I0202 23:02:27.137070 748668 retry.go:31] will retry after 5.171440736s: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:30.310097 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:32.310780 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:33.746834 758020 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (7.82067495s)
I0202 23:02:33.746933 758020 kubeadm.go:892] preload failed, will try to load cached images: extracting tarball:
** stderr **
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/28/fs/var/log/apt: Cannot mknod: File exists
tar: Exiting with failure status due to previous errors
** /stderr **: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: Process exited with status 2
stdout:
stderr:
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/28/fs/var/log/apt: Cannot mknod: File exists
tar: Exiting with failure status due to previous errors
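The mknod failure above is why the preload path is abandoned: a file already present under the containerd overlayfs snapshotter blocks tar from recreating it, so kubeadm.go:892 logs "preload failed, will try to load cached images" and switches to per-image transfer. The control flow, sketched (command literals taken from the log; loadCachedImages is a hypothetical stand-in):

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImages is a hypothetical stand-in for the per-image slow path
// the log switches to below (scp each tarball, then ctr import).
func loadCachedImages() {}

func main() {
	// Fast path: unpack the preloaded image tarball straight into /var.
	out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4").CombinedOutput()
	if err != nil {
		// Any leftover file under the snapshotter (the mknod failure above)
		// aborts extraction, so fall back to loading images one by one.
		fmt.Printf("preload failed, will try to load cached images: %v\n%s", err, out)
		loadCachedImages()
		return
	}
	fmt.Println("preload extracted")
}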
I0202 23:02:33.747004 758020 ssh_runner.go:195] Run: sudo crictl images --output json
I0202 23:02:33.768594 758020 containerd.go:608] couldn't find preloaded image for "gcr.io/k8s-minikube/storage-provisioner:v5". assuming images are not preloaded.
I0202 23:02:33.768627 758020 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.20.0 k8s.gcr.io/kube-controller-manager:v1.20.0 k8s.gcr.io/kube-scheduler:v1.20.0 k8s.gcr.io/kube-proxy:v1.20.0 k8s.gcr.io/pause:3.2 k8s.gcr.io/etcd:3.4.13-0 k8s.gcr.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.3.1 docker.io/kubernetesui/metrics-scraper:v1.0.7]
I0202 23:02:33.768738 758020 image.go:134] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.7
I0202 23:02:33.768792 758020 image.go:134] retrieving image: docker.io/kubernetesui/dashboard:v2.3.1
I0202 23:02:33.768804 758020 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.20.0
I0202 23:02:33.768856 758020 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.20.0
I0202 23:02:33.768903 758020 image.go:134] retrieving image: k8s.gcr.io/coredns:1.7.0
I0202 23:02:33.768766 758020 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0202 23:02:33.768799 758020 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.20.0
I0202 23:02:33.768739 758020 image.go:134] retrieving image: k8s.gcr.io/pause:3.2
I0202 23:02:33.769279 758020 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.20.0
I0202 23:02:33.769463 758020 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.13-0
I0202 23:02:33.770107 758020 image.go:180] daemon lookup for docker.io/kubernetesui/dashboard:v2.3.1: Error response from daemon: reference does not exist
I0202 23:02:33.770114 758020 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.20.0: Error response from daemon: reference does not exist
I0202 23:02:33.770106 758020 image.go:180] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: reference does not exist
I0202 23:02:33.770279 758020 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.7.0: Error response from daemon: reference does not exist
I0202 23:02:33.770298 758020 image.go:180] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.7: Error response from daemon: reference does not exist
I0202 23:02:33.770531 758020 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.20.0: Error response from daemon: reference does not exist
I0202 23:02:33.770740 758020 image.go:180] daemon lookup for k8s.gcr.io/pause:3.2: Error response from daemon: reference does not exist
I0202 23:02:33.770767 758020 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.20.0: Error response from daemon: reference does not exist
I0202 23:02:33.770834 758020 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.4.13-0: Error response from daemon: reference does not exist
I0202 23:02:33.771072 758020 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.20.0: Error response from daemon: reference does not exist
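Each image.go:134 retrieval first asks the local Docker daemon for the tag; the "reference does not exist" responses at image.go:180 mean the daemon has no such image, and minikube then falls back to the registry/cache. With go-containerregistry, the library these messages come from, the fallback looks roughly like this (error handling trimmed):

package main

import (
	"fmt"

	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/daemon"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

// retrieve mirrors the daemon-then-fallback pattern in the image.go lines
// above: ask the local Docker daemon first, fall back to the registry.
func retrieve(image string) (v1.Image, error) {
	ref, err := name.ParseReference(image)
	if err != nil {
		return nil, err
	}
	img, err := daemon.Image(ref)
	if err == nil {
		return img, nil
	}
	fmt.Printf("daemon lookup for %s: %v\n", image, err)
	return remote.Image(ref)
}

func main() {
	if _, err := retrieve("k8s.gcr.io/pause:3.2"); err != nil {
		fmt.Println(err)
	}
}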
I0202 23:02:32.308977 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0202 23:02:32.466081 748668 retry.go:31] will retry after 6.799168904s: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:34.810139 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:36.813347 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:38.411049 754752 out.go:203] - Generating certificates and keys ...
I0202 23:02:38.414882 754752 out.go:203] - Booting up control plane ...
I0202 23:02:38.418356 754752 out.go:203] - Configuring RBAC rules ...
I0202 23:02:38.420730 754752 cni.go:93] Creating CNI manager for ""
I0202 23:02:38.420747 754752 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
I0202 23:02:33.968138 758020 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/dashboard:v2.3.1"
I0202 23:02:34.016783 758020 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.2"
I0202 23:02:34.017723 758020 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.20.0"
I0202 23:02:34.018720 758020 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.20.0"
I0202 23:02:34.018977 758020 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.20.0"
I0202 23:02:34.020258 758020 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.4.13-0"
I0202 23:02:34.020350 758020 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns:1.7.0"
I0202 23:02:34.021164 758020 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.20.0"
I0202 23:02:34.078728 758020 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
I0202 23:02:34.239996 758020 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/metrics-scraper:v1.0.7"
I0202 23:02:35.622527 758020 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/dashboard:v2.3.1": (1.65434136s)
I0202 23:02:35.622664 758020 cache_images.go:116] "docker.io/kubernetesui/dashboard:v2.3.1" needs transfer: "docker.io/kubernetesui/dashboard:v2.3.1" does not exist at hash "e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570" in container runtime
I0202 23:02:35.622745 758020 cri.go:215] Removing image: docker.io/kubernetesui/dashboard:v2.3.1
I0202 23:02:35.622820 758020 ssh_runner.go:195] Run: which crictl
I0202 23:02:35.722852 758020 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.2": (1.706028332s)
I0202 23:02:35.722908 758020 cache_images.go:116] "k8s.gcr.io/pause:3.2" needs transfer: "k8s.gcr.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
I0202 23:02:35.722948 758020 cri.go:215] Removing image: k8s.gcr.io/pause:3.2
I0202 23:02:35.722990 758020 ssh_runner.go:195] Run: which crictl
I0202 23:02:35.921051 758020 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.20.0": (1.902292999s)
I0202 23:02:35.921113 758020 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.20.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
I0202 23:02:35.921143 758020 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.20.0": (1.903326718s)
I0202 23:02:35.921187 758020 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.20.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
I0202 23:02:35.921216 758020 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.20.0": (1.902219553s)
I0202 23:02:35.921224 758020 cri.go:215] Removing image: k8s.gcr.io/kube-apiserver:v1.20.0
I0202 23:02:35.921285 758020 ssh_runner.go:195] Run: which crictl
I0202 23:02:35.921152 758020 cri.go:215] Removing image: k8s.gcr.io/kube-scheduler:v1.20.0
I0202 23:02:35.921243 758020 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.20.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
I0202 23:02:35.921359 758020 ssh_runner.go:195] Run: which crictl
I0202 23:02:35.921363 758020 cri.go:215] Removing image: k8s.gcr.io/kube-proxy:v1.20.0
I0202 23:02:35.921455 758020 ssh_runner.go:195] Run: which crictl
I0202 23:02:36.011531 758020 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.20.0": (1.990327695s)
I0202 23:02:36.011561 758020 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.4.13-0": (1.991270769s)
I0202 23:02:36.011581 758020 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.20.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
I0202 23:02:36.011594 758020 cache_images.go:116] "k8s.gcr.io/etcd:3.4.13-0" needs transfer: "k8s.gcr.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
I0202 23:02:36.011621 758020 cri.go:215] Removing image: k8s.gcr.io/kube-controller-manager:v1.20.0
I0202 23:02:36.011642 758020 cri.go:215] Removing image: k8s.gcr.io/etcd:3.4.13-0
I0202 23:02:36.011685 758020 ssh_runner.go:195] Run: which crictl
I0202 23:02:36.011746 758020 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/metrics-scraper:v1.0.7": (1.771726291s)
I0202 23:02:36.011686 758020 ssh_runner.go:195] Run: which crictl
I0202 23:02:36.011700 758020 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns:1.7.0": (1.991330626s)
I0202 23:02:36.011838 758020 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi docker.io/kubernetesui/dashboard:v2.3.1
I0202 23:02:36.011852 758020 cache_images.go:116] "k8s.gcr.io/coredns:1.7.0" needs transfer: "k8s.gcr.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
I0202 23:02:36.011881 758020 cri.go:215] Removing image: k8s.gcr.io/coredns:1.7.0
I0202 23:02:36.011914 758020 ssh_runner.go:195] Run: which crictl
I0202 23:02:36.011938 758020 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.2
I0202 23:02:36.011723 758020 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5": (1.932966463s)
I0202 23:02:36.011983 758020 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.20.0
I0202 23:02:36.011987 758020 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
I0202 23:02:36.012043 758020 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.20.0
I0202 23:02:36.012066 758020 cri.go:215] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I0202 23:02:36.012092 758020 ssh_runner.go:195] Run: which crictl
I0202 23:02:36.012131 758020 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.20.0
I0202 23:02:36.011796 758020 cache_images.go:116] "docker.io/kubernetesui/metrics-scraper:v1.0.7" needs transfer: "docker.io/kubernetesui/metrics-scraper:v1.0.7" does not exist at hash "7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9" in container runtime
I0202 23:02:36.012179 758020 cri.go:215] Removing image: docker.io/kubernetesui/metrics-scraper:v1.0.7
I0202 23:02:36.012207 758020 ssh_runner.go:195] Run: which crictl
I0202 23:02:36.144827 758020 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.20.0
I0202 23:02:36.144949 758020 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/pause_3.2
I0202 23:02:36.145025 758020 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.2
I0202 23:02:36.145118 758020 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi docker.io/kubernetesui/metrics-scraper:v1.0.7
I0202 23:02:36.145212 758020 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1
I0202 23:02:36.145277 758020 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/dashboard_v2.3.1
I0202 23:02:36.145349 758020 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.4.13-0
I0202 23:02:36.145409 758020 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0
I0202 23:02:36.145453 758020 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I0202 23:02:36.145513 758020 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0
I0202 23:02:36.145544 758020 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns:1.7.0
I0202 23:02:36.145583 758020 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0
I0202 23:02:36.348141 758020 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0
I0202 23:02:36.348151 758020 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5
I0202 23:02:36.348198 758020 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.2: stat -c "%s %y" /var/lib/minikube/images/pause_3.2: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/pause_3.2': No such file or directory
I0202 23:02:36.348215 758020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/pause_3.2 --> /var/lib/minikube/images/pause_3.2 (301056 bytes)
I0202 23:02:36.348273 758020 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
I0202 23:02:36.348326 758020 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7
I0202 23:02:36.348354 758020 ssh_runner.go:352] existence check for /var/lib/minikube/images/dashboard_v2.3.1: stat -c "%s %y" /var/lib/minikube/images/dashboard_v2.3.1: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/dashboard_v2.3.1': No such file or directory
I0202 23:02:36.348380 758020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 --> /var/lib/minikube/images/dashboard_v2.3.1 (66936320 bytes)
I0202 23:02:36.348404 758020 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0
I0202 23:02:36.348384 758020 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/metrics-scraper_v1.0.7
I0202 23:02:36.352093 758020 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0
I0202 23:02:36.355543 758020 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
I0202 23:02:36.355591 758020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
I0202 23:02:36.410104 758020 ssh_runner.go:352] existence check for /var/lib/minikube/images/metrics-scraper_v1.0.7: stat -c "%s %y" /var/lib/minikube/images/metrics-scraper_v1.0.7: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/metrics-scraper_v1.0.7': No such file or directory
I0202 23:02:36.410172 758020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 --> /var/lib/minikube/images/metrics-scraper_v1.0.7 (15031296 bytes)
I0202 23:02:36.434872 758020 containerd.go:292] Loading image: /var/lib/minikube/images/pause_3.2
I0202 23:02:36.434970 758020 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.2
I0202 23:02:36.752664 758020 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/pause_3.2 from cache
I0202 23:02:36.752714 758020 containerd.go:292] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I0202 23:02:36.752765 758020 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
I0202 23:02:37.824126 758020 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5: (1.071327231s)
I0202 23:02:37.824164 758020 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I0202 23:02:37.824195 758020 containerd.go:292] Loading image: /var/lib/minikube/images/metrics-scraper_v1.0.7
I0202 23:02:37.824247 758020 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/metrics-scraper_v1.0.7
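
The sequence above is minikube's cache-load pattern for each image: stat the tarball on the node, scp it over only when the stat fails, then hand it to containerd with "ctr -n=k8s.io images import" so the CRI layer can see it. A minimal standalone sketch of that pattern in Go (hypothetical helper, running locally rather than over SSH as minikube does):

// Sketch of the probe-then-import pattern from the log above.
// Assumes local execution; minikube runs these commands over SSH on the node.
package main

import (
	"fmt"
	"os/exec"
)

func ensureImageLoaded(tarball string) error {
	// Probe: a failing stat (exit status 1) means the tarball is absent
	// and would have to be transferred first (the scp step in the log).
	if err := exec.Command("stat", "-c", "%s %y", tarball).Run(); err != nil {
		return fmt.Errorf("%s not on node yet, transfer required: %w", tarball, err)
	}
	// Import into the k8s.io namespace so the kubelet/CRI can see the image.
	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ctr import failed: %s: %w", out, err)
	}
	return nil
}

func main() {
	if err := ensureImageLoaded("/var/lib/minikube/images/pause_3.2"); err != nil {
		fmt.Println(err)
	}
}
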
I0202 23:02:39.267516 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
W0202 23:02:39.407646 748668 kubeadm.go:722] addon install failed, will retry: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:39.407692 748668 kubeadm.go:604] restartCluster took 44.795986336s
W0202 23:02:39.407932 748668 out.go:241] ! Unable to restart cluster, will reset it: addons: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
stdout:
stderr:
error execution phase addon/coredns: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIPs: Required value
To see the stack trace of this error execute with --v=5 or higher
I0202 23:02:39.407973 748668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0202 23:02:39.316620 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:41.808866 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:38.422967 754752 out.go:176] * Configuring CNI (Container Networking Interface) ...
I0202 23:02:38.423072 754752 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0202 23:02:38.427736 754752 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.2/kubectl ...
I0202 23:02:38.427749 754752 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I0202 23:02:38.448780 754752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0202 23:02:39.408917 754752 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0202 23:02:39.409046 754752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 23:02:39.409050 754752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=e7ecaa98a6d1dab5935ea4b7778c6e187f5bde82 minikube.k8s.io/name=pause-20220202230153-591014 minikube.k8s.io/updated_at=2022_02_02T23_02_39_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0202 23:02:39.432200 754752 ops.go:34] apiserver oom_adj: -16
I0202 23:02:39.516164 754752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 23:02:40.087717 754752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 23:02:40.587470 754752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 23:02:41.087127 754752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 23:02:41.587874 754752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 23:02:42.087915 754752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 23:02:42.588001 754752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 23:02:43.087756 754752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 23:02:38.919385 758020 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/metrics-scraper_v1.0.7: (1.095108675s)
I0202 23:02:38.919423 758020 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 from cache
I0202 23:02:38.919449 758020 containerd.go:292] Loading image: /var/lib/minikube/images/dashboard_v2.3.1
I0202 23:02:38.919497 758020 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/dashboard_v2.3.1
I0202 23:02:43.565564 748668 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (4.157566061s)
I0202 23:02:43.565653 748668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0202 23:02:43.578464 748668 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0202 23:02:43.586689 748668 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0202 23:02:43.586743 748668 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0202 23:02:43.594933 748668 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0202 23:02:43.594995 748668 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0202 23:02:46.040038 748668 out.go:203] - Generating certificates and keys ...
I0202 23:02:43.810202 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:46.309940 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:43.587719 754752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 23:02:44.087133 754752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 23:02:44.587903 754752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 23:02:45.087176 754752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 23:02:45.587297 754752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 23:02:46.087679 754752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 23:02:46.588038 754752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 23:02:47.088105 754752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 23:02:47.587284 754752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 23:02:48.087869 754752 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0202 23:02:46.709412 758020 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/dashboard_v2.3.1: (7.789883234s)
I0202 23:02:46.709447 758020 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 from cache
I0202 23:02:46.709508 758020 cache_images.go:92] LoadImages completed in 12.940859446s
W0202 23:02:46.709647 758020 out.go:241] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0: no such file or directory
I0202 23:02:46.709707 758020 ssh_runner.go:195] Run: sudo crictl info
I0202 23:02:46.729220 758020 cni.go:93] Creating CNI manager for ""
I0202 23:02:46.729254 758020 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
I0202 23:02:46.729272 758020 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0202 23:02:46.729291 758020 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.59.167 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-20220202230105-591014 NodeName:running-upgrade-20220202230105-591014 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.59.167"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.59.167 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0202 23:02:46.729468 758020 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.59.167
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "running-upgrade-20220202230105-591014"
  kubeletExtraArgs:
    node-ip: 192.168.59.167
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.59.167"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0202 23:02:46.729591 758020 kubeadm.go:931] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=running-upgrade-20220202230105-591014 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.59.167 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:running-upgrade-20220202230105-591014 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0202 23:02:46.729660 758020 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0202 23:02:46.738831 758020 binaries.go:44] Found k8s binaries, skipping transfer
I0202 23:02:46.738917 758020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0202 23:02:46.747242 758020 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (584 bytes)
I0202 23:02:46.761977 758020 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0202 23:02:46.815567 758020 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
I0202 23:02:46.830728 758020 ssh_runner.go:195] Run: grep 192.168.59.167 control-plane.minikube.internal$ /etc/hosts
I0202 23:02:46.834655 758020 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/running-upgrade-20220202230105-591014 for IP: 192.168.59.167
I0202 23:02:46.834794 758020 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.key
I0202 23:02:46.834846 758020 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.key
I0202 23:02:46.834938 758020 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/running-upgrade-20220202230105-591014/client.key
I0202 23:02:46.835017 758020 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/running-upgrade-20220202230105-591014/apiserver.key.2ee57db7
I0202 23:02:46.835079 758020 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/running-upgrade-20220202230105-591014/proxy-client.key
I0202 23:02:46.835287 758020 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/591014.pem (1338 bytes)
W0202 23:02:46.835396 758020 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/591014_empty.pem, impossibly tiny 0 bytes
I0202 23:02:46.835417 758020 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca-key.pem (1675 bytes)
I0202 23:02:46.835473 758020 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/ca.pem (1078 bytes)
I0202 23:02:46.835514 758020 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/cert.pem (1123 bytes)
I0202 23:02:46.835552 758020 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/key.pem (1675 bytes)
I0202 23:02:46.835628 758020 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/5910142.pem (1708 bytes)
I0202 23:02:46.836937 758020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/running-upgrade-20220202230105-591014/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0202 23:02:46.861193 758020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/running-upgrade-20220202230105-591014/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0202 23:02:46.925501 758020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/running-upgrade-20220202230105-591014/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0202 23:02:46.954304 758020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/running-upgrade-20220202230105-591014/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0202 23:02:47.014593 758020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0202 23:02:47.037114 758020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0202 23:02:47.056521 758020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0202 23:02:47.125534 758020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0202 23:02:47.154099 758020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/certs/591014.pem --> /usr/share/ca-certificates/591014.pem (1338 bytes)
I0202 23:02:47.208861 758020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/files/etc/ssl/certs/5910142.pem --> /usr/share/ca-certificates/5910142.pem (1708 bytes)
I0202 23:02:47.232027 758020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0202 23:02:47.252547 758020 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0202 23:02:47.268270 758020 ssh_runner.go:195] Run: openssl version
I0202 23:02:47.307626 758020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0202 23:02:47.317774 758020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0202 23:02:47.321865 758020 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 2 21:42 /usr/share/ca-certificates/minikubeCA.pem
I0202 23:02:47.321939 758020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0202 23:02:47.328914 758020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0202 23:02:47.337826 758020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/591014.pem && ln -fs /usr/share/ca-certificates/591014.pem /etc/ssl/certs/591014.pem"
I0202 23:02:47.347634 758020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/591014.pem
I0202 23:02:47.352078 758020 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Feb 2 21:50 /usr/share/ca-certificates/591014.pem
I0202 23:02:47.352142 758020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/591014.pem
I0202 23:02:47.357864 758020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/591014.pem /etc/ssl/certs/51391683.0"
I0202 23:02:47.404276 758020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5910142.pem && ln -fs /usr/share/ca-certificates/5910142.pem /etc/ssl/certs/5910142.pem"
I0202 23:02:47.414903 758020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5910142.pem
I0202 23:02:47.419017 758020 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Feb 2 21:50 /usr/share/ca-certificates/5910142.pem
I0202 23:02:47.419078 758020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5910142.pem
I0202 23:02:47.424682 758020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5910142.pem /etc/ssl/certs/3ec20f2e.0"
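
The test-and-link runs just above all follow the OpenSSL hashed-directory convention: "openssl x509 -hash -noout" prints the certificate's subject-name hash, and a symlink named <hash>.0 in /etc/ssl/certs makes the CA discoverable to OpenSSL-based clients. A rough Go equivalent of one such step (a sketch only; the helper name and the linked path are illustrative, and minikube performs this over SSH with ln -fs):

// Sketch: install a CA under /etc/ssl/certs/<subject-hash>.0, mirroring
// the openssl -hash plus ln -fs steps in the log. Paths are illustrative
// and writing to /etc/ssl/certs requires root.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCertByHash(pem string) error {
	// openssl prints the subject-name hash used for c_rehash-style lookups.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	// ln -fs equivalent: drop any stale link, then point it at the cert.
	_ = os.Remove(link)
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
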
I0202 23:02:47.434239 758020 kubeadm.go:390] StartCluster: {Name:running-upgrade-20220202230105-591014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:running-upgrade-20220202230105-591014 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.167 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false}
I0202 23:02:47.434353 758020 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0202 23:02:47.434422 758020 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0202 23:02:47.460220 758020 cri.go:87] found id: "72c453a4004b0c1e0e18ffb4f02b7ae8af486e8769a79f6c77e1e260606d4317"
I0202 23:02:47.460263 758020 cri.go:87] found id: "d6578bb1af331bd6cce6b0b75cd9104f91440d4df6139afc34531722482dae37"
I0202 23:02:47.460273 758020 cri.go:87] found id: "d39a729f4ce9eaf6fc3c394778479a1c95a5766d077b4167f977bb022aa440c6"
I0202 23:02:47.460280 758020 cri.go:87] found id: "08d501ac537cfaee66117ecd2db7b60cbacc38eda3b0efb9a7cdddc981be3a2e"
I0202 23:02:47.460286 758020 cri.go:87] found id: "13d9646071a20996c0aac1283bb20303af04632e6b7168d69ce671c3fde4abbd"
I0202 23:02:47.460294 758020 cri.go:87] found id: "5976bfb385e65fc47e4a4e19ade29dfa1e15f30bde157781e3f06b0df7e407d9"
I0202 23:02:47.460301 758020 cri.go:87] found id: ""
I0202 23:02:47.460345 758020 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I0202 23:02:47.537487 758020 cri.go:114] JSON = [{"ociVersion":"1.0.2-dev","id":"08d501ac537cfaee66117ecd2db7b60cbacc38eda3b0efb9a7cdddc981be3a2e","pid":1345,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/08d501ac537cfaee66117ecd2db7b60cbacc38eda3b0efb9a7cdddc981be3a2e","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/08d501ac537cfaee66117ecd2db7b60cbacc38eda3b0efb9a7cdddc981be3a2e/rootfs","created":"2022-02-02T23:01:55.712751182Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a6619ddfa7719cfb08985f410c6849b19707d418526961884fb75a03362e8daa"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"13d9646071a20996c0aac1283bb20303af04632e6b7168d69ce671c3fde4abbd","pid":1316,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/13d9646071a20996c0aac1283bb20303af04632e6b7168d69ce671c3fde4abbd","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/13d9646071a20996c0aac1283bb20303af04632e6b7168d69ce671c3fde4abbd/rootfs","created":"2022-02-02T23:01:55.648603001Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"dd3c07fe4d3a726a108148addd608169ed9e8662e1888a5896949fbd9fa5de38"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5183416c507382f9cbf6b06909f4c84e4f62cc6a61c37c73507a9d773d573789","pid":1169,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/5183416c507382f9cbf6b06909f4c84e4f62cc6a61c37c73507a9d773d573789","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/5183416c507382f9cbf6b06909f4c84e4f62cc6a61c37c73507a9d773d573789/rootfs","created":"2022-02-02T23:01:55.346746732Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"5183416c507382f9cbf6b06909f4c84e4f62cc6a61c37c73507a9d773d573789","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-running-upgrade-20220202230105-591014_3c0bb2105f4d90982d48b59e5b239908"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5976bfb385e65fc47e4a4e19ade29dfa1e15f30bde157781e3f06b0df7e407d9","pid":1308,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/5976bfb385e65fc47e4a4e19ade29dfa1e15f30bde157781e3f06b0df7e407d9","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/5976bfb385e65fc47e4a4e19ade29dfa1e15f30bde157781e3f06b0df7e407d9/rootfs","created":"2022-02-02T23:01:55.630583579Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"5183416c507382f9cbf6b06909f4c84e4f62cc6a61c37c73507a9d773d573789"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6e64d2b93a3c436abcaed476dd7a7435df8f814eb3b8011637685eaace7e0925","pid":2174,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6e64d2b93a3c436abcaed476dd7a7435df8f814eb3b8011637685eaace7e0925","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6e64d2b93a3c436abcaed476dd7a7435df8f814eb3b8011637685eaace7e0925/rootfs","created":"2022-02-02T23:02:22.713499846Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"6e64d2b93a3c436abcaed476dd7a7435df8f814eb3b8011637685eaace7e0925","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-p28b8_0e18f59c-3463-4418-8cb4-00827040854f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"72c453a4004b0c1e0e18ffb4f02b7ae8af486e8769a79f6c77e1e260606d4317","pid":2273,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/72c453a4004b0c1e0e18ffb4f02b7ae8af486e8769a79f6c77e1e260606d4317","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/72c453a4004b0c1e0e18ffb4f02b7ae8af486e8769a79f6c77e1e260606d4317/rootfs","created":"2022-02-02T23:02:23.406653809Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"6e64d2b93a3c436abcaed476dd7a7435df8f814eb3b8011637685eaace7e0925"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"843f6971e9401d67a863359d65bed7144117efd2e7cc9495dc7eac5f11007165","pid":1250,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/843f6971e9401d67a863359d65bed7144117efd2e7cc9495dc7eac5f11007165","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/843f6971e9401d67a863359d65bed7144117efd2e7cc9495dc7eac5f11007165/rootfs","created":"2022-02-02T23:01:55.525643783Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"843f6971e9401d67a863359d65bed7144117efd2e7cc9495dc7eac5f11007165","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-running-upgrade-20220202230105-591014_a3e7be694ef7cf952503c5d331abc0ac"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8aef9c8bd3353a06e1ce6217774115af63c4192e5033c669e7fa67f902e43cc9","pid":2184,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8aef9c8bd3353a06e1ce6217774115af63c4192e5033c669e7fa67f902e43cc9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8aef9c8bd3353a06e1ce6217774115af63c4192e5033c669e7fa67f902e43cc9/rootfs","created":"2022-02-02T23:02:22.638144269Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"8aef9c8bd3353a06e1ce6217774115af63c4192e5033c669e7fa67f902e43cc9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-7wbhz_4bdd6de5-e6ce-4b87-970b-b5892ea3631d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a6619ddfa7719cfb08985f410c6849b19707d418526961884fb75a03362e8daa","pid":1222,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/a6619ddfa7719cfb08985f410c6849b19707d418526961884fb75a03362e8daa","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/a6619ddfa7719cfb08985f410c6849b19707d418526961884fb75a03362e8daa/rootfs","created":"2022-02-02T23:01:55.417859982Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a6619ddfa7719cfb08985f410c6849b19707d418526961884fb75a03362e8daa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-running-upgrade-20220202230105-591014_ae350b83bf2b9a0530c0e72155119f7b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d39a729f4ce9eaf6fc3c394778479a1c95a5766d077b4167f977bb022aa440c6","pid":1412,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/d39a729f4ce9eaf6fc3c394778479a1c95a5766d077b4167f977bb022aa440c6","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/d39a729f4ce9eaf6fc3c394778479a1c95a5766d077b4167f977bb022aa440c6/rootfs","created":"2022-02-02T23:01:55.822194008Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"843f6971e9401d67a863359d65bed7144117efd2e7cc9495dc7eac5f11007165"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d6578bb1af331bd6cce6b0b75cd9104f91440d4df6139afc34531722482dae37","pid":2226,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6578bb1af331bd6cce6b0b75cd9104f91440d4df6139afc34531722482dae37","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6578bb1af331bd6cce6b0b75cd9104f91440d4df6139afc34531722482dae37/rootfs","created":"2022-02-02T23:02:23.026034072Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"8aef9c8bd3353a06e1ce6217774115af63c4192e5033c669e7fa67f902e43cc9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dd3c07fe4d3a726a108148addd608169ed9e8662e1888a5896949fbd9fa5de38","pid":1170,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/dd3c07fe4d3a726a108148addd608169ed9e8662e1888a5896949fbd9fa5de38","rootfs":"/run/containerd/io.containerd.runtime.v1.linux/k8s.io/dd3c07fe4d3a726a108148addd608169ed9e8662e1888a5896949fbd9fa5de38/rootfs","created":"2022-02-02T23:01:55.344181562Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"dd3c07fe4d3a726a108148addd608169ed9e8662e1888a5896949fbd9fa5de38","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-running-upgrade-20220202230105-591014_3478da2c440ba32fb6c087b3f3b99813"},"owner":"root"}]
I0202 23:02:47.537696 758020 cri.go:124] list returned 12 containers
I0202 23:02:47.537714 758020 cri.go:127] container: {ID:08d501ac537cfaee66117ecd2db7b60cbacc38eda3b0efb9a7cdddc981be3a2e Status:running}
I0202 23:02:47.537730 758020 cri.go:133] skipping {08d501ac537cfaee66117ecd2db7b60cbacc38eda3b0efb9a7cdddc981be3a2e running}: state = "running", want "paused"
I0202 23:02:47.537746 758020 cri.go:127] container: {ID:13d9646071a20996c0aac1283bb20303af04632e6b7168d69ce671c3fde4abbd Status:running}
I0202 23:02:47.537757 758020 cri.go:133] skipping {13d9646071a20996c0aac1283bb20303af04632e6b7168d69ce671c3fde4abbd running}: state = "running", want "paused"
I0202 23:02:47.537763 758020 cri.go:127] container: {ID:5183416c507382f9cbf6b06909f4c84e4f62cc6a61c37c73507a9d773d573789 Status:running}
I0202 23:02:47.537774 758020 cri.go:129] skipping 5183416c507382f9cbf6b06909f4c84e4f62cc6a61c37c73507a9d773d573789 - not in ps
I0202 23:02:47.537780 758020 cri.go:127] container: {ID:5976bfb385e65fc47e4a4e19ade29dfa1e15f30bde157781e3f06b0df7e407d9 Status:running}
I0202 23:02:47.537816 758020 cri.go:133] skipping {5976bfb385e65fc47e4a4e19ade29dfa1e15f30bde157781e3f06b0df7e407d9 running}: state = "running", want "paused"
I0202 23:02:47.537831 758020 cri.go:127] container: {ID:6e64d2b93a3c436abcaed476dd7a7435df8f814eb3b8011637685eaace7e0925 Status:running}
I0202 23:02:47.537837 758020 cri.go:129] skipping 6e64d2b93a3c436abcaed476dd7a7435df8f814eb3b8011637685eaace7e0925 - not in ps
I0202 23:02:47.537843 758020 cri.go:127] container: {ID:72c453a4004b0c1e0e18ffb4f02b7ae8af486e8769a79f6c77e1e260606d4317 Status:running}
I0202 23:02:47.537850 758020 cri.go:133] skipping {72c453a4004b0c1e0e18ffb4f02b7ae8af486e8769a79f6c77e1e260606d4317 running}: state = "running", want "paused"
I0202 23:02:47.537856 758020 cri.go:127] container: {ID:843f6971e9401d67a863359d65bed7144117efd2e7cc9495dc7eac5f11007165 Status:running}
I0202 23:02:47.537869 758020 cri.go:129] skipping 843f6971e9401d67a863359d65bed7144117efd2e7cc9495dc7eac5f11007165 - not in ps
I0202 23:02:47.537873 758020 cri.go:127] container: {ID:8aef9c8bd3353a06e1ce6217774115af63c4192e5033c669e7fa67f902e43cc9 Status:running}
I0202 23:02:47.537879 758020 cri.go:129] skipping 8aef9c8bd3353a06e1ce6217774115af63c4192e5033c669e7fa67f902e43cc9 - not in ps
I0202 23:02:47.537890 758020 cri.go:127] container: {ID:a6619ddfa7719cfb08985f410c6849b19707d418526961884fb75a03362e8daa Status:running}
I0202 23:02:47.537895 758020 cri.go:129] skipping a6619ddfa7719cfb08985f410c6849b19707d418526961884fb75a03362e8daa - not in ps
I0202 23:02:47.537907 758020 cri.go:127] container: {ID:d39a729f4ce9eaf6fc3c394778479a1c95a5766d077b4167f977bb022aa440c6 Status:running}
I0202 23:02:47.537913 758020 cri.go:133] skipping {d39a729f4ce9eaf6fc3c394778479a1c95a5766d077b4167f977bb022aa440c6 running}: state = "running", want "paused"
I0202 23:02:47.537919 758020 cri.go:127] container: {ID:d6578bb1af331bd6cce6b0b75cd9104f91440d4df6139afc34531722482dae37 Status:running}
I0202 23:02:47.537925 758020 cri.go:133] skipping {d6578bb1af331bd6cce6b0b75cd9104f91440d4df6139afc34531722482dae37 running}: state = "running", want "paused"
I0202 23:02:47.537931 758020 cri.go:127] container: {ID:dd3c07fe4d3a726a108148addd608169ed9e8662e1888a5896949fbd9fa5de38 Status:running}
I0202 23:02:47.537937 758020 cri.go:129] skipping dd3c07fe4d3a726a108148addd608169ed9e8662e1888a5896949fbd9fa5de38 - not in ps
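
The skipping decisions above come from parsing the "runc ... list -f json" output: every entry carries an id and a status, sandbox IDs that never appeared in the earlier crictl listing are dropped ("not in ps"), and the rest are kept only when their state matches the requested one, here "paused". A reduced sketch of that filter (type and function names are assumptions; the JSON field names are taken from the listing above):

// Sketch of the runc-list filter: keep only containers whose runtime
// state matches the wanted one, as the cri.go lines above do.
package main

import (
	"encoding/json"
	"fmt"
)

type runcEntry struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func filterByState(raw []byte, want string) ([]string, error) {
	var entries []runcEntry
	if err := json.Unmarshal(raw, &entries); err != nil {
		return nil, err
	}
	var ids []string
	for _, e := range entries {
		if e.Status != want {
			// e.g. skipping {... running}: state = "running", want "paused"
			continue
		}
		ids = append(ids, e.ID)
	}
	return ids, nil
}

func main() {
	raw := []byte(`[{"id":"abc","status":"running"},{"id":"def","status":"paused"}]`)
	ids, _ := filterByState(raw, "paused")
	fmt.Println(ids) // [def]
}
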
I0202 23:02:47.537985 758020 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0202 23:02:47.547197 758020 kubeadm.go:401] found existing configuration files, will attempt cluster restart
I0202 23:02:47.547222 758020 kubeadm.go:600] restartCluster start
I0202 23:02:47.547261 758020 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0202 23:02:47.555498 758020 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0202 23:02:47.556492 758020 kubeconfig.go:116] verify returned: extract IP: "running-upgrade-20220202230105-591014" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig
I0202 23:02:47.556882 758020 kubeconfig.go:127] "running-upgrade-20220202230105-591014" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig - will repair!
I0202 23:02:47.557525 758020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/kubeconfig: {Name:mk517363bda8f9dbd36a7a8d18db65eef4735455 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0202 23:02:47.559072 758020 kapi.go:59] client config for running-upgrade-20220202230105-591014: &rest.Config{Host:"https://192.168.59.167:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/running-upgrade-20220202230105-591014/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/profiles/running-upgrade-20220202230105-591014/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13251-587669-cce8d1911280cbcb62c9a9805b43d62c56136aef/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x15dae40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0202 23:02:47.561451 758020 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0202 23:02:47.619068 758020 kubeadm.go:568] needs reconfigure: configs differ:
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml 2022-02-02 23:01:48.965628627 +0000
+++ /var/tmp/minikube/kubeadm.yaml.new 2022-02-02 23:02:46.825862623 +0000
@@ -65,4 +65,10 @@
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
-metricsBindAddress: 192.168.59.167:10249
+metricsBindAddress: 0.0.0.0:10249
+conntrack:
+ maxPerCore: 0
+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
+ tcpEstablishedTimeout: 0s
+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
+ tcpCloseWaitTimeout: 0s
-- /stdout --
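
The needs-reconfigure verdict above hinges on diff's exit status: "diff -u old new" exits 0 when the rendered kubeadm config is unchanged and 1 when it differs, and only the nonzero case falls through to the container-stop and reset path that follows. A hedged Go sketch of that check (function name assumed; exit-code handling per GNU diff's documented convention):

// Sketch: decide "needs reconfigure" from diff's exit status, as the
// kubeadm.go lines above do (0 = same, 1 = differs, >1 = real error).
package main

import (
	"fmt"
	"os/exec"
)

func needsReconfigure(oldPath, newPath string) (bool, error) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, nil // identical configs; restart can proceed as-is
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		fmt.Printf("configs differ:\n%s", out) // mirrors the logged diff
		return true, nil
	}
	return false, err // exit code > 1: diff itself failed
}

func main() {
	differs, err := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(differs, err)
}
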
I0202 23:02:47.619101 758020 kubeadm.go:1054] stopping kube-system containers ...
I0202 23:02:47.619118 758020 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0202 23:02:47.619181 758020 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0202 23:02:47.643794 758020 cri.go:87] found id: "72c453a4004b0c1e0e18ffb4f02b7ae8af486e8769a79f6c77e1e260606d4317"
I0202 23:02:47.643826 758020 cri.go:87] found id: "d6578bb1af331bd6cce6b0b75cd9104f91440d4df6139afc34531722482dae37"
I0202 23:02:47.643835 758020 cri.go:87] found id: "d39a729f4ce9eaf6fc3c394778479a1c95a5766d077b4167f977bb022aa440c6"
I0202 23:02:47.643842 758020 cri.go:87] found id: "08d501ac537cfaee66117ecd2db7b60cbacc38eda3b0efb9a7cdddc981be3a2e"
I0202 23:02:47.643848 758020 cri.go:87] found id: "13d9646071a20996c0aac1283bb20303af04632e6b7168d69ce671c3fde4abbd"
I0202 23:02:47.643857 758020 cri.go:87] found id: "5976bfb385e65fc47e4a4e19ade29dfa1e15f30bde157781e3f06b0df7e407d9"
I0202 23:02:47.643863 758020 cri.go:87] found id: ""
I0202 23:02:47.643870 758020 cri.go:231] Stopping containers: [72c453a4004b0c1e0e18ffb4f02b7ae8af486e8769a79f6c77e1e260606d4317 d6578bb1af331bd6cce6b0b75cd9104f91440d4df6139afc34531722482dae37 d39a729f4ce9eaf6fc3c394778479a1c95a5766d077b4167f977bb022aa440c6 08d501ac537cfaee66117ecd2db7b60cbacc38eda3b0efb9a7cdddc981be3a2e 13d9646071a20996c0aac1283bb20303af04632e6b7168d69ce671c3fde4abbd 5976bfb385e65fc47e4a4e19ade29dfa1e15f30bde157781e3f06b0df7e407d9]
I0202 23:02:47.643936 758020 ssh_runner.go:195] Run: which crictl
I0202 23:02:47.647834 758020 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 72c453a4004b0c1e0e18ffb4f02b7ae8af486e8769a79f6c77e1e260606d4317 d6578bb1af331bd6cce6b0b75cd9104f91440d4df6139afc34531722482dae37 d39a729f4ce9eaf6fc3c394778479a1c95a5766d077b4167f977bb022aa440c6 08d501ac537cfaee66117ecd2db7b60cbacc38eda3b0efb9a7cdddc981be3a2e 13d9646071a20996c0aac1283bb20303af04632e6b7168d69ce671c3fde4abbd 5976bfb385e65fc47e4a4e19ade29dfa1e15f30bde157781e3f06b0df7e407d9
I0202 23:02:48.808980 708547 node_ready.go:58] node "offline-containerd-20220202225402-591014" has status "Ready":"False"
I0202 23:02:49.812870 708547 node_ready.go:38] duration metric: took 4m0.01198768s waiting for node "offline-containerd-20220202225402-591014" to be "Ready" ...
I0202 23:02:49.816319 708547 out.go:176]
W0202 23:02:49.816567 708547 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
W0202 23:02:49.816584 708547 out.go:241] *
W0202 23:02:49.817580 708547 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
6f4dbdec5bb7d 6de166512aa22 About a minute ago Running kindnet-cni 1 f47cceddf72a0
9fd761cf09544 d922ca3da64b3 3 minutes ago Running kube-proxy 0 990e4f0d333fe
ed251c3b57729 6de166512aa22 3 minutes ago Exited kindnet-cni 0 f47cceddf72a0
2c0abeaf6dcea 4783639ba7e03 4 minutes ago Running kube-controller-manager 0 49b677fc29f90
e0e28a3753369 8a0228dd6a683 4 minutes ago Running kube-apiserver 0 e217f81197972
0f277f8150b5d 25f8c7f3da61c 4 minutes ago Running etcd 0 1a8b2d2143095
317793a76681b 6114d758d6d16 4 minutes ago Running kube-scheduler 0 ec185acf89782
*
* ==> containerd <==
* -- Logs begin at Wed 2022-02-02 22:58:08 UTC, end at Wed 2022-02-02 23:02:51 UTC. --
Feb 02 22:58:48 offline-containerd-20220202225402-591014 containerd[510]: time="2022-02-02T22:58:48.707751342Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-cmk6t,Uid:dc0875a4-4f07-4418-bbef-20fb13d92973,Namespace:kube-system,Attempt:0,}"
Feb 02 22:58:48 offline-containerd-20220202225402-591014 containerd[510]: time="2022-02-02T22:58:48.707754516Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-jp747,Uid:b6e360b7-3edd-4e60-9e49-0e360da6440d,Namespace:kube-system,Attempt:0,}"
Feb 02 22:58:48 offline-containerd-20220202225402-591014 containerd[510]: time="2022-02-02T22:58:48.740210062Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f47cceddf72a0a0e3178d1e9ed0105db029310e2c11883f354c5c19426b9fdfd pid=1882
Feb 02 22:58:48 offline-containerd-20220202225402-591014 containerd[510]: time="2022-02-02T22:58:48.742285822Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/990e4f0d333fe2de412092fc8eecd7f620a6299ec41ff9349c16ac063083b8f7 pid=1892
Feb 02 22:58:48 offline-containerd-20220202225402-591014 containerd[510]: time="2022-02-02T22:58:48.814854108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jp747,Uid:b6e360b7-3edd-4e60-9e49-0e360da6440d,Namespace:kube-system,Attempt:0,} returns sandbox id \"990e4f0d333fe2de412092fc8eecd7f620a6299ec41ff9349c16ac063083b8f7\""
Feb 02 22:58:48 offline-containerd-20220202225402-591014 containerd[510]: time="2022-02-02T22:58:48.817873459Z" level=info msg="CreateContainer within sandbox \"990e4f0d333fe2de412092fc8eecd7f620a6299ec41ff9349c16ac063083b8f7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 02 22:58:49 offline-containerd-20220202225402-591014 containerd[510]: time="2022-02-02T22:58:49.105304641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-cmk6t,Uid:dc0875a4-4f07-4418-bbef-20fb13d92973,Namespace:kube-system,Attempt:0,} returns sandbox id \"f47cceddf72a0a0e3178d1e9ed0105db029310e2c11883f354c5c19426b9fdfd\""
Feb 02 22:58:49 offline-containerd-20220202225402-591014 containerd[510]: time="2022-02-02T22:58:49.107998302Z" level=info msg="CreateContainer within sandbox \"f47cceddf72a0a0e3178d1e9ed0105db029310e2c11883f354c5c19426b9fdfd\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
Feb 02 22:58:54 offline-containerd-20220202225402-591014 containerd[510]: time="2022-02-02T22:58:54.490979803Z" level=info msg="CreateContainer within sandbox \"f47cceddf72a0a0e3178d1e9ed0105db029310e2c11883f354c5c19426b9fdfd\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"ed251c3b5772996e4ce9060d71b240c7a57c1d847dd41a2d4e27a5f6277fe65c\""
Feb 02 22:58:54 offline-containerd-20220202225402-591014 containerd[510]: time="2022-02-02T22:58:54.491858584Z" level=info msg="StartContainer for \"ed251c3b5772996e4ce9060d71b240c7a57c1d847dd41a2d4e27a5f6277fe65c\""
Feb 02 22:58:54 offline-containerd-20220202225402-591014 containerd[510]: time="2022-02-02T22:58:54.497174459Z" level=info msg="CreateContainer within sandbox \"990e4f0d333fe2de412092fc8eecd7f620a6299ec41ff9349c16ac063083b8f7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9fd761cf0954478fa805325c66b97f2c906b3cfb9df510895e607c58fc7721cd\""
Feb 02 22:58:54 offline-containerd-20220202225402-591014 containerd[510]: time="2022-02-02T22:58:54.497793547Z" level=info msg="StartContainer for \"9fd761cf0954478fa805325c66b97f2c906b3cfb9df510895e607c58fc7721cd\""
Feb 02 22:58:54 offline-containerd-20220202225402-591014 containerd[510]: time="2022-02-02T22:58:54.633199674Z" level=info msg="StartContainer for \"9fd761cf0954478fa805325c66b97f2c906b3cfb9df510895e607c58fc7721cd\" returns successfully"
Feb 02 22:58:54 offline-containerd-20220202225402-591014 containerd[510]: time="2022-02-02T22:58:54.707452857Z" level=info msg="StartContainer for \"ed251c3b5772996e4ce9060d71b240c7a57c1d847dd41a2d4e27a5f6277fe65c\" returns successfully"
Feb 02 23:01:34 offline-containerd-20220202225402-591014 containerd[510]: time="2022-02-02T23:01:34.932578949Z" level=info msg="Finish piping stderr of container \"ed251c3b5772996e4ce9060d71b240c7a57c1d847dd41a2d4e27a5f6277fe65c\""
Feb 02 23:01:34 offline-containerd-20220202225402-591014 containerd[510]: time="2022-02-02T23:01:34.932687496Z" level=info msg="Finish piping stdout of container \"ed251c3b5772996e4ce9060d71b240c7a57c1d847dd41a2d4e27a5f6277fe65c\""
Feb 02 23:01:34 offline-containerd-20220202225402-591014 containerd[510]: time="2022-02-02T23:01:34.934355461Z" level=info msg="TaskExit event &TaskExit{ContainerID:ed251c3b5772996e4ce9060d71b240c7a57c1d847dd41a2d4e27a5f6277fe65c,ID:ed251c3b5772996e4ce9060d71b240c7a57c1d847dd41a2d4e27a5f6277fe65c,Pid:2068,ExitStatus:2,ExitedAt:2022-02-02 23:01:34.933991062 +0000 UTC,XXX_unrecognized:[],}"
Feb 02 23:01:35 offline-containerd-20220202225402-591014 containerd[510]: time="2022-02-02T23:01:35.539709654Z" level=error msg="collecting metrics for ed251c3b5772996e4ce9060d71b240c7a57c1d847dd41a2d4e27a5f6277fe65c" error="cgroups: cgroup deleted: unknown"
Feb 02 23:01:44 offline-containerd-20220202225402-591014 containerd[510]: time="2022-02-02T23:01:44.934971227Z" level=error msg="Failed to handle exit event &TaskExit{ContainerID:ed251c3b5772996e4ce9060d71b240c7a57c1d847dd41a2d4e27a5f6277fe65c,ID:ed251c3b5772996e4ce9060d71b240c7a57c1d847dd41a2d4e27a5f6277fe65c,Pid:2068,ExitStatus:2,ExitedAt:2022-02-02 23:01:34.933991062 +0000 UTC,XXX_unrecognized:[],} for ed251c3b5772996e4ce9060d71b240c7a57c1d847dd41a2d4e27a5f6277fe65c" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded: unknown"
Feb 02 23:01:45 offline-containerd-20220202225402-591014 containerd[510]: time="2022-02-02T23:01:45.555290243Z" level=error msg="collecting metrics for ed251c3b5772996e4ce9060d71b240c7a57c1d847dd41a2d4e27a5f6277fe65c" error="cgroups: cgroup deleted: unknown"
Feb 02 23:01:46 offline-containerd-20220202225402-591014 containerd[510]: time="2022-02-02T23:01:46.413121149Z" level=info msg="TaskExit event &TaskExit{ContainerID:ed251c3b5772996e4ce9060d71b240c7a57c1d847dd41a2d4e27a5f6277fe65c,ID:ed251c3b5772996e4ce9060d71b240c7a57c1d847dd41a2d4e27a5f6277fe65c,Pid:2068,ExitStatus:2,ExitedAt:2022-02-02 23:01:34.933991062 +0000 UTC,XXX_unrecognized:[],}"
Feb 02 23:01:47 offline-containerd-20220202225402-591014 containerd[510]: time="2022-02-02T23:01:47.322609258Z" level=info msg="CreateContainer within sandbox \"f47cceddf72a0a0e3178d1e9ed0105db029310e2c11883f354c5c19426b9fdfd\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
Feb 02 23:01:47 offline-containerd-20220202225402-591014 containerd[510]: time="2022-02-02T23:01:47.463877000Z" level=info msg="CreateContainer within sandbox \"f47cceddf72a0a0e3178d1e9ed0105db029310e2c11883f354c5c19426b9fdfd\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"6f4dbdec5bb7de146ef06cca75f0467bfbd6b3f00ce09bb2aa82efb89abffa8a\""
Feb 02 23:01:47 offline-containerd-20220202225402-591014 containerd[510]: time="2022-02-02T23:01:47.464610181Z" level=info msg="StartContainer for \"6f4dbdec5bb7de146ef06cca75f0467bfbd6b3f00ce09bb2aa82efb89abffa8a\""
Feb 02 23:01:47 offline-containerd-20220202225402-591014 containerd[510]: time="2022-02-02T23:01:47.644747643Z" level=info msg="StartContainer for \"6f4dbdec5bb7de146ef06cca75f0467bfbd6b3f00ce09bb2aa82efb89abffa8a\" returns successfully"
*
* ==> describe nodes <==
* Name: offline-containerd-20220202225402-591014
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=offline-containerd-20220202225402-591014
kubernetes.io/os=linux
minikube.k8s.io/commit=e7ecaa98a6d1dab5935ea4b7778c6e187f5bde82
minikube.k8s.io/name=offline-containerd-20220202225402-591014
minikube.k8s.io/updated_at=2022_02_02T22_58_40_0700
minikube.k8s.io/version=v1.25.1
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 02 Feb 2022 22:58:31 +0000
Taints: node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: offline-containerd-20220202225402-591014
AcquireTime: <unset>
RenewTime: Wed, 02 Feb 2022 23:02:49 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 02 Feb 2022 22:58:35 +0000 Wed, 02 Feb 2022 22:58:28 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 02 Feb 2022 22:58:35 +0000 Wed, 02 Feb 2022 22:58:28 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 02 Feb 2022 22:58:35 +0000 Wed, 02 Feb 2022 22:58:28 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Wed, 02 Feb 2022 22:58:35 +0000 Wed, 02 Feb 2022 22:58:28 +0000 KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Addresses:
InternalIP: 192.168.67.2
Hostname: offline-containerd-20220202225402-591014
Capacity:
cpu: 8
ephemeral-storage: 304695084Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32874648Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304695084Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32874648Ki
pods: 110
System Info:
Machine ID: 8de776e053e140d6a14c2d2def3d6bb8
System UUID: 5071923c-3918-4df5-b260-1ecde9665314
Boot ID: e0fbe7c1-39b3-46a3-b281-95db0294991c
Kernel Version: 5.11.0-1029-gcp
OS Image: Ubuntu 20.04.2 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.4.12
Kubelet Version: v1.23.2
Kube-Proxy Version: v1.23.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system etcd-offline-containerd-20220202225402-591014 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 4m19s
kube-system kindnet-cmk6t 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 4m3s
kube-system kube-apiserver-offline-containerd-20220202225402-591014 250m (3%) 0 (0%) 0 (0%) 0 (0%) 4m16s
kube-system kube-controller-manager-offline-containerd-20220202225402-591014 200m (2%) 0 (0%) 0 (0%) 0 (0%) 4m16s
kube-system kube-proxy-jp747 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m3s
kube-system kube-scheduler-offline-containerd-20220202225402-591014 100m (1%) 0 (0%) 0 (0%) 0 (0%) 4m16s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (9%) 100m (1%)
memory 150Mi (0%) 50Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 3m56s kube-proxy
Normal NodeHasSufficientMemory 4m25s (x5 over 4m25s) kubelet Node offline-containerd-20220202225402-591014 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m25s (x5 over 4m25s) kubelet Node offline-containerd-20220202225402-591014 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m25s (x4 over 4m25s) kubelet Node offline-containerd-20220202225402-591014 status is now: NodeHasSufficientPID
Normal Starting 4m17s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 4m17s kubelet Node offline-containerd-20220202225402-591014 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m17s kubelet Node offline-containerd-20220202225402-591014 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m17s kubelet Node offline-containerd-20220202225402-591014 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m16s kubelet Updated Node Allocatable limit across pods
*
* ==> dmesg <==
* [ +5.003639] IPv4: martian source 10.244.0.179 from 10.244.1.2, on dev br-8a4282f8e01f
[ +0.000005] ll header: 00000000: 02 42 c9 a7 cc 27 02 42 c0 a8 31 02 08 00
[ +5.003651] IPv4: martian source 10.244.0.179 from 10.244.1.2, on dev br-8a4282f8e01f
[ +0.000005] ll header: 00000000: 02 42 c9 a7 cc 27 02 42 c0 a8 31 02 08 00
[ +5.003907] IPv4: martian source 10.244.0.179 from 10.244.1.2, on dev br-8a4282f8e01f
[ +0.000007] ll header: 00000000: 02 42 c9 a7 cc 27 02 42 c0 a8 31 02 08 00
[Feb 2 22:44] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth9f128988
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 99 ac 32 f5 1d 08 06
[ +0.459116] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth9f128988
[ +0.141644] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth7c2d69cd
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 2a 43 93 8b c0 2f 08 06
[Feb 2 22:45] IPv4: martian source 10.244.1.2 from 10.244.1.2, on dev veth2edc7f81
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 7a ae 8a 7b c5 e7 08 06
[Feb 2 22:47] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth50a8d75b
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e a7 5f 8c 75 fb 08 06
[ +0.334257] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth09c5e587
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 69 f8 a7 97 c4 08 06
[Feb 2 22:48] IPv4: martian source 10.244.1.2 from 10.244.1.2, on dev veth7bc368b2
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 e6 b6 4f b9 bc 08 06
[Feb 2 22:50] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth31321d0b
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff de 6a 77 ab 7b b0 08 06
[Feb 2 22:52] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethfa00bd3f
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 ec 4e 28 0d d3 08 06
[Feb 2 22:55] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth20f60c22
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 47 93 f8 dd 52 08 06
*
* ==> etcd [0f277f8150b5db7f0e37fb80c81e0ffc1e807e0aee52c8ea102ccea7b763432d] <==
* {"level":"info","ts":"2022-02-02T23:01:43.514Z","caller":"traceutil/trace.go:171","msg":"trace[1386229110] linearizableReadLoop","detail":"{readStateIndex:578; appliedIndex:577; }","duration":"540.903159ms","start":"2022-02-02T23:01:42.973Z","end":"2022-02-02T23:01:43.513Z","steps":["trace[1386229110] 'read index received' (duration: 414.921779ms)","trace[1386229110] 'applied index is now lower than readState.Index' (duration: 125.98002ms)"],"step_count":2}
{"level":"info","ts":"2022-02-02T23:01:43.514Z","caller":"traceutil/trace.go:171","msg":"trace[33138573] transaction","detail":"{read_only:false; response_revision:526; number_of_response:1; }","duration":"695.152916ms","start":"2022-02-02T23:01:42.818Z","end":"2022-02-02T23:01:43.514Z","steps":["trace[33138573] 'process raft request' (duration: 569.1727ms)","trace[33138573] 'compare' (duration: 125.607775ms)"],"step_count":2}
{"level":"warn","ts":"2022-02-02T23:01:43.514Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-02-02T23:01:42.818Z","time spent":"695.209968ms","remote":"127.0.0.1:60326","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.67.2\" mod_revision:524 > success:<request_put:<key:\"/registry/masterleases/192.168.67.2\" value_size:67 lease:2289938284297612558 >> failure:<request_range:<key:\"/registry/masterleases/192.168.67.2\" > >"}
{"level":"warn","ts":"2022-02-02T23:01:43.514Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"541.166801ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-02-02T23:01:43.514Z","caller":"traceutil/trace.go:171","msg":"trace[1513657706] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; response_count:0; response_revision:526; }","duration":"541.244827ms","start":"2022-02-02T23:01:42.973Z","end":"2022-02-02T23:01:43.514Z","steps":["trace[1513657706] 'agreement among raft nodes before linearized reading' (duration: 541.086841ms)"],"step_count":1}
{"level":"warn","ts":"2022-02-02T23:01:43.514Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-02-02T23:01:42.973Z","time spent":"541.301063ms","remote":"127.0.0.1:60464","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":29,"request content":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true "}
{"level":"info","ts":"2022-02-02T23:01:43.733Z","caller":"traceutil/trace.go:171","msg":"trace[468956564] linearizableReadLoop","detail":"{readStateIndex:578; appliedIndex:578; }","duration":"219.573338ms","start":"2022-02-02T23:01:43.514Z","end":"2022-02-02T23:01:43.733Z","steps":["trace[468956564] 'read index received' (duration: 219.561138ms)","trace[468956564] 'applied index is now lower than readState.Index' (duration: 10.783µs)"],"step_count":2}
{"level":"warn","ts":"2022-02-02T23:01:43.763Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"355.741054ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" count_only:true ","response":"range_response_count:0 size:7"}
{"level":"warn","ts":"2022-02-02T23:01:43.763Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"647.944794ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-02-02T23:01:43.763Z","caller":"traceutil/trace.go:171","msg":"trace[1389431640] range","detail":"{range_begin:/registry/deployments/; range_end:/registry/deployments0; response_count:0; response_revision:526; }","duration":"355.843887ms","start":"2022-02-02T23:01:43.407Z","end":"2022-02-02T23:01:43.763Z","steps":["trace[1389431640] 'agreement among raft nodes before linearized reading' (duration: 326.385688ms)","trace[1389431640] 'count revisions from in-memory index tree' (duration: 29.317574ms)"],"step_count":2}
{"level":"warn","ts":"2022-02-02T23:01:43.763Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-02-02T23:01:43.407Z","time spent":"355.904581ms","remote":"127.0.0.1:60546","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":1,"response size":31,"request content":"key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" count_only:true "}
{"level":"warn","ts":"2022-02-02T23:01:43.763Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"455.34688ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/offline-containerd-20220202225402-591014\" ","response":"range_response_count:1 size:4987"}
{"level":"info","ts":"2022-02-02T23:01:43.763Z","caller":"traceutil/trace.go:171","msg":"trace[720021103] range","detail":"{range_begin:/registry/minions/offline-containerd-20220202225402-591014; range_end:; response_count:1; response_revision:526; }","duration":"455.37707ms","start":"2022-02-02T23:01:43.307Z","end":"2022-02-02T23:01:43.763Z","steps":["trace[720021103] 'agreement among raft nodes before linearized reading' (duration: 425.824052ms)","trace[720021103] 'range keys from in-memory index tree' (duration: 29.478691ms)"],"step_count":2}
{"level":"warn","ts":"2022-02-02T23:01:43.763Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"247.177552ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"}
{"level":"info","ts":"2022-02-02T23:01:43.763Z","caller":"traceutil/trace.go:171","msg":"trace[821912183] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:526; }","duration":"648.02987ms","start":"2022-02-02T23:01:43.115Z","end":"2022-02-02T23:01:43.763Z","steps":["trace[821912183] 'agreement among raft nodes before linearized reading' (duration: 618.638599ms)","trace[821912183] 'range keys from in-memory index tree' (duration: 29.292672ms)"],"step_count":2}
{"level":"info","ts":"2022-02-02T23:01:43.763Z","caller":"traceutil/trace.go:171","msg":"trace[928754499] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:526; }","duration":"247.205478ms","start":"2022-02-02T23:01:43.516Z","end":"2022-02-02T23:01:43.763Z","steps":["trace[928754499] 'agreement among raft nodes before linearized reading' (duration: 217.713911ms)","trace[928754499] 'range keys from in-memory index tree' (duration: 29.441172ms)"],"step_count":2}
{"level":"warn","ts":"2022-02-02T23:01:43.763Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-02-02T23:01:43.307Z","time spent":"455.420485ms","remote":"127.0.0.1:60376","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":5011,"request content":"key:\"/registry/minions/offline-containerd-20220202225402-591014\" "}
{"level":"warn","ts":"2022-02-02T23:01:43.763Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"420.55544ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-02-02T23:01:43.763Z","caller":"traceutil/trace.go:171","msg":"trace[1205669279] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:526; }","duration":"420.820175ms","start":"2022-02-02T23:01:43.342Z","end":"2022-02-02T23:01:43.763Z","steps":["trace[1205669279] 'agreement among raft nodes before linearized reading' (duration: 391.101211ms)","trace[1205669279] 'range keys from in-memory index tree' (duration: 29.440357ms)"],"step_count":2}
{"level":"warn","ts":"2022-02-02T23:01:43.763Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-02-02T23:01:43.115Z","time spent":"648.163706ms","remote":"127.0.0.1:60390","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
{"level":"warn","ts":"2022-02-02T23:01:53.104Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"250.980967ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"}
{"level":"info","ts":"2022-02-02T23:01:53.104Z","caller":"traceutil/trace.go:171","msg":"trace[1238885548] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:535; }","duration":"251.088867ms","start":"2022-02-02T23:01:52.853Z","end":"2022-02-02T23:01:53.104Z","steps":["trace[1238885548] 'agreement among raft nodes before linearized reading' (duration: 85.119057ms)","trace[1238885548] 'range keys from in-memory index tree' (duration: 165.800848ms)"],"step_count":2}
{"level":"info","ts":"2022-02-02T23:02:29.450Z","caller":"traceutil/trace.go:171","msg":"trace[1184670634] linearizableReadLoop","detail":"{readStateIndex:606; appliedIndex:606; }","duration":"142.840267ms","start":"2022-02-02T23:02:29.307Z","end":"2022-02-02T23:02:29.450Z","steps":["trace[1184670634] 'read index received' (duration: 142.828812ms)","trace[1184670634] 'applied index is now lower than readState.Index' (duration: 10.22µs)"],"step_count":2}
{"level":"warn","ts":"2022-02-02T23:02:29.539Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"232.009837ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/offline-containerd-20220202225402-591014\" ","response":"range_response_count:1 size:4987"}
{"level":"info","ts":"2022-02-02T23:02:29.539Z","caller":"traceutil/trace.go:171","msg":"trace[358467332] range","detail":"{range_begin:/registry/minions/offline-containerd-20220202225402-591014; range_end:; response_count:1; response_revision:544; }","duration":"232.118063ms","start":"2022-02-02T23:02:29.307Z","end":"2022-02-02T23:02:29.539Z","steps":["trace[358467332] 'agreement among raft nodes before linearized reading' (duration: 143.049419ms)","trace[358467332] 'range keys from in-memory index tree' (duration: 88.920884ms)"],"step_count":2}
*
* ==> kernel <==
* 23:02:51 up 6:45, 0 users, load average: 6.81, 4.32, 2.31
Linux offline-containerd-20220202225402-591014 5.11.0-1029-gcp #33~20.04.3-Ubuntu SMP Tue Jan 18 12:03:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.2 LTS"
*
* ==> kube-apiserver [e0e28a37533699f0a58d46a825b32037217b0e8a81ef7311a774fd750c5ffc4b] <==
* I0202 22:58:32.120221 1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
I0202 22:58:32.126184 1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
I0202 22:58:32.126225 1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
I0202 22:58:32.655294 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0202 22:58:32.705152 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0202 22:58:32.818776 1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
W0202 22:58:32.826120 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
I0202 22:58:32.827624 1 controller.go:611] quota admission added evaluator for: endpoints
I0202 22:58:32.832452 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0202 22:58:33.323029 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0202 22:58:34.435905 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0202 22:58:34.451733 1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
I0202 22:58:34.471806 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0202 22:58:34.805625 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0202 22:58:35.651700 1 trace.go:205] Trace[561151792]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/pod-garbage-collector,user-agent:kube-controller-manager/v1.23.2 (linux/amd64) kubernetes/9d14243/kube-controller-manager,audit-id:acfc5593-29d0-4efb-8106-3238f5af806d,client:192.168.67.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (02-Feb-2022 22:58:35.141) (total time: 509ms):
Trace[561151792]: [509.810916ms] [509.810916ms] END
I0202 22:58:37.653661 1 trace.go:205] Trace[1397254885]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/horizontal-pod-autoscaler,user-agent:kube-controller-manager/v1.23.2 (linux/amd64) kubernetes/9d14243/tokens-controller,audit-id:e42912a2-0362-4fcf-af0b-dcb2a2a9c0c6,client:192.168.67.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (02-Feb-2022 22:58:37.048) (total time: 604ms):
Trace[1397254885]: ---"About to write a response" 604ms (22:58:37.653)
Trace[1397254885]: [604.831339ms] [604.831339ms] END
I0202 22:58:48.224843 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
I0202 22:58:48.225131 1 controller.go:611] quota admission added evaluator for: replicasets.apps
I0202 22:58:54.722917 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
I0202 23:01:43.514804 1 trace.go:205] Trace[2003737648]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (02-Feb-2022 23:01:42.816) (total time: 697ms):
Trace[2003737648]: ---"Transaction committed" 696ms (23:01:43.514)
Trace[2003737648]: [697.959855ms] [697.959855ms] END
*
* ==> kube-controller-manager [2c0abeaf6dcea6767ef83bb10901ca9211773b51b9ba96e363f81bce143909af] <==
* I0202 22:58:48.303503 1 shared_informer.go:247] Caches are synced for GC
I0202 22:58:48.303803 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
I0202 22:58:48.303886 1 shared_informer.go:247] Caches are synced for job
I0202 22:58:48.304021 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I0202 22:58:48.305817 1 shared_informer.go:247] Caches are synced for crt configmap
I0202 22:58:48.311925 1 event.go:294] "Event occurred" object="kube-system/etcd-offline-containerd-20220202225402-591014" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0202 22:58:48.312157 1 event.go:294] "Event occurred" object="kube-system/kube-apiserver-offline-containerd-20220202225402-591014" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0202 22:58:48.312353 1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager-offline-containerd-20220202225402-591014" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0202 22:58:48.312402 1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-cmk6t"
I0202 22:58:48.312423 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-jp747"
I0202 22:58:48.318811 1 event.go:294] "Event occurred" object="kube-system/kube-scheduler-offline-containerd-20220202225402-591014" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0202 22:58:48.320761 1 shared_informer.go:247] Caches are synced for cronjob
I0202 22:58:48.330291 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-rvc7m"
I0202 22:58:48.404056 1 shared_informer.go:247] Caches are synced for persistent volume
I0202 22:58:48.404266 1 shared_informer.go:247] Caches are synced for resource quota
I0202 22:58:48.404292 1 shared_informer.go:247] Caches are synced for resource quota
I0202 22:58:48.425550 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-s7s2k"
I0202 22:58:48.426181 1 shared_informer.go:247] Caches are synced for service account
I0202 22:58:48.503081 1 shared_informer.go:247] Caches are synced for attach detach
I0202 22:58:48.503370 1 shared_informer.go:247] Caches are synced for namespace
I0202 22:58:48.635942 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
I0202 22:58:48.644176 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-s7s2k"
I0202 22:58:48.917555 1 shared_informer.go:247] Caches are synced for garbage collector
I0202 22:58:48.917588 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0202 22:58:48.923025 1 shared_informer.go:247] Caches are synced for garbage collector
*
* ==> kube-proxy [9fd761cf0954478fa805325c66b97f2c906b3cfb9df510895e607c58fc7721cd] <==
* I0202 22:58:54.685169 1 node.go:163] Successfully retrieved node IP: 192.168.67.2
I0202 22:58:54.685251 1 server_others.go:138] "Detected node IP" address="192.168.67.2"
I0202 22:58:54.685291 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0202 22:58:54.715898 1 server_others.go:206] "Using iptables Proxier"
I0202 22:58:54.715939 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0202 22:58:54.715972 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0202 22:58:54.715999 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0202 22:58:54.718722 1 server.go:656] "Version info" version="v1.23.2"
I0202 22:58:54.719685 1 config.go:317] "Starting service config controller"
I0202 22:58:54.720627 1 shared_informer.go:240] Waiting for caches to sync for service config
I0202 22:58:54.720186 1 config.go:226] "Starting endpoint slice config controller"
I0202 22:58:54.720705 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0202 22:58:54.821820 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0202 22:58:54.821849 1 shared_informer.go:247] Caches are synced for service config
*
* ==> kube-scheduler [317793a76681b3aafbe66a081ebada97b673cc07b644f8724daf71b79fa8e37f] <==
* W0202 22:58:31.236113 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0202 22:58:31.236809 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0202 22:58:31.237161 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0202 22:58:31.237191 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0202 22:58:31.237164 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0202 22:58:31.237263 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0202 22:58:31.237279 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0202 22:58:31.237306 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0202 22:58:31.237485 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0202 22:58:31.237514 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0202 22:58:31.238054 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0202 22:58:31.238286 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0202 22:58:32.068572 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0202 22:58:32.068618 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0202 22:58:32.360069 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0202 22:58:32.360107 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0202 22:58:32.408699 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0202 22:58:32.408743 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0202 22:58:32.422632 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0202 22:58:32.422702 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0202 22:58:32.442644 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0202 22:58:32.442922 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0202 22:58:32.463451 1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0202 22:58:32.463490 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0202 22:58:34.628069 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Logs begin at Wed 2022-02-02 22:58:08 UTC, end at Wed 2022-02-02 23:02:51 UTC. --
Feb 02 23:00:55 offline-containerd-20220202225402-591014 kubelet[1524]: E0202 23:00:55.187751 1524 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 02 23:01:00 offline-containerd-20220202225402-591014 kubelet[1524]: E0202 23:01:00.189039 1524 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 02 23:01:05 offline-containerd-20220202225402-591014 kubelet[1524]: E0202 23:01:05.190042 1524 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 02 23:01:10 offline-containerd-20220202225402-591014 kubelet[1524]: E0202 23:01:10.191266 1524 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 02 23:01:15 offline-containerd-20220202225402-591014 kubelet[1524]: E0202 23:01:15.192588 1524 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 02 23:01:20 offline-containerd-20220202225402-591014 kubelet[1524]: E0202 23:01:20.194278 1524 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 02 23:01:25 offline-containerd-20220202225402-591014 kubelet[1524]: E0202 23:01:25.195378 1524 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 02 23:01:30 offline-containerd-20220202225402-591014 kubelet[1524]: E0202 23:01:30.196016 1524 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 02 23:01:35 offline-containerd-20220202225402-591014 kubelet[1524]: E0202 23:01:35.197240 1524 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 02 23:01:40 offline-containerd-20220202225402-591014 kubelet[1524]: E0202 23:01:40.198258 1524 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 02 23:01:45 offline-containerd-20220202225402-591014 kubelet[1524]: E0202 23:01:45.199125 1524 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 02 23:01:47 offline-containerd-20220202225402-591014 kubelet[1524]: I0202 23:01:47.311918 1524 scope.go:110] "RemoveContainer" containerID="ed251c3b5772996e4ce9060d71b240c7a57c1d847dd41a2d4e27a5f6277fe65c"
Feb 02 23:01:50 offline-containerd-20220202225402-591014 kubelet[1524]: E0202 23:01:50.200722 1524 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 02 23:01:55 offline-containerd-20220202225402-591014 kubelet[1524]: E0202 23:01:55.201683 1524 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 02 23:02:00 offline-containerd-20220202225402-591014 kubelet[1524]: E0202 23:02:00.202603 1524 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 02 23:02:05 offline-containerd-20220202225402-591014 kubelet[1524]: E0202 23:02:05.204406 1524 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 02 23:02:10 offline-containerd-20220202225402-591014 kubelet[1524]: E0202 23:02:10.205768 1524 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 02 23:02:15 offline-containerd-20220202225402-591014 kubelet[1524]: E0202 23:02:15.207151 1524 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 02 23:02:20 offline-containerd-20220202225402-591014 kubelet[1524]: E0202 23:02:20.208506 1524 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 02 23:02:25 offline-containerd-20220202225402-591014 kubelet[1524]: E0202 23:02:25.210177 1524 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 02 23:02:30 offline-containerd-20220202225402-591014 kubelet[1524]: E0202 23:02:30.211522 1524 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 02 23:02:35 offline-containerd-20220202225402-591014 kubelet[1524]: E0202 23:02:35.212971 1524 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 02 23:02:40 offline-containerd-20220202225402-591014 kubelet[1524]: E0202 23:02:40.214344 1524 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 02 23:02:45 offline-containerd-20220202225402-591014 kubelet[1524]: E0202 23:02:45.215280 1524 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 02 23:02:50 offline-containerd-20220202225402-591014 kubelet[1524]: E0202 23:02:50.216043 1524 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
-- /stdout --
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p offline-containerd-20220202225402-591014 -n offline-containerd-20220202225402-591014
helpers_test.go:262: (dbg) Run: kubectl --context offline-containerd-20220202225402-591014 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: coredns-64897985d-rvc7m storage-provisioner
helpers_test.go:273: ======> post-mortem[TestOffline]: describe non-running pods <======
helpers_test.go:276: (dbg) Run: kubectl --context offline-containerd-20220202225402-591014 describe pod coredns-64897985d-rvc7m storage-provisioner
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context offline-containerd-20220202225402-591014 describe pod coredns-64897985d-rvc7m storage-provisioner: exit status 1 (53.641504ms)
** stderr **
Error from server (NotFound): pods "coredns-64897985d-rvc7m" not found
Error from server (NotFound): pods "storage-provisioner" not found
** /stderr **
helpers_test.go:278: kubectl --context offline-containerd-20220202225402-591014 describe pod coredns-64897985d-rvc7m storage-provisioner: exit status 1
helpers_test.go:176: Cleaning up "offline-containerd-20220202225402-591014" profile ...
helpers_test.go:179: (dbg) Run: out/minikube-linux-amd64 delete -p offline-containerd-20220202225402-591014
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-20220202225402-591014: (2.975692923s)
--- FAIL: TestOffline (532.49s)