Found 8 cores, limiting parallelism with --test.parallel=4
=== RUN TestDownloadOnly
=== RUN TestDownloadOnly/v1.14.0
=== RUN TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:69: (dbg) Run: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210507214926-391940 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=containerd --driver=docker --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210507214926-391940 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=containerd --driver=docker --container-runtime=containerd: (8.962092871s)
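Each "(dbg) Run:" / "(dbg) Done:" pair in this log is the test harness shelling out to the freshly built minikube binary and timing the invocation. A minimal Go sketch of that pattern; the helper name runCmd and its output format are illustrative assumptions, not minikube's actual helpers_test.go code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runCmd mirrors the Run/Done logging seen in this log: print the command,
// execute it, and print either the elapsed time or the non-zero exit status
// with the captured output.
func runCmd(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	fmt.Printf("(dbg) Run: %s\n", cmd.String())
	start := time.Now()
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("(dbg) Non-zero exit: %s: %v (%s)\n%s",
			cmd.String(), err, time.Since(start), out)
	}
	fmt.Printf("(dbg) Done: %s: (%s)\n", cmd.String(), time.Since(start))
	return nil
}

func main() {
	// Hypothetical invocation; the real tests pass the profile name and
	// Kubernetes version per subtest.
	if err := runCmd("out/minikube-linux-amd64", "start", "-o=json",
		"--download-only", "-p", "example-profile", "--force",
		"--kubernetes-version=v1.14.0", "--container-runtime=containerd",
		"--driver=docker"); err != nil {
		fmt.Println(err)
	}
}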
=== RUN TestDownloadOnly/v1.14.0/preload-exists
=== RUN TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
=== RUN TestDownloadOnly/v1.14.0/binaries
=== RUN TestDownloadOnly/v1.14.0/kubectl
aaa_download_only_test.go:149: Test for darwin and windows
=== RUN TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:166: (dbg) Run: out/minikube-linux-amd64 logs -p download-only-20210507214926-391940
aaa_download_only_test.go:166: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210507214926-391940: exit status 85 (76.081594ms)
-- stdout --
*
* ==> Audit <==
* |---------|------|---------|------|---------|------------|----------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------|---------|------|---------|------------|----------|
|---------|------|---------|------|---------|------------|----------|
*
* ==> Last Start <==
* Log file created at: 2021/05/07 21:49:26
Running on machine: debian-jenkins-agent-11
Binary: Built with gc go1.16.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0507 21:49:26.240597 391953 out.go:291] Setting OutFile to fd 1 ...
I0507 21:49:26.240722 391953 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 21:49:26.240730 391953 out.go:304] Setting ErrFile to fd 2...
I0507 21:49:26.240733 391953 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 21:49:26.240821 391953 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/bin
W0507 21:49:26.240919 391953 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/config/config.json: no such file or directory
I0507 21:49:26.241144 391953 out.go:298] Setting JSON to true
I0507 21:49:26.275812 391953 start.go:108] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":8934,"bootTime":1620415232,"procs":148,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
I0507 21:49:26.275917 391953 start.go:118] virtualization: kvm guest
I0507 21:49:26.279810 391953 notify.go:169] Checking for updates...
W0507 21:49:26.280053 391953 out.go:424] no arguments passed for "minikube skips various validations when --force is supplied; this may lead to unexpected behavior\n" - returning raw string
I0507 21:49:26.281807 391953 driver.go:322] Setting default libvirt URI to qemu:///system
I0507 21:49:26.326522 391953 docker.go:119] docker version: linux-19.03.15
I0507 21:49:26.326618 391953 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0507 21:49:26.402784 391953 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:50 SystemTime:2021-05-07 21:49:26.359237229 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}}
I0507 21:49:26.402858 391953 docker.go:225] overlay module found
I0507 21:49:26.405360 391953 start.go:276] selected driver: docker
I0507 21:49:26.405375 391953 start.go:718] validating driver "docker" against <nil>
I0507 21:49:26.405820 391953 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0507 21:49:26.482619 391953 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:50 SystemTime:2021-05-07 21:49:26.439257002 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}}
I0507 21:49:26.482738 391953 start_flags.go:259] no existing cluster config was found, will generate one from the flags
I0507 21:49:26.483226 391953 start_flags.go:314] Using suggested 8000MB memory alloc based on sys=32179MB, container=32179MB
I0507 21:49:26.483367 391953 start_flags.go:715] Wait components to verify : map[apiserver:true system_pods:true]
I0507 21:49:26.483429 391953 cni.go:93] Creating CNI manager for ""
I0507 21:49:26.483497 391953 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
I0507 21:49:26.483556 391953 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I0507 21:49:26.483568 391953 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I0507 21:49:26.483577 391953 start_flags.go:268] Found "CNI" CNI - setting NetworkPlugin=cni
I0507 21:49:26.483587 391953 start_flags.go:273] config: {Name:download-only-20210507214926-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210507214926-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0507 21:49:26.486186 391953 cache.go:111] Beginning downloading kic base image for docker with containerd
W0507 21:49:26.486209 391953 out.go:424] no arguments passed for "Pulling base image ...\n" - returning raw string
I0507 21:49:26.487972 391953 preload.go:98] Checking if preload exists for k8s version v1.14.0 and runtime containerd
I0507 21:49:26.488171 391953 image.go:116] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory
I0507 21:49:26.488236 391953 cache.go:134] Downloading gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e to local cache
I0507 21:49:26.488299 391953 image.go:192] Writing gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e to local cache
I0507 21:49:26.531021 391953 preload.go:123] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.14.0-containerd-overlay2-amd64.tar.lz4
I0507 21:49:26.531072 391953 cache.go:54] Caching tarball of preloaded images
I0507 21:49:26.531121 391953 preload.go:98] Checking if preload exists for k8s version v1.14.0 and runtime containerd
I0507 21:49:26.569691 391953 preload.go:123] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.14.0-containerd-overlay2-amd64.tar.lz4
I0507 21:49:26.572004 391953 preload.go:196] getting checksum for preloaded-images-k8s-v10-v1.14.0-containerd-overlay2-amd64.tar.lz4 ...
I0507 21:49:26.630245 391953 download.go:78] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.14.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:f0bc4335eb1ef39b3e6763fea0899135 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.14.0-containerd-overlay2-amd64.tar.lz4
*
* The control plane node "" does not exist.
To start a cluster, run: "minikube start -p download-only-20210507214926-391940"
-- /stdout --
aaa_download_only_test.go:167: minikube logs failed with error: exit status 85
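The large "docker info:" entries in the log above come from minikube probing the Docker daemon with a JSON template before settling on the docker driver. A sketch of that probe in Go, decoding only a handful of the fields shown; the dockerInfo struct is an illustration, not minikube's actual info type:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo covers a few of the fields visible in the dumps above; docker's
// `{{json .}}` template emits many more.
type dockerInfo struct {
	Driver          string `json:"Driver"`
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
	OperatingSystem string `json:"OperatingSystem"`
	ServerVersion   string `json:"ServerVersion"`
}

func main() {
	// Same invocation the log records via cli_runner.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println("docker probe failed:", err)
		return
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("driver=%s ncpu=%d mem=%d os=%q version=%s\n",
		info.Driver, info.NCPU, info.MemTotal, info.OperatingSystem, info.ServerVersion)
}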
=== RUN TestDownloadOnly/v1.20.2
=== RUN TestDownloadOnly/v1.20.2/json-events
aaa_download_only_test.go:69: (dbg) Run: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210507214926-391940 --force --alsologtostderr --kubernetes-version=v1.20.2 --container-runtime=containerd --driver=docker --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210507214926-391940 --force --alsologtostderr --kubernetes-version=v1.20.2 --container-runtime=containerd --driver=docker --container-runtime=containerd: (9.173878218s)
=== RUN TestDownloadOnly/v1.20.2/preload-exists
=== RUN TestDownloadOnly/v1.20.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
=== RUN TestDownloadOnly/v1.20.2/binaries
=== RUN TestDownloadOnly/v1.20.2/kubectl
aaa_download_only_test.go:149: Test for darwin and windows
=== RUN TestDownloadOnly/v1.20.2/LogsDuration
aaa_download_only_test.go:166: (dbg) Run: out/minikube-linux-amd64 logs -p download-only-20210507214926-391940
aaa_download_only_test.go:166: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210507214926-391940: exit status 85 (74.016923ms)
-- stdout --
*
* ==> Audit <==
* |---------|------|---------|------|---------|------------|----------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------|---------|------|---------|------------|----------|
|---------|------|---------|------|---------|------------|----------|
*
* ==> Last Start <==
* Log file created at: 2021/05/07 21:49:35
Running on machine: debian-jenkins-agent-11
Binary: Built with gc go1.16.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0507 21:49:35.278119 392079 out.go:291] Setting OutFile to fd 1 ...
I0507 21:49:35.278190 392079 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 21:49:35.278197 392079 out.go:304] Setting ErrFile to fd 2...
I0507 21:49:35.278201 392079 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 21:49:35.278287 392079 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/bin
W0507 21:49:35.278399 392079 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/config/config.json: no such file or directory
I0507 21:49:35.278521 392079 out.go:298] Setting JSON to true
I0507 21:49:35.312740 392079 start.go:108] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":8943,"bootTime":1620415232,"procs":148,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
I0507 21:49:35.312810 392079 start.go:118] virtualization: kvm guest
I0507 21:49:35.316111 392079 notify.go:169] Checking for updates...
W0507 21:49:35.316343 392079 out.go:424] no arguments passed for "minikube skips various validations when --force is supplied; this may lead to unexpected behavior\n" - returning raw string
W0507 21:49:35.318252 392079 start.go:628] api.Load failed for download-only-20210507214926-391940: filestore "download-only-20210507214926-391940": Docker machine "download-only-20210507214926-391940" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
I0507 21:49:35.318301 392079 driver.go:322] Setting default libvirt URI to qemu:///system
W0507 21:49:35.318330 392079 start.go:628] api.Load failed for download-only-20210507214926-391940: filestore "download-only-20210507214926-391940": Docker machine "download-only-20210507214926-391940" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
I0507 21:49:35.363044 392079 docker.go:119] docker version: linux-19.03.15
I0507 21:49:35.363135 392079 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0507 21:49:35.438143 392079 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:50 SystemTime:2021-05-07 21:49:35.395127677 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}}
I0507 21:49:35.438233 392079 docker.go:225] overlay module found
I0507 21:49:35.440463 392079 start.go:276] selected driver: docker
I0507 21:49:35.440480 392079 start.go:718] validating driver "docker" against &{Name:download-only-20210507214926-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210507214926-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0507 21:49:35.440948 392079 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0507 21:49:35.514985 392079 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:50 SystemTime:2021-05-07 21:49:35.47337244 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}}
I0507 21:49:35.515489 392079 cni.go:93] Creating CNI manager for ""
I0507 21:49:35.515538 392079 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
I0507 21:49:35.515555 392079 start_flags.go:273] config: {Name:download-only-20210507214926-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:download-only-20210507214926-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0507 21:49:35.517631 392079 cache.go:111] Beginning downloading kic base image for docker with containerd
W0507 21:49:35.517648 392079 out.go:424] no arguments passed for "Pulling base image ...\n" - returning raw string
I0507 21:49:35.519159 392079 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
I0507 21:49:35.519228 392079 image.go:116] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory
I0507 21:49:35.519262 392079 image.go:119] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory, skipping pull
I0507 21:49:35.519275 392079 cache.go:131] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in cache, skipping pull
I0507 21:49:35.563202 392079 preload.go:123] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4
I0507 21:49:35.563219 392079 cache.go:54] Caching tarball of preloaded images
I0507 21:49:35.563242 392079 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
I0507 21:49:35.607628 392079 preload.go:123] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4
I0507 21:49:35.609478 392079 preload.go:196] getting checksum for preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4 ...
I0507 21:49:35.666680 392079 download.go:78] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:02e256ea4a3f6e9463b63c57de8e1682 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4
*
* The control plane node "" does not exist.
To start a cluster, run: "minikube start -p download-only-20210507214926-391940"
-- /stdout --
aaa_download_only_test.go:167: minikube logs failed with error: exit status 85
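Both subtests above fetch the preload tarball from a URL carrying a "?checksum=md5:<hex>" hint and then check the downloaded file against it (the "getting checksum" / "saving checksum" / "verifying checksum" entries). A plain-stdlib Go sketch of that verification step, using a hypothetical local path; minikube's actual download code differs:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 streams the file through an MD5 hash and compares the hex digest
// against the expected value from the ?checksum=md5:<hex> hint.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// Hypothetical path; the log shows the real cached tarball location.
	err := verifyMD5("preloaded-images.tar.lz4", "02e256ea4a3f6e9463b63c57de8e1682")
	fmt.Println("verify:", err)
}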
=== RUN TestDownloadOnly/v1.22.0-alpha.1
=== RUN TestDownloadOnly/v1.22.0-alpha.1/json-events
aaa_download_only_test.go:69: (dbg) Run: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210507214926-391940 --force --alsologtostderr --kubernetes-version=v1.22.0-alpha.1 --container-runtime=containerd --driver=docker --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210507214926-391940 --force --alsologtostderr --kubernetes-version=v1.22.0-alpha.1 --container-runtime=containerd --driver=docker --container-runtime=containerd: (17.277133475s)
=== RUN TestDownloadOnly/v1.22.0-alpha.1/preload-exists
=== RUN TestDownloadOnly/v1.22.0-alpha.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
=== RUN TestDownloadOnly/v1.22.0-alpha.1/binaries
=== RUN TestDownloadOnly/v1.22.0-alpha.1/kubectl
aaa_download_only_test.go:149: Test for darwin and windows
=== RUN TestDownloadOnly/v1.22.0-alpha.1/LogsDuration
aaa_download_only_test.go:166: (dbg) Run: out/minikube-linux-amd64 logs -p download-only-20210507214926-391940
aaa_download_only_test.go:166: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210507214926-391940: exit status 85 (73.621095ms)
-- stdout --
*
* ==> Audit <==
* |---------|------|---------|------|---------|------------|----------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------|---------|------|---------|------------|----------|
|---------|------|---------|------|---------|------------|----------|
*
* ==> Last Start <==
* Log file created at: 2021/05/07 21:49:44
Running on machine: debian-jenkins-agent-11
Binary: Built with gc go1.16.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0507 21:49:44.526889 392207 out.go:291] Setting OutFile to fd 1 ...
I0507 21:49:44.527063 392207 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 21:49:44.527072 392207 out.go:304] Setting ErrFile to fd 2...
I0507 21:49:44.527075 392207 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 21:49:44.527153 392207 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/bin
W0507 21:49:44.527252 392207 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/config/config.json: no such file or directory
I0507 21:49:44.527345 392207 out.go:298] Setting JSON to true
I0507 21:49:44.561343 392207 start.go:108] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":8952,"bootTime":1620415232,"procs":148,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
I0507 21:49:44.561442 392207 start.go:118] virtualization: kvm guest
I0507 21:49:44.564145 392207 notify.go:169] Checking for updates...
W0507 21:49:44.564405 392207 out.go:424] no arguments passed for "minikube skips various validations when --force is supplied; this may lead to unexpected behavior\n" - returning raw string
W0507 21:49:44.566338 392207 start.go:628] api.Load failed for download-only-20210507214926-391940: filestore "download-only-20210507214926-391940": Docker machine "download-only-20210507214926-391940" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
I0507 21:49:44.566426 392207 driver.go:322] Setting default libvirt URI to qemu:///system
W0507 21:49:44.566472 392207 start.go:628] api.Load failed for download-only-20210507214926-391940: filestore "download-only-20210507214926-391940": Docker machine "download-only-20210507214926-391940" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
I0507 21:49:44.609065 392207 docker.go:119] docker version: linux-19.03.15
I0507 21:49:44.609161 392207 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0507 21:49:44.683312 392207 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:50 SystemTime:2021-05-07 21:49:44.640628189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}}
I0507 21:49:44.683387 392207 docker.go:225] overlay module found
I0507 21:49:44.685776 392207 start.go:276] selected driver: docker
I0507 21:49:44.685794 392207 start.go:718] validating driver "docker" against &{Name:download-only-20210507214926-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:download-only-20210507214926-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0507 21:49:44.686259 392207 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0507 21:49:44.760085 392207 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:50 SystemTime:2021-05-07 21:49:44.718283708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}}
I0507 21:49:44.760807 392207 cni.go:93] Creating CNI manager for ""
I0507 21:49:44.760829 392207 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
I0507 21:49:44.760844 392207 start_flags.go:273] config: {Name:download-only-20210507214926-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-alpha.1 ClusterName:download-only-20210507214926-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0507 21:49:44.763148 392207 cache.go:111] Beginning downloading kic base image for docker with containerd
W0507 21:49:44.763172 392207 out.go:424] no arguments passed for "Pulling base image ...\n" - returning raw string
I0507 21:49:44.764668 392207 preload.go:98] Checking if preload exists for k8s version v1.22.0-alpha.1 and runtime containerd
I0507 21:49:44.764716 392207 image.go:116] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory
I0507 21:49:44.764741 392207 image.go:119] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory, skipping pull
I0507 21:49:44.764749 392207 cache.go:131] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in cache, skipping pull
I0507 21:49:44.807870 392207 preload.go:123] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.22.0-alpha.1-containerd-overlay2-amd64.tar.lz4
I0507 21:49:44.807887 392207 cache.go:54] Caching tarball of preloaded images
I0507 21:49:44.807908 392207 preload.go:98] Checking if preload exists for k8s version v1.22.0-alpha.1 and runtime containerd
I0507 21:49:44.847881 392207 preload.go:123] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.22.0-alpha.1-containerd-overlay2-amd64.tar.lz4
I0507 21:49:44.850028 392207 preload.go:196] getting checksum for preloaded-images-k8s-v10-v1.22.0-alpha.1-containerd-overlay2-amd64.tar.lz4 ...
I0507 21:49:44.911406 392207 download.go:78] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.22.0-alpha.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:b27a383b22d0591a90cea87635e51b90 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.22.0-alpha.1-containerd-overlay2-amd64.tar.lz4
I0507 21:49:51.619340 392207 preload.go:206] saving checksum for preloaded-images-k8s-v10-v1.22.0-alpha.1-containerd-overlay2-amd64.tar.lz4 ...
I0507 21:49:58.548245 392207 preload.go:218] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.22.0-alpha.1-containerd-overlay2-amd64.tar.lz4 ...
*
* The control plane node "" does not exist.
To start a cluster, run: "minikube start -p download-only-20210507214926-391940"
-- /stdout --
aaa_download_only_test.go:167: minikube logs failed with error: exit status 85
=== RUN TestDownloadOnly/DeleteAll
aaa_download_only_test.go:184: (dbg) Run: out/minikube-linux-amd64 delete --all
aaa_download_only_test.go:184: (dbg) Done: out/minikube-linux-amd64 delete --all: (1.662944474s)
=== RUN TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:196: (dbg) Run: out/minikube-linux-amd64 delete -p download-only-20210507214926-391940
=== CONT TestDownloadOnly
helpers_test.go:171: Cleaning up "download-only-20210507214926-391940" profile ...
helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p download-only-20210507214926-391940
--- PASS: TestDownloadOnly (37.90s)
--- PASS: TestDownloadOnly/v1.14.0 (9.04s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (8.96s)
--- PASS: TestDownloadOnly/v1.14.0/preload-exists (0.00s)
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)
--- PASS: TestDownloadOnly/v1.14.0/binaries (0.00s)
--- SKIP: TestDownloadOnly/v1.14.0/kubectl (0.00s)
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.08s)
--- PASS: TestDownloadOnly/v1.20.2 (9.25s)
--- PASS: TestDownloadOnly/v1.20.2/json-events (9.17s)
--- PASS: TestDownloadOnly/v1.20.2/preload-exists (0.00s)
--- SKIP: TestDownloadOnly/v1.20.2/cached-images (0.00s)
--- PASS: TestDownloadOnly/v1.20.2/binaries (0.00s)
--- SKIP: TestDownloadOnly/v1.20.2/kubectl (0.00s)
--- PASS: TestDownloadOnly/v1.20.2/LogsDuration (0.07s)
--- PASS: TestDownloadOnly/v1.22.0-alpha.1 (17.35s)
--- PASS: TestDownloadOnly/v1.22.0-alpha.1/json-events (17.28s)
--- PASS: TestDownloadOnly/v1.22.0-alpha.1/preload-exists (0.00s)
--- SKIP: TestDownloadOnly/v1.22.0-alpha.1/cached-images (0.00s)
--- PASS: TestDownloadOnly/v1.22.0-alpha.1/binaries (0.00s)
--- SKIP: TestDownloadOnly/v1.22.0-alpha.1/kubectl (0.00s)
--- PASS: TestDownloadOnly/v1.22.0-alpha.1/LogsDuration (0.07s)
--- PASS: TestDownloadOnly/DeleteAll (1.66s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.30s)
=== RUN TestDownloadOnlyKic
aaa_download_only_test.go:221: (dbg) Run: out/minikube-linux-amd64 start --download-only -p download-docker-20210507215004-391940 --force --alsologtostderr --driver=docker --container-runtime=containerd
aaa_download_only_test.go:221: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20210507215004-391940 --force --alsologtostderr --driver=docker --container-runtime=containerd: (1.884670731s)
helpers_test.go:171: Cleaning up "download-docker-20210507215004-391940" profile ...
helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p download-docker-20210507215004-391940
--- PASS: TestDownloadOnlyKic (4.09s)
=== RUN TestOffline
=== PAUSE TestOffline
=== RUN TestAddons
addons_test.go:75: (dbg) Run: out/minikube-linux-amd64 start -p addons-20210507215008-391940 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --driver=docker --container-runtime=containerd --addons=ingress --addons=helm-tiller
addons_test.go:75: (dbg) Done: out/minikube-linux-amd64 start -p addons-20210507215008-391940 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --driver=docker --container-runtime=containerd --addons=ingress --addons=helm-tiller: (2m49.045456727s)
addons_test.go:84: (dbg) Run: out/minikube-linux-amd64 -p addons-20210507215008-391940 addons enable gcp-auth
addons_test.go:84: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-20210507215008-391940 addons enable gcp-auth: exit status 15 (68.121835ms)
-- stdout --

-- /stdout --
** stderr **
* Exiting due to MK_CREDENTIALS_NOT_NEEDED: It seems that you are running in GCE, which means authentication should work without the GCP Auth addon. If you would still like to authenticate using a credentials file, use the --force flag.

** /stderr **
addons_test.go:94: (dbg) Run: out/minikube-linux-amd64 -p addons-20210507215008-391940 addons enable gcp-auth --force
addons_test.go:94: (dbg) Done: out/minikube-linux-amd64 -p addons-20210507215008-391940 addons enable gcp-auth --force: (20.475812872s)
=== RUN TestAddons/parallel
=== RUN TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== RUN TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== RUN TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== RUN TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== RUN TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== RUN TestAddons/parallel/GCPAuth
=== PAUSE TestAddons/parallel/GCPAuth
=== CONT TestAddons/parallel/Registry
=== CONT TestAddons/parallel/Olm
addons_test.go:465: skipping olm test till this issue is fixed https://github.com/kubernetes/minikube/issues/11311
=== CONT TestAddons/parallel/GCPAuth
=== CONT TestAddons/parallel/MetricsServer
=== CONT TestAddons/parallel/GCPAuth
addons_test.go:632: (dbg) Run: kubectl --context addons-20210507215008-391940 create -f testdata/busybox.yaml
=== CONT TestAddons/parallel/HelmTiller
=== CONT TestAddons/parallel/Registry
addons_test.go:297: registry stabilized in 14.301018ms
=== CONT TestAddons/parallel/MetricsServer
addons_test.go:374: metrics-server stabilized in 14.661778ms
=== CONT TestAddons/parallel/HelmTiller
addons_test.go:423: tiller-deploy stabilized in 14.86943ms
=== CONT TestAddons/parallel/Registry
addons_test.go:299: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
=== CONT TestAddons/parallel/MetricsServer
addons_test.go:376: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
=== CONT TestAddons/parallel/HelmTiller
addons_test.go:425: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
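The "waiting 6m0s for pods matching ..." entries here and below come from a helper that polls the cluster until every pod matching a label selector reports Running, or a timeout expires. A Go sketch of that pattern via kubectl and jsonpath; waitForPods is a hypothetical helper, not the real helpers_test.go implementation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPods polls `kubectl get pods -l <selector>` until at least one pod
// exists and all matching pods report phase Running, or the timeout expires.
func waitForPods(kubeContext, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext, "get", "pods",
			"-n", ns, "-l", selector, "-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		if err == nil && len(phases) > 0 {
			allRunning := true
			for _, p := range phases {
				if p != "Running" {
					allRunning = false
					break
				}
			}
			if allRunning {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out after %s waiting for pods matching %q in namespace %q",
		timeout, selector, ns)
}

func main() {
	err := waitForPods("addons-20210507215008-391940", "kube-system", "app=helm", 6*time.Minute)
	fmt.Println("wait result:", err)
}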
=== CONT TestAddons/parallel/MetricsServer helpers_test.go:335: "metrics-server-7894db45f8-qf4dh" [4c32eac9-bdf8-4f04-8819-f21b3868130a] Running === CONT TestAddons/parallel/Registry helpers_test.go:335: "registry-dbwln" [f1184bbf-8eb7-4995-b123-d3a653788fe1] Running === CONT TestAddons/parallel/HelmTiller helpers_test.go:335: "tiller-deploy-7c86b7fbdf-ztctb" [de250d5c-e3a2-4cae-9ae0-2d7522ceceb3] Running === CONT TestAddons/parallel/GCPAuth addons_test.go:638: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ... helpers_test.go:335: "busybox" [eb32c375-34a5-4d1d-bfc8-01ce23948eb4] Pending helpers_test.go:335: "busybox" [eb32c375-34a5-4d1d-bfc8-01ce23948eb4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox]) helpers_test.go:335: "busybox" [eb32c375-34a5-4d1d-bfc8-01ce23948eb4] Running === CONT TestAddons/parallel/MetricsServer addons_test.go:376: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.014082277s === CONT TestAddons/parallel/Registry addons_test.go:299: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.014497608s === CONT TestAddons/parallel/MetricsServer addons_test.go:382: (dbg) Run: kubectl --context addons-20210507215008-391940 top pods -n kube-system === CONT TestAddons/parallel/HelmTiller addons_test.go:425: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.015240907s addons_test.go:440: (dbg) Run: kubectl --context addons-20210507215008-391940 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version === CONT TestAddons/parallel/Registry addons_test.go:302: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ... helpers_test.go:335: "registry-proxy-qj6bf" [0926320e-2a48-4649-adda-ff0e8a5c4c01] Running === CONT TestAddons/parallel/MetricsServer addons_test.go:399: (dbg) Run: out/minikube-linux-amd64 -p addons-20210507215008-391940 addons disable metrics-server --alsologtostderr -v=1 === CONT TestAddons/parallel/Ingress addons_test.go:158: (dbg) TestAddons/parallel/Ingress: waiting 12m0s for pods matching "app.kubernetes.io/name=ingress-nginx" in namespace "ingress-nginx" ... helpers_test.go:335: "ingress-nginx-admission-create-4qmw9" [85fd4547-9fae-40ef-8e05-9ff385e0ed84] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted addons_test.go:158: (dbg) TestAddons/parallel/Ingress: app.kubernetes.io/name=ingress-nginx healthy within 54.840652ms addons_test.go:165: (dbg) Run: kubectl --context addons-20210507215008-391940 replace --force -f testdata/nginx-ingv1beta.yaml addons_test.go:170: kubectl --context addons-20210507215008-391940 replace --force -f testdata/nginx-ingv1beta.yaml: unexpected stderr: Warning: networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress (may be temporary) addons_test.go:180: (dbg) Run: kubectl --context addons-20210507215008-391940 replace --force -f testdata/nginx-pod-svc.yaml addons_test.go:185: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ... 
helpers_test.go:335: "nginx" [bdfee180-2959-4ab0-b32a-b139f54bb97c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx]) === CONT TestAddons/parallel/GCPAuth addons_test.go:638: (dbg) TestAddons/parallel/GCPAuth: integration-test=busybox healthy within 8.007132184s addons_test.go:644: (dbg) Run: kubectl --context addons-20210507215008-391940 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS" addons_test.go:681: (dbg) Run: kubectl --context addons-20210507215008-391940 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT" addons_test.go:697: (dbg) Run: kubectl --context addons-20210507215008-391940 apply -f testdata/private-image.yaml === CONT TestAddons/parallel/HelmTiller addons_test.go:440: (dbg) Done: kubectl --context addons-20210507215008-391940 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: (3.664716093s) addons_test.go:445: kubectl --context addons-20210507215008-391940 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file If you don't see a command prompt, try pressing enter. Error attaching, falling back to logs: addons_test.go:457: (dbg) Run: out/minikube-linux-amd64 -p addons-20210507215008-391940 addons disable helm-tiller --alsologtostderr -v=1 === CONT TestAddons/parallel/GCPAuth addons_test.go:704: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ... === CONT TestAddons/parallel/CSI addons_test.go:540: csi-hostpath-driver pods stabilized in 73.649264ms addons_test.go:543: (dbg) Run: kubectl --context addons-20210507215008-391940 create -f testdata/csi-hostpath-driver/pvc.yaml addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ... helpers_test.go:385: (dbg) Run: kubectl --context addons-20210507215008-391940 get pvc hpvc -o jsonpath={.status.phase} -n default addons_test.go:553: (dbg) Run: kubectl --context addons-20210507215008-391940 create -f testdata/csi-hostpath-driver/pv-pod.yaml === CONT TestAddons/parallel/GCPAuth helpers_test.go:335: "private-image-7ff9c8c74f-hz2rf" [ced80f0b-49a9-4fd1-8324-e2c2b29c7244] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image]) === CONT TestAddons/parallel/CSI addons_test.go:558: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ... 
helpers_test.go:335: "task-pv-pod" [761cad11-daa2-499c-8e57-791b77aac221] Pending === CONT TestAddons/parallel/Registry addons_test.go:302: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.02293997s addons_test.go:307: (dbg) Run: kubectl --context addons-20210507215008-391940 delete po -l run=registry-test --now addons_test.go:312: (dbg) Run: kubectl --context addons-20210507215008-391940 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local" === CONT TestAddons/parallel/Ingress helpers_test.go:335: "nginx" [bdfee180-2959-4ab0-b32a-b139f54bb97c] Running === CONT TestAddons/parallel/CSI helpers_test.go:335: "task-pv-pod" [761cad11-daa2-499c-8e57-791b77aac221] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container]) === CONT TestAddons/parallel/GCPAuth helpers_test.go:335: "private-image-7ff9c8c74f-hz2rf" [ced80f0b-49a9-4fd1-8324-e2c2b29c7244] Running === CONT TestAddons/parallel/Ingress addons_test.go:185: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.008174059s addons_test.go:204: (dbg) Run: out/minikube-linux-amd64 -p addons-20210507215008-391940 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'" === CONT TestAddons/parallel/Registry addons_test.go:312: (dbg) Done: kubectl --context addons-20210507215008-391940 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.608716707s) addons_test.go:326: (dbg) Run: out/minikube-linux-amd64 -p addons-20210507215008-391940 ip === CONT TestAddons/parallel/Ingress addons_test.go:230: (dbg) Run: kubectl --context addons-20210507215008-391940 replace --force -f testdata/nginx-ingv1.yaml 2021/05/07 21:53:34 [DEBUG] GET http://192.168.58.2:5000 === CONT TestAddons/parallel/Registry addons_test.go:355: (dbg) Run: out/minikube-linux-amd64 -p addons-20210507215008-391940 addons disable registry --alsologtostderr -v=1 === CONT TestAddons/parallel/Ingress addons_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p addons-20210507215008-391940 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'" addons_test.go:278: (dbg) Run: out/minikube-linux-amd64 -p addons-20210507215008-391940 addons disable ingress --alsologtostderr -v=1 === CONT TestAddons/parallel/GCPAuth addons_test.go:704: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image healthy within 13.007543103s addons_test.go:710: (dbg) Run: out/minikube-linux-amd64 -p addons-20210507215008-391940 addons disable gcp-auth --alsologtostderr -v=1 === CONT TestAddons/parallel/CSI helpers_test.go:335: "task-pv-pod" [761cad11-daa2-499c-8e57-791b77aac221] Running addons_test.go:558: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 18.007914257s addons_test.go:563: (dbg) Run: kubectl --context addons-20210507215008-391940 create -f testdata/csi-hostpath-driver/snapshot.yaml addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ... 
helpers_test.go:410: (dbg) Run: kubectl --context addons-20210507215008-391940 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run: kubectl --context addons-20210507215008-391940 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:573: (dbg) Run: kubectl --context addons-20210507215008-391940 delete pod task-pv-pod
addons_test.go:573: (dbg) Done: kubectl --context addons-20210507215008-391940 delete pod task-pv-pod: (6.186074486s)
addons_test.go:579: (dbg) Run: kubectl --context addons-20210507215008-391940 delete pvc hpvc
addons_test.go:585: (dbg) Run: kubectl --context addons-20210507215008-391940 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:590: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:385: (dbg) Run: kubectl --context addons-20210507215008-391940 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:595: (dbg) Run: kubectl --context addons-20210507215008-391940 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:600: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:335: "task-pv-pod-restore" [6ac2d1d2-91d3-4c59-a041-8fbf65c96c76] Pending
helpers_test.go:335: "task-pv-pod-restore" [6ac2d1d2-91d3-4c59-a041-8fbf65c96c76] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
=== CONT TestAddons/parallel/Ingress
addons_test.go:278: (dbg) Done: out/minikube-linux-amd64 -p addons-20210507215008-391940 addons disable ingress --alsologtostderr -v=1: (28.690892152s)
=== CONT TestAddons/parallel/GCPAuth
addons_test.go:710: (dbg) Done: out/minikube-linux-amd64 -p addons-20210507215008-391940 addons disable gcp-auth --alsologtostderr -v=1: (26.971779437s)
=== CONT TestAddons/parallel/CSI
helpers_test.go:335: "task-pv-pod-restore" [6ac2d1d2-91d3-4c59-a041-8fbf65c96c76] Running
addons_test.go:600: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 23.006012517s
addons_test.go:605: (dbg) Run: kubectl --context addons-20210507215008-391940 delete pod task-pv-pod-restore
addons_test.go:605: (dbg) Done: kubectl --context addons-20210507215008-391940 delete pod task-pv-pod-restore: (6.505500128s)
addons_test.go:609: (dbg) Run: kubectl --context addons-20210507215008-391940 delete pvc hpvc-restore
addons_test.go:613: (dbg) Run: kubectl --context addons-20210507215008-391940 delete volumesnapshot new-snapshot-demo
addons_test.go:617: (dbg) Run: out/minikube-linux-amd64 -p addons-20210507215008-391940 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:617: (dbg) Done: out/minikube-linux-amd64 -p addons-20210507215008-391940 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.725764207s)
addons_test.go:621: (dbg) Run: out/minikube-linux-amd64 -p addons-20210507215008-391940 addons disable volumesnapshots --alsologtostderr -v=1
=== CONT TestAddons
addons_test.go:129: (dbg) Run: out/minikube-linux-amd64 stop -p addons-20210507215008-391940
addons_test.go:129: (dbg) Done: out/minikube-linux-amd64 stop -p addons-20210507215008-391940: (20.716716896s)
addons_test.go:133: (dbg) Run: out/minikube-linux-amd64 addons enable dashboard -p addons-20210507215008-391940
addons_test.go:137: (dbg) Run: out/minikube-linux-amd64 addons disable dashboard -p addons-20210507215008-391940
helpers_test.go:171: Cleaning up "addons-20210507215008-391940" profile ...
helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p addons-20210507215008-391940
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p addons-20210507215008-391940: (2.963273543s)
--- PASS: TestAddons (286.96s)
    --- PASS: TestAddons/parallel (0.00s)
        --- SKIP: TestAddons/parallel/Olm (0.00s)
        --- PASS: TestAddons/parallel/MetricsServer (5.70s)
        --- PASS: TestAddons/parallel/HelmTiller (9.39s)
        --- PASS: TestAddons/parallel/Registry (17.29s)
        --- PASS: TestAddons/parallel/Ingress (40.37s)
        --- PASS: TestAddons/parallel/GCPAuth (48.90s)
        --- PASS: TestAddons/parallel/CSI (64.09s)
=== RUN TestCertOptions
=== PAUSE TestCertOptions
=== RUN TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)
=== RUN TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== RUN TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== RUN TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== RUN TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:116: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)
=== RUN TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:189: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)
=== RUN TestErrorSpam
error_spam_test.go:77: (dbg) Run: out/minikube-linux-amd64 start -p nospam-20210507215455-391940 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210507215455-391940 --driver=docker --container-runtime=containerd
error_spam_test.go:77: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20210507215455-391940 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210507215455-391940 --driver=docker --container-runtime=containerd: (42.598886907s)
error_spam_test.go:87: acceptable stderr: "! Your cgroup does not allow setting memory."
=== RUN TestErrorSpam/start
error_spam_test.go:208: Cleaning up 1 logfile(s) ...
error_spam_test.go:166: (dbg) Run: out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
[... the line above repeats many more times; identical invocations trimmed ...]
=== RUN TestErrorSpam/status
error_spam_test.go:208: Cleaning up 0 logfile(s) ...
error_spam_test.go:166: (dbg) Run: out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
[... the line above repeats many more times; identical invocations trimmed ...]
=== RUN TestErrorSpam/pause
error_spam_test.go:208: Cleaning up 0 logfile(s) ...
error_spam_test.go:166: (dbg) Run: out/minikube-linux-amd64 pause -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run: out/minikube-linux-amd64 pause -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run: out/minikube-linux-amd64 pause -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run: out/minikube-linux-amd64 pause -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run: out/minikube-linux-amd64 pause -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
=== RUN TestErrorSpam/unpause
error_spam_test.go:208: Cleaning up 0 logfile(s) ...
error_spam_test.go:166: (dbg) Run: out/minikube-linux-amd64 unpause -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
=== RUN TestErrorSpam/stop
error_spam_test.go:208: Cleaning up 0 logfile(s) ...
error_spam_test.go:166: (dbg) Run: out/minikube-linux-amd64 stop -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Done: out/minikube-linux-amd64 stop -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940: (20.70894383s)
=== CONT TestErrorSpam
helpers_test.go:171: Cleaning up "nospam-20210507215455-391940" profile ...
helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p nospam-20210507215455-391940
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p nospam-20210507215455-391940: (2.099509162s)
--- PASS: TestErrorSpam (152.91s)
    --- PASS: TestErrorSpam/start (54.18s)
    --- PASS: TestErrorSpam/status (30.62s)
    --- PASS: TestErrorSpam/pause (2.18s)
    --- PASS: TestErrorSpam/unpause (0.53s)
    --- PASS: TestErrorSpam/stop (20.71s)
=== RUN TestFunctional
=== RUN TestFunctional/serial
=== RUN TestFunctional/serial/CopySyncFile
functional_test.go:1546: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/files/etc/test/nested/copy/391940/hosts
=== RUN TestFunctional/serial/StartWithProxy
functional_test.go:541: (dbg) Run: out/minikube-linux-amd64 start -p functional-20210507215728-391940 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd
E0507 21:58:17.776896 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 21:58:17.782547 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 21:58:17.792773 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 21:58:17.812966 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 21:58:17.853184 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 21:58:17.933426 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 21:58:18.093787 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 21:58:18.414349 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 21:58:19.055248 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 21:58:20.335526 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 21:58:22.896134 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 21:58:28.016615 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 21:58:38.257204 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 21:58:58.738134 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 21:59:39.699085 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
functional_test.go:541: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210507215728-391940 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd: (2m14.513229618s)
=== RUN TestFunctional/serial/AuditLog
=== RUN TestFunctional/serial/SoftStart
functional_test.go:585: (dbg) Run: out/minikube-linux-amd64 start -p functional-20210507215728-391940 --alsologtostderr -v=8
functional_test.go:585: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210507215728-391940 --alsologtostderr -v=8: (15.299207302s)
functional_test.go:589: soft start took 15.299847795s for "functional-20210507215728-391940" cluster.
=== RUN TestFunctional/serial/KubeContext
functional_test.go:605: (dbg) Run: kubectl config current-context
=== RUN TestFunctional/serial/KubectlGetPods
functional_test.go:618: (dbg) Run: kubectl --context functional-20210507215728-391940 get po -A
=== RUN TestFunctional/serial/CacheCmd
=== RUN TestFunctional/serial/CacheCmd/cache
=== RUN TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:910: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 cache add k8s.gcr.io/pause:3.1
functional_test.go:910: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 cache add k8s.gcr.io/pause:3.3
functional_test.go:910: (dbg) Done: out/minikube-linux-amd64 -p functional-20210507215728-391940 cache add k8s.gcr.io/pause:3.3: (1.278513679s)
functional_test.go:910: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 cache add k8s.gcr.io/pause:latest
functional_test.go:910: (dbg) Done: out/minikube-linux-amd64 -p functional-20210507215728-391940 cache add k8s.gcr.io/pause:latest: (1.15376468s)
=== RUN TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:940: (dbg) Run: docker build -t minikube-local-cache-test:functional-20210507215728-391940 /tmp/functional-20210507215728-391940244790430
functional_test.go:945: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 cache add minikube-local-cache-test:functional-20210507215728-391940
=== RUN TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:952: (dbg) Run: out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
=== RUN TestFunctional/serial/CacheCmd/cache/list
functional_test.go:959: (dbg) Run: out/minikube-linux-amd64 cache list
=== RUN TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:972: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh sudo crictl images
=== RUN TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:994: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1000: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (280.796465ms)
-- stdout --
FATA[0000] no such image "k8s.gcr.io/pause:latest" present
-- /stdout --
** stderr **
ssh: Process exited with status 1
** /stderr **
functional_test.go:1005: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 cache reload
functional_test.go:1005: (dbg) Done: out/minikube-linux-amd64 -p functional-20210507215728-391940 cache reload: (1.150629636s)
functional_test.go:1010: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
=== RUN TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1019: (dbg) Run: out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1019: (dbg) Run: out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
=== RUN TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1019: (dbg) Run: out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1019: (dbg) Run: out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
=== RUN TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:636: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 kubectl -- --context functional-20210507215728-391940 get pods
=== RUN TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:655: (dbg) Run: out/kubectl --context functional-20210507215728-391940 get pods
=== RUN TestFunctional/serial/ExtraConfig
functional_test.go:669: (dbg) Run: out/minikube-linux-amd64 start -p functional-20210507215728-391940 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0507 22:01:01.620129 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
functional_test.go:669: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210507215728-391940 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m54.074744132s)
functional_test.go:673: restart took 1m54.074873214s for "functional-20210507215728-391940" cluster.
=== RUN TestFunctional/serial/ComponentHealth
functional_test.go:720: (dbg) Run: kubectl --context functional-20210507215728-391940 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:734: etcd phase: Running
functional_test.go:744: etcd status: Ready
functional_test.go:734: kube-apiserver phase: Running
functional_test.go:744: kube-apiserver status: Ready
functional_test.go:734: kube-controller-manager phase: Running
functional_test.go:744: kube-controller-manager status: Ready
functional_test.go:734: kube-scheduler phase: Running
functional_test.go:744: kube-scheduler status: Ready
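ComponentHealth works by listing the control-plane pods as JSON and checking each pod's phase and Ready condition, which is easy to reproduce with kubectl alone. A rough equivalent, assuming kubectl is on PATH and using a placeholder context name (only the JSON fields actually needed are decoded):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// podList models just the fields the health check reads from the PodList JSON.
type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional", // placeholder context
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		// Control-plane pods carry a "component" label (etcd, kube-apiserver, ...).
		fmt.Printf("%s phase: %s, Ready: %s\n", p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}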
=== RUN TestFunctional/parallel
=== RUN TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== RUN TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== RUN TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== RUN TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== RUN TestFunctional/parallel/LogsCmd
=== PAUSE TestFunctional/parallel/LogsCmd
=== RUN TestFunctional/parallel/LogsFileCmd
=== PAUSE TestFunctional/parallel/LogsFileCmd
=== RUN TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd
=== RUN TestFunctional/parallel/ProfileCmd
=== PAUSE TestFunctional/parallel/ProfileCmd
=== RUN TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== RUN TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== RUN TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== RUN TestFunctional/parallel/TunnelCmd
=== PAUSE TestFunctional/parallel/TunnelCmd
=== RUN TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== RUN TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== RUN TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== RUN TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== RUN TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== RUN TestFunctional/parallel/UpdateContextCmd
=== PAUSE TestFunctional/parallel/UpdateContextCmd
=== RUN TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== RUN TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== RUN TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== RUN TestFunctional/parallel/LoadImage
=== PAUSE TestFunctional/parallel/LoadImage
=== RUN TestFunctional/parallel/RemoveImage
=== PAUSE TestFunctional/parallel/RemoveImage
=== RUN TestFunctional/parallel/BuildImage
=== PAUSE TestFunctional/parallel/BuildImage
=== RUN TestFunctional/parallel/ListImages
=== PAUSE TestFunctional/parallel/ListImages
=== CONT TestFunctional/parallel/ConfigCmd
=== CONT TestFunctional/parallel/CertSync
=== CONT TestFunctional/parallel/LoadImage
=== CONT TestFunctional/parallel/ServiceCmd
=== CONT TestFunctional/parallel/ConfigCmd
functional_test.go:1045: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 config unset cpus
=== CONT TestFunctional/parallel/CertSync
functional_test.go:1635: Checking for existence of /etc/ssl/certs/391940.pem within VM
=== CONT TestFunctional/parallel/LoadImage
functional_test.go:220: (dbg) Run: docker pull busybox:latest
=== CONT TestFunctional/parallel/CertSync
functional_test.go:1636: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh "sudo cat /etc/ssl/certs/391940.pem"
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1273: (dbg) Run: kubectl --context functional-20210507215728-391940 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1279: (dbg) Run: kubectl --context functional-20210507215728-391940 expose deployment hello-node --type=NodePort --port=8080
=== CONT TestFunctional/parallel/ConfigCmd
functional_test.go:1045: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 config get cpus
functional_test.go:1045: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210507215728-391940 config get cpus: exit status 14 (83.519459ms)
** stderr **
Error: specified key could not be found in config
** /stderr **
functional_test.go:1045: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 config set cpus 2
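Note the exit-status convention here: `config get` on an unset key fails with exit status 14 rather than printing an empty value. A caller that wants to distinguish "not set" from a real failure therefore has to inspect the exit code, e.g. (profile name is a placeholder):

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional", // placeholder profile
		"config", "get", "cpus").Output()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("cpus = %s", out)
	case errors.As(err, &ee) && ee.ExitCode() == 14:
		// 14 is the status this log shows for "specified key could not be found in config".
		fmt.Println("cpus is not set")
	default:
		log.Fatalf("config get failed: %v", err)
	}
}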
helpers_test.go:335: "hello-node-6cbfcd7cbc-5ltsz" [c67a1e00-05b0-4f49-acb2-68c0888df8cf] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver]) === CONT TestFunctional/parallel/ConfigCmd functional_test.go:1045: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 config get cpus functional_test.go:1045: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 config unset cpus === CONT TestFunctional/parallel/CertSync functional_test.go:1635: Checking for existence of /usr/share/ca-certificates/391940.pem within VM functional_test.go:1636: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh "sudo cat /usr/share/ca-certificates/391940.pem" === CONT TestFunctional/parallel/ConfigCmd functional_test.go:1045: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 config get cpus functional_test.go:1045: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210507215728-391940 config get cpus: exit status 14 (66.100682ms) ** stderr ** Error: specified key could not be found in config ** /stderr ** === CONT TestFunctional/parallel/NodeLabels functional_test.go:197: (dbg) Run: kubectl --context functional-20210507215728-391940 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'" === CONT TestFunctional/parallel/LoadImage functional_test.go:227: (dbg) Run: docker tag busybox:latest docker.io/library/busybox:load-functional-20210507215728-391940 functional_test.go:233: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 image load docker.io/library/busybox:load-functional-20210507215728-391940 === CONT TestFunctional/parallel/PodmanEnv functional_test.go:471: only validate podman env with docker container runtime, currently testing containerd === CONT TestFunctional/parallel/DockerEnv functional_test.go:411: only validate docker env with docker container runtime, currently testing containerd === CONT TestFunctional/parallel/UpdateContextCmd === RUN TestFunctional/parallel/UpdateContextCmd/no_changes === PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes === RUN TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster === PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster === RUN TestFunctional/parallel/UpdateContextCmd/no_clusters === PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters === CONT TestFunctional/parallel/LogsCmd functional_test.go:1081: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 logs === CONT TestFunctional/parallel/CertSync functional_test.go:1635: Checking for existence of /etc/ssl/certs/51391683.0 within VM functional_test.go:1636: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh "sudo cat /etc/ssl/certs/51391683.0" === CONT TestFunctional/parallel/ProfileCmd === RUN TestFunctional/parallel/ProfileCmd/profile_not_create functional_test.go:1118: (dbg) Run: out/minikube-linux-amd64 profile lis functional_test.go:1122: (dbg) Run: out/minikube-linux-amd64 profile list --output json === RUN TestFunctional/parallel/ProfileCmd/profile_list functional_test.go:1156: (dbg) Run: out/minikube-linux-amd64 profile list === CONT TestFunctional/parallel/LoadImage functional_test.go:233: (dbg) Done: out/minikube-linux-amd64 -p functional-20210507215728-391940 image load docker.io/library/busybox:load-functional-20210507215728-391940: 
(1.325960596s) functional_test.go:303: (dbg) Run: out/minikube-linux-amd64 ssh -p functional-20210507215728-391940 -- sudo crictl inspecti docker.io/library/busybox:load-functional-20210507215728-391940 === CONT TestFunctional/parallel/ProfileCmd/profile_list functional_test.go:1161: Took "501.912091ms" to run "out/minikube-linux-amd64 profile list" functional_test.go:1170: (dbg) Run: out/minikube-linux-amd64 profile list -l functional_test.go:1175: Took "71.840413ms" to run "out/minikube-linux-amd64 profile list -l" === RUN TestFunctional/parallel/ProfileCmd/profile_json_output functional_test.go:1206: (dbg) Run: out/minikube-linux-amd64 profile list -o json === CONT TestFunctional/parallel/MountCmd functional_test_mount_test.go:77: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210507215728-391940 /tmp/mounttest300063589:/mount-9p --alsologtostderr -v=1] functional_test_mount_test.go:111: wrote "test-1620424921468213970" to /tmp/mounttest300063589/created-by-test functional_test_mount_test.go:111: wrote "test-1620424921468213970" to /tmp/mounttest300063589/created-by-test-removed-by-pod functional_test_mount_test.go:111: wrote "test-1620424921468213970" to /tmp/mounttest300063589/test-1620424921468213970 functional_test_mount_test.go:119: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh "findmnt -T /mount-9p | grep 9p" === CONT TestFunctional/parallel/ProfileCmd/profile_json_output functional_test.go:1211: Took "368.393444ms" to run "out/minikube-linux-amd64 profile list -o json" functional_test.go:1219: (dbg) Run: out/minikube-linux-amd64 profile list -o json --light functional_test.go:1224: Took "79.031074ms" to run "out/minikube-linux-amd64 profile list -o json --light" === CONT TestFunctional/parallel/LogsFileCmd functional_test.go:1097: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 logs --file /tmp/functional-20210507215728-391940612532608/logs.txt === CONT TestFunctional/parallel/LogsCmd functional_test.go:1081: (dbg) Done: out/minikube-linux-amd64 -p functional-20210507215728-391940 logs: (2.022718064s) === CONT TestFunctional/parallel/BuildImage functional_test.go:369: (dbg) Run: out/minikube-linux-amd64 ssh -p functional-20210507215728-391940 -- nohup sudo -b buildkitd --oci-worker=false --containerd-worker=true --containerd-worker-namespace=k8s.io === CONT TestFunctional/parallel/MountCmd functional_test_mount_test.go:119: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (372.633689ms) ** stderr ** ssh: Process exited with status 1 ** /stderr ** === CONT TestFunctional/parallel/BuildImage functional_test.go:341: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 image build -t localhost/my-image:functional-20210507215728-391940 testdata/build === CONT TestFunctional/parallel/MountCmd functional_test_mount_test.go:119: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh "findmnt -T /mount-9p | grep 9p" functional_test_mount_test.go:133: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh -- ls -la /mount-9p functional_test_mount_test.go:137: guest mount directory contents total 2 -rw-r--r-- 1 docker docker 24 May 7 22:02 created-by-test -rw-r--r-- 1 docker docker 24 May 7 22:02 created-by-test-removed-by-pod -rw-r--r-- 1 docker docker 24 May 7 22:02 test-1620424921468213970 functional_test_mount_test.go:141: (dbg) Run: out/minikube-linux-amd64 -p 
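The mount check above is eventually consistent: the first findmnt probe fails with exit status 1 because the 9p mount daemon hasn't finished mounting, and the test simply retries until the probe succeeds. A small retry loop in the same spirit (profile name is a placeholder; the probe string is the one from the log):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	const profile = "functional" // placeholder profile name
	deadline := time.Now().Add(30 * time.Second)
	for {
		// Same probe the test runs over ssh: is /mount-9p a 9p mount yet?
		err := exec.Command("minikube", "-p", profile, "ssh",
			"findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("9p mount is up")
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("mount never appeared: %v", err)
		}
		time.Sleep(time.Second)
	}
}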
functional_test_mount_test.go:141: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh cat /mount-9p/test-1620424921468213970
functional_test_mount_test.go:152: (dbg) Run: kubectl --context functional-20210507215728-391940 replace --force -f testdata/busybox-mount-test.yaml
=== CONT TestFunctional/parallel/LogsFileCmd
functional_test.go:1097: (dbg) Done: out/minikube-linux-amd64 -p functional-20210507215728-391940 logs --file /tmp/functional-20210507215728-391940612532608/logs.txt: (2.019413839s)
=== CONT TestFunctional/parallel/ListImages
functional_test.go:385: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 image ls
=== CONT TestFunctional/parallel/MountCmd
functional_test_mount_test.go:157: (dbg) TestFunctional/parallel/MountCmd: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:335: "busybox-mount" [12d9fac7-0cf8-42ff-b7e0-88ab86029f5a] Pending
=== CONT TestFunctional/parallel/ListImages
functional_test.go:390: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20210507215728-391940 image ls:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.20.2
k8s.gcr.io/kube-proxy:v1.20.2
k8s.gcr.io/kube-controller-manager:v1.20.2
k8s.gcr.io/kube-apiserver:v1.20.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns:1.7.0
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-20210507215728-391940
docker.io/library/busybox:load-functional-20210507215728-391940
docker.io/kubernetesui/metrics-scraper:v1.0.4
docker.io/kubernetesui/dashboard:v2.1.0
docker.io/kindest/kindnetd:v20210326-1e038dc5
docker.io/kindest/kindnetd:v20210220-5b7e6d01
=== CONT TestFunctional/parallel/DryRun
functional_test.go:873: (dbg) Run: out/minikube-linux-amd64 start -p functional-20210507215728-391940 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd
=== CONT TestFunctional/parallel/BuildImage
functional_test.go:341: (dbg) Done: out/minikube-linux-amd64 -p functional-20210507215728-391940 image build -t localhost/my-image:functional-20210507215728-391940 testdata/build: (2.084053516s)
functional_test.go:349: (dbg) Stderr: out/minikube-linux-amd64 -p functional-20210507215728-391940 image build -t localhost/my-image:functional-20210507215728-391940 testdata/build:
#1 [internal] load build definition from Dockerfile
#1 sha256:9eb594552e5d025d2bce286b65fad5f2a09934930926162e934caa35114bf81a
#1 transferring dockerfile: 77B done
#1 DONE 0.1s
#2 [internal] load .dockerignore
#2 sha256:2ec921ce38b301bbd169ccce5271cb6779e3b68b705446698dd15ac220641ad9
#2 transferring context: 2B done
#2 DONE 0.0s
#3 [internal] load metadata for docker.io/library/busybox:latest
#3 sha256:da853382a7535e068feae4d80bdd0ad2567df3d5cd484fd68f919294d091b053
#3 DONE 0.6s
#6 [internal] load build context
#6 sha256:be7193bea878d089af2dcd85aea770fc071ecf10080ebfae861e70772d3f96a9
#6 transferring context: 62B done
#6 DONE 0.0s
#4 [1/3] FROM docker.io/library/busybox@sha256:be4684e4004560b2cd1f12148b7120b0ea69c385bcc9b12a637537a2c60f97fb
#4 sha256:bf15a20fbfe1748e363d0c6c77a4959ff2d29933fd76edc4d49b2f00250e7594
#4 resolve docker.io/library/busybox@sha256:be4684e4004560b2cd1f12148b7120b0ea69c385bcc9b12a637537a2c60f97fb 0.0s done
#4 DONE 0.1s
#5 [2/3] RUN true
#5 sha256:636ef616c628288aead6af6c1eeab0d5b4c4ede932d18b469f3e24de50721e15
#5 DONE 0.4s
#7 [3/3] ADD content.txt /
#7 sha256:e3b3e60f4646a43120a1a5d7216d85e18175ec70f8ca036d289b76eebe70ede2
#7 DONE 0.1s
#8 exporting to image
#8 sha256:e8c613e07b0b7ff33893b694f7759a10d42e180f2b4dc349fb57dc6b71dcab00
#8 exporting layers
#8 exporting layers 0.1s done
#8 exporting manifest sha256:3ff7e73afa650cb47d12a386d892cd945a93ca0e440a7fa178a1308f0d75c3c5 0.0s done
#8 exporting config sha256:70047817b214f9de3a783d7b5cfc19303ad22cd5213f3314e358ec0689be9012 done
#8 naming to localhost/my-image:functional-20210507215728-391940 done
#8 DONE 0.1s
functional_test.go:303: (dbg) Run: out/minikube-linux-amd64 ssh -p functional-20210507215728-391940 -- sudo crictl inspecti localhost/my-image:functional-20210507215728-391940
=== CONT TestFunctional/parallel/DryRun
functional_test.go:873: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210507215728-391940 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd: exit status 23 (270.163765ms)
-- stdout --
* [functional-20210507215728-391940] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube
- MINIKUBE_LOCATION=master
* Using the docker driver based on existing profile
- More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
-- /stdout --
** stderr **
I0507 22:02:04.111923 451424 out.go:291] Setting OutFile to fd 1 ...
I0507 22:02:04.112094 451424 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 22:02:04.112105 451424 out.go:304] Setting ErrFile to fd 2...
I0507 22:02:04.112110 451424 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 22:02:04.112216 451424 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/bin
I0507 22:02:04.112461 451424 out.go:298] Setting JSON to false
I0507 22:02:04.148036 451424 start.go:108] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":9692,"bootTime":1620415232,"procs":236,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
I0507 22:02:04.148166 451424 start.go:118] virtualization: kvm guest
I0507 22:02:04.151155 451424 out.go:170] * [functional-20210507215728-391940] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
I0507 22:02:04.152992 451424 out.go:170] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig
I0507 22:02:04.154611 451424 out.go:170] - MINIKUBE_BIN=out/minikube-linux-amd64
I0507 22:02:04.155979 451424 out.go:170] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube
I0507 22:02:04.157568 451424 out.go:170] - MINIKUBE_LOCATION=master
I0507 22:02:04.158329 451424 driver.go:322] Setting default libvirt URI to qemu:///system
I0507 22:02:04.207268 451424 docker.go:119] docker version: linux-19.03.15
I0507 22:02:04.207344 451424 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0507 22:02:04.306685 451424 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:55 SystemTime:2021-05-07 22:02:04.247849903 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}}
I0507 22:02:04.306803 451424 docker.go:225] overlay module found
I0507 22:02:04.309816 451424 out.go:170] * Using the docker driver based on existing profile
I0507 22:02:04.309861 451424 start.go:276] selected driver: docker
I0507 22:02:04.309869 451424 start.go:718] validating driver "docker" against &{Name:functional-20210507215728-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:functional-20210507215728-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8441 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0507 22:02:04.310039 451424 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
W0507 22:02:04.310089 451424 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0507 22:02:04.310104 451424 out.go:424] no arguments passed for "! Your cgroup does not allow setting memory.\n" - returning raw string
W0507 22:02:04.310126 451424 out.go:235] ! Your cgroup does not allow setting memory.
! Your cgroup does not allow setting memory.
W0507 22:02:04.310139 451424 out.go:424] no arguments passed for " - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities\n" - returning raw string
I0507 22:02:04.311661 451424 out.go:170] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
I0507 22:02:04.313954 451424 out.go:170]
W0507 22:02:04.314122 451424 out.go:235] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
I0507 22:02:04.315560 451424 out.go:170]
** /stderr **
functional_test.go:888: (dbg) Run: out/minikube-linux-amd64 start -p functional-20210507215728-391940 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
=== CONT TestFunctional/parallel/ServiceCmd
helpers_test.go:335: "hello-node-6cbfcd7cbc-5ltsz" [c67a1e00-05b0-4f49-acb2-68c0888df8cf] Running
=== CONT TestFunctional/parallel/StatusCmd
functional_test.go:763: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 status
=== CONT TestFunctional/parallel/SSHCmd
functional_test.go:1414: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh "echo hello"
=== CONT TestFunctional/parallel/MountCmd
helpers_test.go:335: "busybox-mount" [12d9fac7-0cf8-42ff-b7e0-88ab86029f5a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT TestFunctional/parallel/StatusCmd
functional_test.go:769: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
=== CONT TestFunctional/parallel/SSHCmd
functional_test.go:1431: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh "cat /etc/hostname"
=== CONT TestFunctional/parallel/StatusCmd
functional_test.go:780: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 status -o json
=== CONT TestFunctional/parallel/FileSync
functional_test.go:1594: Checking for existence of /etc/test/nested/copy/391940/hosts within VM
functional_test.go:1595: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh "sudo cat /etc/test/nested/copy/391940/hosts"
functional_test.go:1600: file sync test content: Test file for checking file sync process
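The `status -f` invocation above is plain Go text/template syntax applied to minikube's status structure (the `kublet` key is just a label the format string chose; the referenced field is `.Kubelet`). The same formatting mechanism reduced to a self-contained sketch, with an assumed stand-in struct rather than minikube's real one:

package main

import (
	"log"
	"os"
	"text/template"
)

// Status stands in for the fields the format string references; the real
// struct lives inside minikube, this one is illustrative only.
type Status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl, err := template.New("status").Parse(format)
	if err != nil {
		log.Fatal(err)
	}
	s := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	if err := tmpl.Execute(os.Stdout, s); err != nil {
		log.Fatal(err)
	}
}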
helpers_test.go:335: "mysql-9bbbc5bbb-gdm8w" [c84d6827-e88b-4ff3-aa25-9a597a30c022] Pending === CONT TestFunctional/parallel/CpCmd functional_test.go:1472: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh "sudo cat /home/docker/cp-test.txt" === CONT TestFunctional/parallel/DashboardCmd functional_test.go:811: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url -p functional-20210507215728-391940 --alsologtostderr -v=1] === CONT TestFunctional/parallel/MySQL helpers_test.go:335: "mysql-9bbbc5bbb-gdm8w" [c84d6827-e88b-4ff3-aa25-9a597a30c022] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql]) === CONT TestFunctional/parallel/MountCmd helpers_test.go:335: "busybox-mount" [12d9fac7-0cf8-42ff-b7e0-88ab86029f5a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted functional_test_mount_test.go:157: (dbg) TestFunctional/parallel/MountCmd: integration-test=busybox-mount healthy within 3.066271303s functional_test_mount_test.go:173: (dbg) Run: kubectl --context functional-20210507215728-391940 logs busybox-mount functional_test_mount_test.go:185: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh stat /mount-9p/created-by-test functional_test_mount_test.go:185: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh stat /mount-9p/created-by-pod functional_test_mount_test.go:94: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh "sudo umount -f /mount-9p" functional_test_mount_test.go:98: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210507215728-391940 /tmp/mounttest300063589:/mount-9p --alsologtostderr -v=1] ... === CONT TestFunctional/parallel/PersistentVolumeClaim functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ... helpers_test.go:335: "storage-provisioner" [e122fa6e-319e-4a3c-a844-8a52874079c3] Running === CONT TestFunctional/parallel/ServiceCmd functional_test.go:1284: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 10.021748113s functional_test.go:1288: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 service list functional_test.go:1288: (dbg) Done: out/minikube-linux-amd64 -p functional-20210507215728-391940 service list: (1.003927824s) functional_test.go:1301: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 service --namespace=default --https --url hello-node functional_test.go:1310: found endpoint: https://192.168.58.2:30383 functional_test.go:1321: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 service hello-node --url --format={{.IP}} 2021/05/07 22:02:11 [DEBUG] GET http://127.0.0.1:33357/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ === CONT TestFunctional/parallel/DashboardCmd functional_test.go:816: (dbg) stopping [out/minikube-linux-amd64 dashboard --url -p functional-20210507215728-391940 --alsologtostderr -v=1] ... 
=== CONT TestFunctional/parallel/DashboardCmd
functional_test.go:816: (dbg) stopping [out/minikube-linux-amd64 dashboard --url -p functional-20210507215728-391940 --alsologtostderr -v=1] ...
helpers_test.go:499: unable to kill pid 452623: os: process already finished
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1330: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 service hello-node --url
=== CONT TestFunctional/parallel/TunnelCmd
=== RUN TestFunctional/parallel/TunnelCmd/serial
=== RUN TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:126: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20210507215728-391940 tunnel --alsologtostderr]
=== RUN TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:146: (dbg) Run: kubectl --context functional-20210507215728-391940 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:150: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:335: "nginx-svc" [87eb5fc1-16a5-4e5c-b2c5-f25c8c29da0a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT TestFunctional/parallel/ServiceCmd
functional_test.go:1336: found endpoint for hello-node: http://192.168.58.2:30383
functional_test.go:1347: Attempting to fetch http://192.168.58.2:30383 ...
functional_test.go:1366: http://192.168.58.2:30383: success! body:
Hostname: hello-node-6cbfcd7cbc-5ltsz
Pod Information:
-no pod information available-
Server values:
server_version=nginx: 1.13.3 - lua: 10008
Request Information:
client_address=10.244.0.1
method=GET
real path=/
query=
request_version=1.1
request_uri=http://192.168.58.2:8080/
Request Headers:
accept-encoding=gzip
host=192.168.58.2:30383
user-agent=Go-http-client/1.1
Request Body:
-no body in request-
=== CONT TestFunctional/parallel/AddonsCmd
functional_test.go:1381: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 addons list
functional_test.go:1392: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 addons list -o json
=== CONT TestFunctional/parallel/RemoveImage
functional_test.go:261: (dbg) Run: docker pull busybox:latest
functional_test.go:268: (dbg) Run: docker tag busybox:latest docker.io/library/busybox:remove-functional-20210507215728-391940
functional_test.go:274: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 image load docker.io/library/busybox:remove-functional-20210507215728-391940
=== CONT TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009267528s
functional_test_pvc_test.go:49: (dbg) Run: kubectl --context functional-20210507215728-391940 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run: kubectl --context functional-20210507215728-391940 apply -f testdata/storage-provisioner/pvc.yaml
=== CONT TestFunctional/parallel/RemoveImage
functional_test.go:274: (dbg) Done: out/minikube-linux-amd64 -p functional-20210507215728-391940 image load docker.io/library/busybox:remove-functional-20210507215728-391940: (1.107691861s)
functional_test.go:280: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 image rm docker.io/library/busybox:remove-functional-20210507215728-391940
=== CONT TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:76: (dbg) Run: kubectl --context functional-20210507215728-391940 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run: kubectl --context functional-20210507215728-391940 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:335: "sp-pod" [f60e60b9-4a44-491a-a388-c4c14b2f05bc] Pending
=== CONT TestFunctional/parallel/RemoveImage
functional_test.go:317: (dbg) Run: out/minikube-linux-amd64 ssh -p functional-20210507215728-391940 -- sudo crictl images
=== CONT TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1729: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 update-context --alsologtostderr -v=2
=== CONT TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1729: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 update-context --alsologtostderr -v=2
=== CONT TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1729: (dbg) Run: out/minikube-linux-amd64 -p functional-20210507215728-391940 update-context --alsologtostderr -v=2
=== CONT TestFunctional/parallel/MySQL
helpers_test.go:335: "mysql-9bbbc5bbb-gdm8w" [c84d6827-e88b-4ff3-aa25-9a597a30c022] Running
=== CONT TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:335: "sp-pod" [f60e60b9-4a44-491a-a388-c4c14b2f05bc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT TestFunctional/parallel/TunnelCmd/serial/WaitService
helpers_test.go:335: "nginx-svc" [87eb5fc1-16a5-4e5c-b2c5-f25c8c29da0a] Running
=== CONT TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:335: "sp-pod" [f60e60b9-4a44-491a-a388-c4c14b2f05bc] Running
=== CONT TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:150: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService: run=nginx-svc healthy within 9.007979017s
=== RUN TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:164: (dbg) Run: kubectl --context functional-20210507215728-391940 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
=== RUN TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:229: tunnel at http://10.102.97.53 is working!
=== RUN TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
=== RUN TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
=== RUN TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
=== RUN TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:364: (dbg) stopping [out/minikube-linux-amd64 -p functional-20210507215728-391940 tunnel --alsologtostderr] ...
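The tunnel check above is two steps: read the LoadBalancer ingress IP that `minikube tunnel` populated on the Service, then hit it directly. The same two steps outside the test harness (context name is a placeholder; run `minikube tunnel` in another terminal first):

package main

import (
	"fmt"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Same jsonpath query the test uses to read the ingress IP.
	out, err := exec.Command("kubectl", "--context", "functional", // placeholder context
		"get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		log.Fatal(err)
	}
	ip := strings.TrimSpace(string(out))
	if ip == "" {
		log.Fatal("no ingress IP yet; is the tunnel running?")
	}
	resp, err := http.Get("http://" + ip)
	if err != nil {
		log.Fatalf("tunnel at http://%s is not reachable: %v", ip, err)
	}
	resp.Body.Close()
	fmt.Printf("tunnel at http://%s is working!\n", ip)
}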
=== CONT TestFunctional/parallel/MySQL
functional_test.go:1503: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.0284752s
functional_test.go:1510: (dbg) Run: kubectl --context functional-20210507215728-391940 exec mysql-9bbbc5bbb-gdm8w -- mysql -ppassword -e "show databases;"
functional_test.go:1510: (dbg) Non-zero exit: kubectl --context functional-20210507215728-391940 exec mysql-9bbbc5bbb-gdm8w -- mysql -ppassword -e "show databases;": exit status 1 (201.909691ms)
** stderr **
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
command terminated with exit code 1
** /stderr **
functional_test.go:1510: (dbg) Run: kubectl --context functional-20210507215728-391940 exec mysql-9bbbc5bbb-gdm8w -- mysql -ppassword -e "show databases;"
functional_test.go:1510: (dbg) Non-zero exit: kubectl --context functional-20210507215728-391940 exec mysql-9bbbc5bbb-gdm8w -- mysql -ppassword -e "show databases;": exit status 1 (132.608958ms)
** stderr **
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
command terminated with exit code 1
** /stderr **
=== CONT TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.00636817s
functional_test_pvc_test.go:100: (dbg) Run: kubectl --context functional-20210507215728-391940 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run: kubectl --context functional-20210507215728-391940 delete -f testdata/storage-provisioner/pod.yaml
=== CONT TestFunctional/parallel/MySQL
functional_test.go:1510: (dbg) Run: kubectl --context functional-20210507215728-391940 exec mysql-9bbbc5bbb-gdm8w -- mysql -ppassword -e "show databases;"
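The two failed `show databases;` attempts are expected noise: the pod reports Ready before mysqld finishes initializing, so the test keeps retrying through the 1045 (access denied) and 2002 (socket not up) errors until the query succeeds. A retry loop in the same spirit, with placeholder context and the pod name copied from this log (look yours up with kubectl get pods):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	const pod = "mysql-9bbbc5bbb-gdm8w" // pod name from this log; yours will differ
	deadline := time.Now().Add(5 * time.Minute)
	for {
		out, err := exec.Command("kubectl", "--context", "functional", // placeholder context
			"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("mysql is up:\n%s", out)
			return
		}
		// ERROR 1045 and ERROR 2002 during startup both just mean "not ready
		// yet", so retry on any failure until the deadline.
		if time.Now().After(deadline) {
			log.Fatalf("mysql never came up: %v\n%s", err, out)
		}
		time.Sleep(5 * time.Second)
	}
}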
helpers_test.go:335: "sp-pod" [6d12ada7-f683-4325-8970-173d8560dd91] Pending helpers_test.go:335: "sp-pod" [6d12ada7-f683-4325-8970-173d8560dd91] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend]) helpers_test.go:335: "sp-pod" [6d12ada7-f683-4325-8970-173d8560dd91] Running functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.513792069s functional_test_pvc_test.go:114: (dbg) Run: kubectl --context functional-20210507215728-391940 exec sp-pod -- ls /tmp/mount === RUN TestFunctional/delete_busybox_image functional_test.go:164: (dbg) Run: docker rmi -f docker.io/library/busybox:load-functional-20210507215728-391940 functional_test.go:169: (dbg) Run: docker rmi -f docker.io/library/busybox:remove-functional-20210507215728-391940 === RUN TestFunctional/delete_my-image_image functional_test.go:176: (dbg) Run: docker rmi -f localhost/my-image:functional-20210507215728-391940 === RUN TestFunctional/delete_minikube_cached_images functional_test.go:184: (dbg) Run: docker rmi -f minikube-local-cache-test:functional-20210507215728-391940 === CONT TestFunctional helpers_test.go:171: Cleaning up "functional-20210507215728-391940" profile ... helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p functional-20210507215728-391940 helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p functional-20210507215728-391940: (3.061341477s) --- PASS: TestFunctional (319.23s) --- PASS: TestFunctional/serial (271.18s) --- PASS: TestFunctional/serial/CopySyncFile (0.00s) --- PASS: TestFunctional/serial/StartWithProxy (134.51s) --- PASS: TestFunctional/serial/AuditLog (0.00s) --- PASS: TestFunctional/serial/SoftStart (15.30s) --- PASS: TestFunctional/serial/KubeContext (0.04s) --- PASS: TestFunctional/serial/KubectlGetPods (0.20s) --- PASS: TestFunctional/serial/CacheCmd (6.75s) --- PASS: TestFunctional/serial/CacheCmd/cache (6.75s) --- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.13s) --- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.10s) --- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s) --- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s) --- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s) --- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.99s) --- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s) --- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s) --- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s) --- PASS: TestFunctional/serial/ExtraConfig (114.08s) --- PASS: TestFunctional/serial/ComponentHealth (0.07s) --- PASS: TestFunctional/parallel (0.00s) --- PASS: TestFunctional/parallel/ConfigCmd (0.45s) --- PASS: TestFunctional/parallel/NodeLabels (0.08s) --- SKIP: TestFunctional/parallel/PodmanEnv (0.00s) --- SKIP: TestFunctional/parallel/DockerEnv (0.00s) --- PASS: TestFunctional/parallel/CertSync (0.94s) --- PASS: TestFunctional/parallel/LoadImage (2.25s) --- PASS: TestFunctional/parallel/ProfileCmd (1.60s) --- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s) --- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.57s) --- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s) --- PASS: TestFunctional/parallel/LogsCmd (2.02s) --- PASS: TestFunctional/parallel/LogsFileCmd (2.02s) --- PASS: TestFunctional/parallel/ListImages (0.26s) --- PASS: 
TestFunctional/parallel/BuildImage (2.77s) --- PASS: TestFunctional/parallel/DryRun (0.61s) --- PASS: TestFunctional/parallel/SSHCmd (0.56s) --- PASS: TestFunctional/parallel/FileSync (0.29s) --- PASS: TestFunctional/parallel/StatusCmd (1.01s) --- PASS: TestFunctional/parallel/CpCmd (0.55s) --- PASS: TestFunctional/parallel/MountCmd (6.88s) --- PASS: TestFunctional/parallel/DashboardCmd (5.28s) --- PASS: TestFunctional/parallel/ServiceCmd (12.62s) --- PASS: TestFunctional/parallel/AddonsCmd (0.19s) --- PASS: TestFunctional/parallel/RemoveImage (2.20s) --- PASS: TestFunctional/parallel/UpdateContextCmd (0.00s) --- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s) --- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s) --- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s) --- PASS: TestFunctional/parallel/TunnelCmd (9.44s) --- PASS: TestFunctional/parallel/TunnelCmd/serial (9.44s) --- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s) --- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService (9.33s) --- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s) --- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s) --- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s) --- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s) --- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s) --- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s) --- PASS: TestFunctional/parallel/MySQL (18.94s) --- PASS: TestFunctional/parallel/PersistentVolumeClaim (35.71s) --- PASS: TestFunctional/delete_busybox_image (0.08s) --- PASS: TestFunctional/delete_my-image_image (0.04s) --- PASS: TestFunctional/delete_minikube_cached_images (0.04s) === RUN TestGvisorAddon gvisor_addon_test.go:34: skipping test because --gvisor=false --- SKIP: TestGvisorAddon (0.00s) === RUN TestJSONOutput === RUN TestJSONOutput/start json_output_test.go:61: (dbg) Run: out/minikube-linux-amd64 start -p json-output-20210507220247-391940 --output=json --user=testUser --memory=2200 --wait=true --driver=docker --container-runtime=containerd E0507 22:03:17.776435 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory E0507 22:03:45.460958 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory json_output_test.go:61: (dbg) Done: out/minikube-linux-amd64 start -p json-output-20210507220247-391940 --output=json --user=testUser --memory=2200 --wait=true --driver=docker --container-runtime=containerd: (2m21.246896115s) === RUN TestJSONOutput/start/Audit === RUN TestJSONOutput/start/parallel === RUN TestJSONOutput/start/parallel/DistinctCurrentSteps === PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps === RUN TestJSONOutput/start/parallel/IncreasingCurrentSteps === PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps === CONT TestJSONOutput/start/parallel/DistinctCurrentSteps === CONT TestJSONOutput/start/parallel/IncreasingCurrentSteps === RUN TestJSONOutput/pause json_output_test.go:61: (dbg) Run: out/minikube-linux-amd64 pause -p 
=== RUN TestJSONOutput/pause
json_output_test.go:61: (dbg) Run: out/minikube-linux-amd64 pause -p json-output-20210507220247-391940 --output=json --user=testUser
=== RUN TestJSONOutput/pause/Audit
=== RUN TestJSONOutput/pause/parallel
=== RUN TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== RUN TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== RUN TestJSONOutput/unpause
json_output_test.go:61: (dbg) Run: out/minikube-linux-amd64 unpause -p json-output-20210507220247-391940 --output=json --user=testUser
=== RUN TestJSONOutput/unpause/Audit
=== RUN TestJSONOutput/unpause/parallel
=== RUN TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== RUN TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== RUN TestJSONOutput/stop
json_output_test.go:61: (dbg) Run: out/minikube-linux-amd64 stop -p json-output-20210507220247-391940 --output=json --user=testUser
json_output_test.go:61: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-20210507220247-391940 --output=json --user=testUser: (20.765685876s)
=== RUN TestJSONOutput/stop/Audit
=== RUN TestJSONOutput/stop/parallel
=== RUN TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== RUN TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT TestJSONOutput/stop/parallel/IncreasingCurrentSteps
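Each of these --output=json runs emits one CloudEvents-style JSON object per line on stdout; concrete samples of the envelope appear under TestErrorJSONOutput below. A consumer only needs a line scanner and a decoder for the envelope fields it cares about (field names taken from those samples; the profile name is a placeholder):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// event models the envelope fields visible in the JSON samples below.
type event struct {
	Type string            `json:"type"` // e.g. io.k8s.sigs.minikube.step
	Data map[string]string `json:"data"` // message, currentstep, totalsteps, ...
}

func main() {
	cmd := exec.Command("minikube", "pause", "-p", "json-output", // placeholder profile
		"--output=json", "--user=testUser")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any stray non-JSON output
		}
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
	if err := cmd.Wait(); err != nil {
		log.Fatalf("minikube pause failed: %v", err)
	}
}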
=== CONT TestJSONOutput
helpers_test.go:171: Cleaning up "json-output-20210507220247-391940" profile ...
helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p json-output-20210507220247-391940
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p json-output-20210507220247-391940: (2.084652353s)
--- PASS: TestJSONOutput (165.17s)
--- PASS: TestJSONOutput/start (141.25s)
--- PASS: TestJSONOutput/start/Audit (0.00s)
--- PASS: TestJSONOutput/start/parallel (0.00s)
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
--- PASS: TestJSONOutput/pause (0.55s)
--- PASS: TestJSONOutput/pause/Audit (0.00s)
--- PASS: TestJSONOutput/pause/parallel (0.00s)
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
--- PASS: TestJSONOutput/unpause (0.52s)
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
--- PASS: TestJSONOutput/unpause/parallel (0.00s)
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
--- PASS: TestJSONOutput/stop (20.77s)
--- PASS: TestJSONOutput/stop/Audit (0.00s)
--- PASS: TestJSONOutput/stop/parallel (0.00s)
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
=== RUN TestErrorJSONOutput
json_output_test.go:146: (dbg) Run: out/minikube-linux-amd64 start -p json-output-error-20210507220532-391940 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:146: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20210507220532-391940 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (97.998874ms)
-- stdout --
{"data":{"currentstep":"0","message":"[json-output-error-20210507220532-391940] minikube v1.20.0 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"ab31f5d7-a561-4530-a939-bea3e6386b6b","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig"},"datacontenttype":"application/json","id":"f35e9b35-87c7-43b5-ba6b-29b5f07ffc05","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"b2fe58a9-a8ed-4c53-91d3-cb949223e3c8","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube"},"datacontenttype":"application/json","id":"c156cef0-2391-437c-84f9-3cfc15ca0c3a","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
{"data":{"message":"MINIKUBE_LOCATION=master"},"datacontenttype":"application/json","id":"afe5b606-480c-4b22-a30d-c2f4e3dcc054","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""},"datacontenttype":"application/json","id":"ac1f48f5-dfe8-46c0-aae8-92545e5f30a5","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"} -- /stdout -- helpers_test.go:171: Cleaning up "json-output-error-20210507220532-391940" profile ... helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p json-output-error-20210507220532-391940 --- PASS: TestErrorJSONOutput (0.41s) === RUN TestKicCustomNetwork === RUN TestKicCustomNetwork/create_custom_network kic_custom_network_test.go:57: (dbg) Run: out/minikube-linux-amd64 start -p docker-network-20210507220532-391940 --network= kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20210507220532-391940 --network=: (26.356092993s) kic_custom_network_test.go:101: (dbg) Run: docker network ls --format {{.Name}} helpers_test.go:171: Cleaning up "docker-network-20210507220532-391940" profile ... helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p docker-network-20210507220532-391940 helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20210507220532-391940: (2.144656019s) === RUN TestKicCustomNetwork/use_default_bridge_network kic_custom_network_test.go:57: (dbg) Run: out/minikube-linux-amd64 start -p docker-network-20210507220601-391940 --network=bridge kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20210507220601-391940 --network=bridge: (22.844981975s) kic_custom_network_test.go:101: (dbg) Run: docker network ls --format {{.Name}} helpers_test.go:171: Cleaning up "docker-network-20210507220601-391940" profile ... helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p docker-network-20210507220601-391940 helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20210507220601-391940: (2.3482113s) --- PASS: TestKicCustomNetwork (53.77s) --- PASS: TestKicCustomNetwork/create_custom_network (28.54s) --- PASS: TestKicCustomNetwork/use_default_bridge_network (25.23s) === RUN TestKicExistingNetwork kic_custom_network_test.go:101: (dbg) Run: docker network ls --format {{.Name}} kic_custom_network_test.go:93: (dbg) Run: out/minikube-linux-amd64 start -p existing-network-20210507220626-391940 --network=existing-network kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20210507220626-391940 --network=existing-network: (21.832666547s) helpers_test.go:171: Cleaning up "existing-network-20210507220626-391940" profile ... 
helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p existing-network-20210507220626-391940
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20210507220626-391940: (2.510194459s)
kic_custom_network_test.go:82: error deleting kic network, may need to delete manually: [unable to delete a network that is attached to a running container]
--- PASS: TestKicExistingNetwork (24.66s)
=== RUN TestMainNoArgs
main_test.go:68: (dbg) Run: out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)
=== RUN TestMultiNode
=== RUN TestMultiNode/serial
=== RUN TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:76: (dbg) Run: out/minikube-linux-amd64 start -p multinode-20210507220651-391940 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker --container-runtime=containerd
E0507 22:06:59.411601 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:06:59.416897 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:06:59.427132 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:06:59.447365 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:06:59.487609 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:06:59.567883 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:06:59.728257 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:07:00.048981 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:07:00.689932 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:07:01.970967 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:07:04.531539 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:07:09.652146 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:07:19.893083 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:07:40.373219 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:08:17.776899 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 22:08:21.334151 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:09:43.255194 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
multinode_test.go:76: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210507220651-391940 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker --container-runtime=containerd: (2m56.887864432s)
multinode_test.go:82: (dbg) Run: out/minikube-linux-amd64 -p multinode-20210507220651-391940 status --alsologtostderr
=== RUN TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:404: (dbg) Run: out/minikube-linux-amd64 kubectl -p multinode-20210507220651-391940 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:409: (dbg) Run: out/minikube-linux-amd64 kubectl -p multinode-20210507220651-391940 -- rollout status deployment/busybox
multinode_test.go:409: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20210507220651-391940 -- rollout status deployment/busybox: (2.285887306s)
multinode_test.go:415: (dbg) Run: out/minikube-linux-amd64 kubectl -p multinode-20210507220651-391940 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:427: (dbg) Run: out/minikube-linux-amd64 kubectl -p multinode-20210507220651-391940 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:435: (dbg) Run: out/minikube-linux-amd64 kubectl -p multinode-20210507220651-391940 -- exec busybox-6cd5ff77cb-n6bwx -- nslookup kubernetes.io
multinode_test.go:435: (dbg) Run: out/minikube-linux-amd64 kubectl -p multinode-20210507220651-391940 -- exec busybox-6cd5ff77cb-w7j2s -- nslookup kubernetes.io
multinode_test.go:444: (dbg) Run: out/minikube-linux-amd64 kubectl -p multinode-20210507220651-391940 -- exec busybox-6cd5ff77cb-n6bwx -- nslookup kubernetes.default
multinode_test.go:444: (dbg) Run: out/minikube-linux-amd64 kubectl -p multinode-20210507220651-391940 -- exec busybox-6cd5ff77cb-w7j2s -- nslookup kubernetes.default
multinode_test.go:451: (dbg) Run: out/minikube-linux-amd64 kubectl -p multinode-20210507220651-391940 -- exec busybox-6cd5ff77cb-n6bwx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:451: (dbg) Run: out/minikube-linux-amd64 kubectl -p multinode-20210507220651-391940 -- exec busybox-6cd5ff77cb-w7j2s -- nslookup kubernetes.default.svc.cluster.local
=== RUN TestMultiNode/serial/AddNode
multinode_test.go:101: (dbg) Run: out/minikube-linux-amd64 node add -p multinode-20210507220651-391940 -v 3 --alsologtostderr
multinode_test.go:101: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20210507220651-391940 -v 3 --alsologtostderr: (42.866523588s)
multinode_test.go:107: (dbg) Run: out/minikube-linux-amd64 -p multinode-20210507220651-391940 status --alsologtostderr
=== RUN TestMultiNode/serial/ProfileList
multinode_test.go:123: (dbg) Run: out/minikube-linux-amd64 profile list --output json
=== RUN TestMultiNode/serial/StopNode
multinode_test.go:163: (dbg) Run: out/minikube-linux-amd64 -p multinode-20210507220651-391940 node stop m03
multinode_test.go:163: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210507220651-391940 node stop m03: (1.319644578s)
multinode_test.go:169: (dbg) Run: out/minikube-linux-amd64 -p multinode-20210507220651-391940 status
multinode_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210507220651-391940 status: exit status 7 (573.975788ms)
-- stdout --
multinode-20210507220651-391940
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-20210507220651-391940-m02
type: Worker
host: Running
kubelet: Running

multinode-20210507220651-391940-m03
type: Worker
host: Stopped
kubelet: Stopped

-- /stdout --
multinode_test.go:176: (dbg) Run: out/minikube-linux-amd64 -p multinode-20210507220651-391940 status --alsologtostderr
multinode_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210507220651-391940 status --alsologtostderr: exit status 7 (557.394369ms)
-- stdout --
multinode-20210507220651-391940
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-20210507220651-391940-m02
type: Worker
host: Running
kubelet: Running

multinode-20210507220651-391940-m03
type: Worker
host: Stopped
kubelet: Stopped

-- /stdout --
** stderr **
I0507 22:10:38.940588 483365 out.go:291] Setting OutFile to fd 1 ...
I0507 22:10:38.940697 483365 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 22:10:38.940708 483365 out.go:304] Setting ErrFile to fd 2...
I0507 22:10:38.940712 483365 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 22:10:38.940808 483365 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/bin
I0507 22:10:38.940996 483365 out.go:298] Setting JSON to false
I0507 22:10:38.941019 483365 mustload.go:65] Loading cluster: multinode-20210507220651-391940
I0507 22:10:38.941280 483365 status.go:253] checking status of multinode-20210507220651-391940 ...
I0507 22:10:38.941701 483365 cli_runner.go:115] Run: docker container inspect multinode-20210507220651-391940 --format={{.State.Status}}
I0507 22:10:38.981126 483365 status.go:328] multinode-20210507220651-391940 host status = "Running" (err=<nil>)
I0507 22:10:38.981152 483365 host.go:66] Checking if "multinode-20210507220651-391940" exists ...
I0507 22:10:38.981390 483365 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210507220651-391940
I0507 22:10:39.017933 483365 host.go:66] Checking if "multinode-20210507220651-391940" exists ...
I0507 22:10:39.018235 483365 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0507 22:10:39.018279 483365 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210507220651-391940
I0507 22:10:39.055325 483365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/multinode-20210507220651-391940/id_rsa Username:docker}
I0507 22:10:39.143868 483365 ssh_runner.go:149] Run: systemctl --version
I0507 22:10:39.147498 483365 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0507 22:10:39.156783 483365 kubeconfig.go:93] found "multinode-20210507220651-391940" server: "https://192.168.58.2:8443"
I0507 22:10:39.156807 483365 api_server.go:148] Checking apiserver status ...
I0507 22:10:39.156834 483365 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0507 22:10:39.174718 483365 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1073/cgroup
I0507 22:10:39.181377 483365 api_server.go:164] apiserver freezer: "8:freezer:/docker/3e87985ac3319dca529cb14dfcc5c6349eb3e4097a5de78dccbe1a9c1d6c88ed/kubepods/burstable/pod2646d64d150a66972245a0ba74a26943/8fa65ea5262a6cbb08af051947d06beed1a98ca396d482b6153b5102688378df"
I0507 22:10:39.181428 483365 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/3e87985ac3319dca529cb14dfcc5c6349eb3e4097a5de78dccbe1a9c1d6c88ed/kubepods/burstable/pod2646d64d150a66972245a0ba74a26943/8fa65ea5262a6cbb08af051947d06beed1a98ca396d482b6153b5102688378df/freezer.state
I0507 22:10:39.187199 483365 api_server.go:186] freezer state: "THAWED"
I0507 22:10:39.187248 483365 api_server.go:223] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
I0507 22:10:39.191939 483365 api_server.go:249] https://192.168.58.2:8443/healthz returned 200: ok
I0507 22:10:39.191958 483365 status.go:419] multinode-20210507220651-391940 apiserver status = Running (err=<nil>)
I0507 22:10:39.191967 483365 status.go:255] multinode-20210507220651-391940 status: &{Name:multinode-20210507220651-391940 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0507 22:10:39.191987 483365 status.go:253] checking status of multinode-20210507220651-391940-m02 ...
I0507 22:10:39.192211 483365 cli_runner.go:115] Run: docker container inspect multinode-20210507220651-391940-m02 --format={{.State.Status}}
I0507 22:10:39.229873 483365 status.go:328] multinode-20210507220651-391940-m02 host status = "Running" (err=<nil>)
I0507 22:10:39.229898 483365 host.go:66] Checking if "multinode-20210507220651-391940-m02" exists ...
I0507 22:10:39.230165 483365 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210507220651-391940-m02
I0507 22:10:39.267382 483365 host.go:66] Checking if "multinode-20210507220651-391940-m02" exists ...
I0507 22:10:39.267704 483365 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0507 22:10:39.267749 483365 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210507220651-391940-m02
I0507 22:10:39.304365 483365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/multinode-20210507220651-391940-m02/id_rsa Username:docker}
I0507 22:10:39.391615 483365 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0507 22:10:39.400056 483365 status.go:255] multinode-20210507220651-391940-m02 status: &{Name:multinode-20210507220651-391940-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
I0507 22:10:39.400089 483365 status.go:253] checking status of multinode-20210507220651-391940-m03 ...
I0507 22:10:39.400347 483365 cli_runner.go:115] Run: docker container inspect multinode-20210507220651-391940-m03 --format={{.State.Status}}
I0507 22:10:39.438708 483365 status.go:328] multinode-20210507220651-391940-m03 host status = "Stopped" (err=<nil>)
I0507 22:10:39.438734 483365 status.go:341] host is not running, skipping remaining checks
I0507 22:10:39.438739 483365 status.go:255] multinode-20210507220651-391940-m03 status: &{Name:multinode-20210507220651-391940-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
=== RUN TestMultiNode/serial/StartAfterStop
multinode_test.go:197: (dbg) Run: docker version -f {{.Server.Version}}
multinode_test.go:207: (dbg) Run: out/minikube-linux-amd64 -p multinode-20210507220651-391940 node start m03 --alsologtostderr
multinode_test.go:207: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210507220651-391940 node start m03 --alsologtostderr: (34.635659666s)
multinode_test.go:214: (dbg) Run: out/minikube-linux-amd64 -p multinode-20210507220651-391940 status
multinode_test.go:228: (dbg) Run: kubectl get nodes
=== RUN TestMultiNode/serial/DeleteNode
multinode_test.go:317: (dbg) Run: out/minikube-linux-amd64 -p multinode-20210507220651-391940 node delete m03
multinode_test.go:317: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210507220651-391940 node delete m03: (4.802425419s)
multinode_test.go:323: (dbg) Run: out/minikube-linux-amd64 -p multinode-20210507220651-391940 status --alsologtostderr
multinode_test.go:337: (dbg) Run: docker volume ls
multinode_test.go:347: (dbg) Run: kubectl get nodes
multinode_test.go:355: (dbg) Run: kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
=== RUN TestMultiNode/serial/StopMultiNode
multinode_test.go:237: (dbg) Run: out/minikube-linux-amd64 -p multinode-20210507220651-391940 stop
E0507 22:11:59.412377 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
multinode_test.go:237: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210507220651-391940 stop: (41.284314891s)
multinode_test.go:243: (dbg) Run: out/minikube-linux-amd64 -p multinode-20210507220651-391940 status
multinode_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210507220651-391940 status: exit status 7 (137.348592ms)
-- stdout --
multinode-20210507220651-391940
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode-20210507220651-391940-m02
type: Worker
host: Stopped
kubelet: Stopped

-- /stdout --
multinode_test.go:250: (dbg) Run: out/minikube-linux-amd64 -p multinode-20210507220651-391940 status --alsologtostderr
multinode_test.go:250: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210507220651-391940 status --alsologtostderr: exit status 7 (130.59219ms)
-- stdout --
multinode-20210507220651-391940
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode-20210507220651-391940-m02
type: Worker
host: Stopped
kubelet: Stopped

-- /stdout --
** stderr **
I0507 22:12:01.858361 487075 out.go:291] Setting OutFile to fd 1 ...
I0507 22:12:01.858536 487075 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 22:12:01.858545 487075 out.go:304] Setting ErrFile to fd 2...
I0507 22:12:01.858548 487075 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 22:12:01.858633 487075 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/bin
I0507 22:12:01.858776 487075 out.go:298] Setting JSON to false
I0507 22:12:01.858795 487075 mustload.go:65] Loading cluster: multinode-20210507220651-391940
I0507 22:12:01.859033 487075 status.go:253] checking status of multinode-20210507220651-391940 ...
I0507 22:12:01.859384 487075 cli_runner.go:115] Run: docker container inspect multinode-20210507220651-391940 --format={{.State.Status}}
I0507 22:12:01.896299 487075 status.go:328] multinode-20210507220651-391940 host status = "Stopped" (err=<nil>)
I0507 22:12:01.896335 487075 status.go:341] host is not running, skipping remaining checks
I0507 22:12:01.896341 487075 status.go:255] multinode-20210507220651-391940 status: &{Name:multinode-20210507220651-391940 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
I0507 22:12:01.896395 487075 status.go:253] checking status of multinode-20210507220651-391940-m02 ...
I0507 22:12:01.896641 487075 cli_runner.go:115] Run: docker container inspect multinode-20210507220651-391940-m02 --format={{.State.Status}}
I0507 22:12:01.932850 487075 status.go:328] multinode-20210507220651-391940-m02 host status = "Stopped" (err=<nil>)
I0507 22:12:01.932871 487075 status.go:341] host is not running, skipping remaining checks
I0507 22:12:01.932877 487075 status.go:255] multinode-20210507220651-391940-m02 status: &{Name:multinode-20210507220651-391940-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
=== RUN TestMultiNode/serial/RestartMultiNode
multinode_test.go:267: (dbg) Run: docker version -f {{.Server.Version}}
multinode_test.go:277: (dbg) Run: out/minikube-linux-amd64 start -p multinode-20210507220651-391940 --wait=true -v=8 --alsologtostderr --driver=docker --container-runtime=containerd
E0507 22:12:27.095937 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:13:17.776410 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
multinode_test.go:277: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210507220651-391940 --wait=true -v=8 --alsologtostderr --driver=docker --container-runtime=containerd: (2m29.1286647s)
multinode_test.go:283: (dbg) Run: out/minikube-linux-amd64 -p multinode-20210507220651-391940 status --alsologtostderr
multinode_test.go:297: (dbg) Run: kubectl get nodes
multinode_test.go:305: (dbg) Run: kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
=== RUN TestMultiNode/serial/ValidateNameConflict
multinode_test.go:366: (dbg) Run: out/minikube-linux-amd64 node list -p multinode-20210507220651-391940
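
Aside: the RestartMultiNode check above confirms every node reports Ready by way of a go-template. The same template, runnable on its own (copied from the log; the outer quoting there is an artifact of how the test passes arguments):

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    # Prints one True/False per node; after the restart the test expects only "True" lines.
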
multinode_test.go:375: (dbg) Run: out/minikube-linux-amd64 start -p multinode-20210507220651-391940-m02 --driver=docker --container-runtime=containerd
multinode_test.go:375: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20210507220651-391940-m02 --driver=docker --container-runtime=containerd: exit status 14 (105.907871ms)
-- stdout --
* [multinode-20210507220651-391940-m02] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube
- MINIKUBE_LOCATION=master
-- /stdout --
** stderr **
! Profile name 'multinode-20210507220651-391940-m02' is duplicated with machine name 'multinode-20210507220651-391940-m02' in profile 'multinode-20210507220651-391940'
X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:383: (dbg) Run: out/minikube-linux-amd64 start -p multinode-20210507220651-391940-m03 --driver=docker --container-runtime=containerd
E0507 22:14:40.821140 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
multinode_test.go:383: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210507220651-391940-m03 --driver=docker --container-runtime=containerd: (46.645382456s)
multinode_test.go:390: (dbg) Run: out/minikube-linux-amd64 node add -p multinode-20210507220651-391940
multinode_test.go:390: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20210507220651-391940: exit status 80 (274.51484ms)
-- stdout --
* Adding node m03 to cluster multinode-20210507220651-391940
-- /stdout --
** stderr **
X Exiting due to GUEST_NODE_ADD: Node multinode-20210507220651-391940-m03 already exists in multinode-20210507220651-391940-m03 profile
* ╭─────────────────────────────────────────────────────────────────────────────╮
  │                                                                             │
  │    * If the above advice does not help, please let us know:                 │
  │      https://github.com/kubernetes/minikube/issues/new/choose               │
  │                                                                             │
  │    * Please attach the following file to the GitHub issue:                  │
  │    * - /tmp/minikube_node_5d50ea0fe0ecd435d89f51fbcdcec837640ed6a1_0.log    │
  │                                                                             │
  ╰─────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:395: (dbg) Run: out/minikube-linux-amd64 delete -p multinode-20210507220651-391940-m03
multinode_test.go:395: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20210507220651-391940-m03: (2.296185405s)
=== CONT TestMultiNode
helpers_test.go:171: Cleaning up "multinode-20210507220651-391940" profile ...
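
Aside: ValidateNameConflict leans on distinct exit codes rather than message text: 14 for the MK_USAGE duplicate-profile refusal and 80 for the GUEST_NODE_ADD failure above. A sketch of scripting against those codes, with the codes as observed in this run and an illustrative profile name:

    PROFILE=my-cluster   # illustrative
    out/minikube-linux-amd64 start -p "$PROFILE" --driver=docker --container-runtime=containerd
    rc=$?
    case "$rc" in
      0)  echo "started" ;;
      14) echo "usage error (MK_USAGE), e.g. duplicate profile name" ;;
      80) echo "guest error (GUEST_NODE_ADD)" ;;
      *)  echo "failed with exit code $rc" ;;
    esac
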
helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p multinode-20210507220651-391940
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20210507220651-391940: (4.764861723s)
--- PASS: TestMultiNode (514.57s)
--- PASS: TestMultiNode/serial (509.80s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (177.40s)
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.35s)
--- PASS: TestMultiNode/serial/AddNode (43.59s)
--- PASS: TestMultiNode/serial/ProfileList (0.30s)
--- PASS: TestMultiNode/serial/StopNode (2.45s)
--- PASS: TestMultiNode/serial/StartAfterStop (35.46s)
--- PASS: TestMultiNode/serial/DeleteNode (5.48s)
--- PASS: TestMultiNode/serial/StopMultiNode (41.55s)
--- PASS: TestMultiNode/serial/RestartMultiNode (149.83s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (49.38s)
=== RUN TestNetworkPlugins
=== PAUSE TestNetworkPlugins
=== RUN TestChangeNoneUser
none_test.go:39: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)
=== RUN TestPause
=== PAUSE TestPause
=== RUN TestDebPackageInstall
pkg_install_test.go:50: (dbg) Run: docker version
=== RUN TestDebPackageInstall/install_amd64_debian:sid
=== RUN TestDebPackageInstall/install_amd64_debian:sid/minikube
=== RUN TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver
pkg_install_test.go:104: (dbg) Run: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb": (10.685124162s)
=== RUN TestDebPackageInstall/install_amd64_debian:latest
=== RUN TestDebPackageInstall/install_amd64_debian:latest/minikube
=== RUN TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb": (9.786481117s)
=== RUN TestDebPackageInstall/install_amd64_debian:10
=== RUN TestDebPackageInstall/install_amd64_debian:10/minikube
=== RUN TestDebPackageInstall/install_amd64_debian:10/kvm2-driver
pkg_install_test.go:104: (dbg) Run: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb": (9.721428033s)
=== RUN TestDebPackageInstall/install_amd64_debian:9
=== RUN TestDebPackageInstall/install_amd64_debian:9/minikube
=== RUN TestDebPackageInstall/install_amd64_debian:9/kvm2-driver
pkg_install_test.go:104: (dbg) Run: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb": (8.112034527s)
=== RUN TestDebPackageInstall/install_amd64_ubuntu:latest
=== RUN TestDebPackageInstall/install_amd64_ubuntu:latest/minikube
=== RUN TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb": (16.028776212s)
=== RUN TestDebPackageInstall/install_amd64_ubuntu:20.10
=== RUN TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube
=== RUN TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver
pkg_install_test.go:104: (dbg) Run: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb": (13.331297713s)
=== RUN TestDebPackageInstall/install_amd64_ubuntu:20.04
=== RUN TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube
=== RUN TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb": (14.524606265s)
=== RUN TestDebPackageInstall/install_amd64_ubuntu:18.04
=== RUN TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube
=== RUN TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb"
E0507 22:16:59.411352 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb": (12.61121609s)
--- PASS: TestDebPackageInstall (94.85s)
--- PASS: TestDebPackageInstall/install_amd64_debian:sid (10.69s)
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/minikube (0.00s)
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (10.69s)
--- PASS: TestDebPackageInstall/install_amd64_debian:latest (9.79s)
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/minikube (0.00s)
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (9.79s)
--- PASS: TestDebPackageInstall/install_amd64_debian:10 (9.72s)
--- PASS: TestDebPackageInstall/install_amd64_debian:10/minikube (0.00s)
--- PASS: TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (9.72s)
--- PASS: TestDebPackageInstall/install_amd64_debian:9 (8.11s)
--- PASS: TestDebPackageInstall/install_amd64_debian:9/minikube (0.00s)
--- PASS: TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (8.11s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest (16.03s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0.00s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (16.03s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10 (13.33s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0.00s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (13.33s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04 (14.52s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0.00s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (14.52s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04 (12.61s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0.00s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (12.61s)
=== RUN TestPreload
preload_test.go:48: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-20210507221700-391940 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.17.0
E0507 22:18:17.776519 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210507221700-391940 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.17.0: (1m29.52970925s)
preload_test.go:61: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-20210507221700-391940 -- sudo crictl pull busybox
preload_test.go:61: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20210507221700-391940 -- sudo crictl pull busybox: (1.116569677s)
preload_test.go:71: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-20210507221700-391940 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --container-runtime=containerd --kubernetes-version=v1.17.3
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210507221700-391940 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --container-runtime=containerd --kubernetes-version=v1.17.3: (39.447351953s)
preload_test.go:80: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-20210507221700-391940 -- sudo crictl image ls
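
Aside: TestPreload's assertion is that an image pulled into the v1.17.0 cluster survives the restart onto v1.17.3. The same sequence by hand, using the commands from the log; the profile name here is illustrative:

    MINIKUBE=out/minikube-linux-amd64
    P=test-preload-demo   # illustrative profile name

    "$MINIKUBE" start -p "$P" --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.17.0
    "$MINIKUBE" ssh -p "$P" -- sudo crictl pull busybox              # seed an image outside the preload tarball
    "$MINIKUBE" start -p "$P" --driver=docker --container-runtime=containerd --kubernetes-version=v1.17.3
    "$MINIKUBE" ssh -p "$P" -- sudo crictl image ls | grep busybox   # the image should still be present
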
helpers_test.go:171: Cleaning up "test-preload-20210507221700-391940" profile ...
helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p test-preload-20210507221700-391940
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20210507221700-391940: (2.789211788s)
--- PASS: TestPreload (133.16s)
=== RUN TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
=== RUN TestScheduledStopUnix
scheduled_stop_test.go:126: (dbg) Run: out/minikube-linux-amd64 start -p scheduled-stop-20210507221913-391940 --memory=2048 --driver=docker --container-runtime=containerd
scheduled_stop_test.go:126: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20210507221913-391940 --memory=2048 --driver=docker --container-runtime=containerd: (46.879518455s)
scheduled_stop_test.go:135: (dbg) Run: out/minikube-linux-amd64 stop -p scheduled-stop-20210507221913-391940 --schedule 5m
scheduled_stop_test.go:189: (dbg) Run: out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20210507221913-391940 -n scheduled-stop-20210507221913-391940
scheduled_stop_test.go:167: signal error was: <nil>
scheduled_stop_test.go:135: (dbg) Run: out/minikube-linux-amd64 stop -p scheduled-stop-20210507221913-391940 --schedule 8s
scheduled_stop_test.go:167: signal error was: os: process already finished
scheduled_stop_test.go:135: (dbg) Run: out/minikube-linux-amd64 stop -p scheduled-stop-20210507221913-391940 --cancel-scheduled
scheduled_stop_test.go:174: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210507221913-391940 -n scheduled-stop-20210507221913-391940
scheduled_stop_test.go:203: (dbg) Run: out/minikube-linux-amd64 status -p scheduled-stop-20210507221913-391940
scheduled_stop_test.go:135: (dbg) Run: out/minikube-linux-amd64 stop -p scheduled-stop-20210507221913-391940 --schedule 5s
scheduled_stop_test.go:167: signal error was: os: process already finished
scheduled_stop_test.go:203: (dbg) Run: out/minikube-linux-amd64 status -p scheduled-stop-20210507221913-391940
scheduled_stop_test.go:174: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210507221913-391940 -n scheduled-stop-20210507221913-391940
scheduled_stop_test.go:174: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210507221913-391940 -n scheduled-stop-20210507221913-391940
scheduled_stop_test.go:174: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210507221913-391940 -n scheduled-stop-20210507221913-391940
scheduled_stop_test.go:174: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210507221913-391940 -n scheduled-stop-20210507221913-391940
scheduled_stop_test.go:174: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210507221913-391940 -n scheduled-stop-20210507221913-391940
scheduled_stop_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210507221913-391940 -n scheduled-stop-20210507221913-391940: exit status 7 (97.064306ms)
-- stdout --
Stopped
-- /stdout --
scheduled_stop_test.go:174: status error: exit status 7 (may be ok)
helpers_test.go:171: Cleaning up "scheduled-stop-20210507221913-391940" profile ...
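
Aside: the scheduled-stop flow above is driven entirely by three commands: arm a stop, read the countdown, cancel. A minimal sketch with an illustrative profile name, commands as in the log:

    MINIKUBE=out/minikube-linux-amd64
    P=scheduled-stop-demo   # illustrative

    "$MINIKUBE" stop -p "$P" --schedule 5m                     # arm a stop five minutes out
    "$MINIKUBE" status -p "$P" --format='{{.TimeToStop}}'      # read the pending countdown
    "$MINIKUBE" stop -p "$P" --cancel-scheduled                # disarm it again
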
helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p scheduled-stop-20210507221913-391940
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20210507221913-391940: (2.106647556s)
--- PASS: TestScheduledStopUnix (71.25s)
=== RUN TestSkaffold
skaffold_test.go:43: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)
=== RUN TestStartStop
=== PAUSE TestStartStop
=== RUN TestInsufficientStorage
status_test.go:50: (dbg) Run: out/minikube-linux-amd64 start -p insufficient-storage-20210507222025-391940 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20210507222025-391940 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=containerd: exit status 26 (6.066984227s)
-- stdout --
{"data":{"currentstep":"0","message":"[insufficient-storage-20210507222025-391940] minikube v1.20.0 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"a2d35724-0360-4159-ab8c-8e8c11cf2d8b","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig"},"datacontenttype":"application/json","id":"306106c7-ebcc-40d5-9a9c-94f38a03e85d","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"e254b9cd-53d2-4bf1-aabc-ab7f626758aa","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube"},"datacontenttype":"application/json","id":"a75ce941-af16-40ab-88b0-fc4cb4954fde","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
{"data":{"message":"MINIKUBE_LOCATION=master"},"datacontenttype":"application/json","id":"ff6bf701-7039-408b-92fb-7a3c726c45a5","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
{"data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"},"datacontenttype":"application/json","id":"9a99ef5f-99ed-4658-bf9b-afd208aa3311","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
{"data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"},"datacontenttype":"application/json","id":"3ed296c9-790e-4bc4-8870-d631a1d4932f","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
{"data":{"message":"Your cgroup does not allow setting memory."},"datacontenttype":"application/json","id":"05aed3f7-3dcc-4263-8852-e4f552f0b240","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.warning"}
{"data":{"message":"More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities"},"datacontenttype":"application/json","id":"132aba9c-1e97-47a5-85b4-cebe617df81f","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
{"data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20210507222025-391940 in cluster insufficient-storage-20210507222025-391940","name":"Starting Node","totalsteps":"19"},"datacontenttype":"application/json","id":"5dfe4c8d-95cd-43ee-a475-c310fc4911cb","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
{"data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"},"datacontenttype":"application/json","id":"18182f99-cbee-4916-8948-41d0e3a75672","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
{"data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"},"datacontenttype":"application/json","id":"26614e49-178d-4102-9237-70de4c917dd9","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
{"data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""},"datacontenttype":"application/json","id":"d9cbe3c5-0e58-45a6-acaa-0d106d8707fe","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}
-- /stdout --
status_test.go:76: (dbg) Run: out/minikube-linux-amd64 status -p insufficient-storage-20210507222025-391940 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20210507222025-391940 --output=json --layout=cluster: exit status 7 (275.405587ms)
-- stdout --
{"Name":"insufficient-storage-20210507222025-391940","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.20.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210507222025-391940","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr **
E0507 22:20:31.507757 530543 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210507222025-391940" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run: out/minikube-linux-amd64 status -p insufficient-storage-20210507222025-391940 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20210507222025-391940 --output=json --layout=cluster: exit status 7 (275.476286ms)
-- stdout --
{"Name":"insufficient-storage-20210507222025-391940","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.20.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210507222025-391940","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr **
E0507 22:20:31.784164 530602 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210507222025-391940" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig
E0507 22:20:31.794777 530602 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/insufficient-storage-20210507222025-391940/events.json: no such file or directory
** /stderr **
helpers_test.go:171: Cleaning up "insufficient-storage-20210507222025-391940" profile ...
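
Aside: both the start event stream and the cluster status above are machine-readable, so a wrapper can branch on structured fields instead of scraping text. A sketch using jq, with the field names exactly as they appear in the output above and an illustrative profile name:

    # Exit code 26 signals RSRC_DOCKER_STORAGE; the JSON carries the details.
    out/minikube-linux-amd64 status -p "$PROFILE" --output=json --layout=cluster \
      | jq -r '.StatusName'     # "InsufficientStorage" (StatusCode 507) in the run above

    # The start stream is line-delimited JSON; pull out any error event's name:
    out/minikube-linux-amd64 start -p "$PROFILE" --output=json --driver=docker --container-runtime=containerd \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name'
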
helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p insufficient-storage-20210507222025-391940
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20210507222025-391940: (2.259848814s)
--- PASS: TestInsufficientStorage (8.88s)
=== RUN TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== RUN TestStoppedBinaryUpgrade
=== PAUSE TestStoppedBinaryUpgrade
=== RUN TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== RUN TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT TestOffline
=== CONT TestStartStop
=== CONT TestKubernetesUpgrade
=== RUN TestStartStop/group
=== RUN TestStartStop/group/old-k8s-version
=== CONT TestKubernetesUpgrade
version_upgrade_test.go:227: (dbg) Run: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210507222034-391940 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
=== PAUSE TestStartStop/group/old-k8s-version
=== CONT TestOffline
aab_offline_test.go:55: (dbg) Run: out/minikube-linux-amd64 start -p offline-containerd-20210507222034-391940 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker --container-runtime=containerd
=== CONT TestPause
=== RUN TestPause/serial
=== RUN TestPause/serial/Start
=== RUN TestStartStop/group/newest-cni
=== PAUSE TestStartStop/group/newest-cni
=== CONT TestPause/serial/Start
pause_test.go:77: (dbg) Run: out/minikube-linux-amd64 start -p pause-20210507222034-391940 --memory=2048 --install-addons=false --wait=all --driver=docker --container-runtime=containerd
=== RUN TestStartStop/group/default-k8s-different-port
=== PAUSE TestStartStop/group/default-k8s-different-port
=== RUN TestStartStop/group/no-preload
=== PAUSE TestStartStop/group/no-preload
=== RUN TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== RUN TestStartStop/group/embed-certs
=== PAUSE TestStartStop/group/embed-certs
=== CONT TestNetworkPlugins
=== RUN TestNetworkPlugins/group
=== RUN TestNetworkPlugins/group/auto
=== PAUSE TestNetworkPlugins/group/auto
=== RUN TestNetworkPlugins/group/kubenet
=== PAUSE TestNetworkPlugins/group/kubenet
=== RUN TestNetworkPlugins/group/bridge
=== PAUSE TestNetworkPlugins/group/bridge
=== RUN TestNetworkPlugins/group/enable-default-cni
=== PAUSE TestNetworkPlugins/group/enable-default-cni
=== RUN TestNetworkPlugins/group/flannel
net_test.go:69: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
=== RUN TestNetworkPlugins/group/kindnet
=== PAUSE TestNetworkPlugins/group/kindnet
=== RUN TestNetworkPlugins/group/false
=== PAUSE TestNetworkPlugins/group/false
=== RUN TestNetworkPlugins/group/custom-weave
=== PAUSE TestNetworkPlugins/group/custom-weave
=== RUN TestNetworkPlugins/group/calico
=== PAUSE TestNetworkPlugins/group/calico
=== RUN TestNetworkPlugins/group/cilium
=== PAUSE TestNetworkPlugins/group/cilium
=== CONT TestForceSystemdFlag
docker_test.go:85: (dbg) Run: out/minikube-linux-amd64 start -p force-systemd-flag-20210507222034-391940 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=containerd
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20210507222034-391940 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=containerd: (1m7.782317233s)
docker_test.go:113: (dbg) Run: out/minikube-linux-amd64 -p force-systemd-flag-20210507222034-391940 ssh "cat /etc/containerd/config.toml"
helpers_test.go:171: Cleaning up "force-systemd-flag-20210507222034-391940" profile ...
helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p force-systemd-flag-20210507222034-391940
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20210507222034-391940: (2.520936965s)
--- PASS: TestForceSystemdFlag (70.58s)
=== CONT TestCertOptions
cert_options_test.go:47: (dbg) Run: out/minikube-linux-amd64 start -p cert-options-20210507222144-391940 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --container-runtime=containerd
=== CONT TestKubernetesUpgrade
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210507222034-391940 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd: (1m13.727755454s)
version_upgrade_test.go:232: (dbg) Run: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210507222034-391940
version_upgrade_test.go:232: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210507222034-391940: (1.431800366s)
version_upgrade_test.go:237: (dbg) Run: out/minikube-linux-amd64 -p kubernetes-upgrade-20210507222034-391940 status --format={{.Host}}
version_upgrade_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20210507222034-391940 status --format={{.Host}}: exit status 7 (109.555476ms)
-- stdout --
Stopped
-- /stdout --
version_upgrade_test.go:239: status error: exit status 7 (may be ok)
version_upgrade_test.go:248: (dbg) Run: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210507222034-391940 --memory=2200 --kubernetes-version=v1.22.0-alpha.1 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
E0507 22:21:59.411227 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
=== CONT TestCertOptions
cert_options_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20210507222144-391940 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --container-runtime=containerd: (45.773058558s)
cert_options_test.go:58: (dbg) Run: out/minikube-linux-amd64 -p cert-options-20210507222144-391940 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:73: (dbg) Run: kubectl --context cert-options-20210507222144-391940 config view
helpers_test.go:171: Cleaning up "cert-options-20210507222144-391940" profile ...
helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p cert-options-20210507222144-391940
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20210507222144-391940: (2.510124914s)
--- PASS: TestCertOptions (48.61s)
=== CONT TestForceSystemdEnv
docker_test.go:136: (dbg) Run: out/minikube-linux-amd64 start -p force-systemd-env-20210507222233-391940 --memory=2048 --alsologtostderr -v=5 --driver=docker --container-runtime=containerd
=== CONT TestKubernetesUpgrade
version_upgrade_test.go:248: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210507222034-391940 --memory=2200 --kubernetes-version=v1.22.0-alpha.1 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd: (1m7.134351741s)
version_upgrade_test.go:253: (dbg) Run: kubectl --context kubernetes-upgrade-20210507222034-391940 version --output=json
version_upgrade_test.go:272: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:274: (dbg) Run: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210507222034-391940 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker --container-runtime=containerd
version_upgrade_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210507222034-391940 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker --container-runtime=containerd: exit status 106 (126.443112ms)
-- stdout --
* [kubernetes-upgrade-20210507222034-391940] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube
- MINIKUBE_LOCATION=master
-- /stdout --
** stderr **
X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.22.0-alpha.1 cluster to v1.14.0
* Suggestion:

  1) Recreate the cluster with Kubernetes 1.14.0, by running:

  minikube delete -p kubernetes-upgrade-20210507222034-391940
  minikube start -p kubernetes-upgrade-20210507222034-391940 --kubernetes-version=v1.14.0

  2) Create a second cluster with Kubernetes 1.14.0, by running:

  minikube start -p kubernetes-upgrade-20210507222034-3919402 --kubernetes-version=v1.14.0

  3) Use the existing cluster at version Kubernetes 1.22.0-alpha.1, by running:

  minikube start -p kubernetes-upgrade-20210507222034-391940 --kubernetes-version=v1.22.0-alpha.1
** /stderr **
version_upgrade_test.go:278: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:280: (dbg) Run: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210507222034-391940 --memory=2200 --kubernetes-version=v1.22.0-alpha.1 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
E0507 22:23:17.777054 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
=== CONT TestForceSystemdEnv
docker_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20210507222233-391940 --memory=2048 --alsologtostderr -v=5 --driver=docker --container-runtime=containerd: (44.735959164s)
docker_test.go:113: (dbg) Run: out/minikube-linux-amd64 -p force-systemd-env-20210507222233-391940 ssh "cat /etc/containerd/config.toml"
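
Aside: TestKubernetesUpgrade walks a single cluster up one Kubernetes version, then proves a downgrade is refused (exit 106, K8S_DOWNGRADE_UNSUPPORTED above). The same walk by hand, with the versions from this run and an illustrative profile name:

    MINIKUBE=out/minikube-linux-amd64
    P=kubernetes-upgrade-demo   # illustrative

    "$MINIKUBE" start -p "$P" --kubernetes-version=v1.14.0 --driver=docker --container-runtime=containerd
    "$MINIKUBE" stop -p "$P"
    "$MINIKUBE" start -p "$P" --kubernetes-version=v1.22.0-alpha.1 --driver=docker --container-runtime=containerd
    "$MINIKUBE" start -p "$P" --kubernetes-version=v1.14.0 --driver=docker --container-runtime=containerd \
      || echo "downgrade refused with exit $?"   # expect 106, as in the run above
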
helpers_test.go:171: Cleaning up "force-systemd-env-20210507222233-391940" profile ...
helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p force-systemd-env-20210507222233-391940
=== CONT TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-20210507222034-391940 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker --container-runtime=containerd: (2m45.853737511s)
helpers_test.go:171: Cleaning up "offline-containerd-20210507222034-391940" profile ...
helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p offline-containerd-20210507222034-391940
=== CONT TestPause/serial/Start
pause_test.go:77: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210507222034-391940 --memory=2048 --install-addons=false --wait=all --driver=docker --container-runtime=containerd: (2m47.335059448s)
=== RUN TestPause/serial/SecondStartNoReconfiguration
pause_test.go:89: (dbg) Run: out/minikube-linux-amd64 start -p pause-20210507222034-391940 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
=== CONT TestForceSystemdEnv
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20210507222233-391940: (3.183463089s)
--- PASS: TestForceSystemdEnv (48.31s)
=== CONT TestStoppedBinaryUpgrade
version_upgrade_test.go:189: (dbg) Run: /tmp/minikube-v1.8.0.179155049.exe start -p stopped-upgrade-20210507222321-391940 --memory=2200 --vm-driver=docker --container-runtime=containerd
E0507 22:23:22.457242 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
=== CONT TestOffline
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-20210507222034-391940: (3.729093332s)
--- PASS: TestOffline (169.58s)
=== CONT TestRunningBinaryUpgrade
version_upgrade_test.go:119: (dbg) Run: /tmp/minikube-v1.9.0.011766915.exe start -p running-upgrade-20210507222323-391940 --memory=2200 --vm-driver=docker --container-runtime=containerd
=== CONT TestKubernetesUpgrade
version_upgrade_test.go:280: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210507222034-391940 --memory=2200 --kubernetes-version=v1.22.0-alpha.1 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd: (37.082624006s)
helpers_test.go:171: Cleaning up "kubernetes-upgrade-20210507222034-391940" profile ...
helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210507222034-391940
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210507222034-391940: (3.917881691s)
--- PASS: TestKubernetesUpgrade (183.61s)
=== CONT TestKVMDriverInstallOrUpdate
> docker-machine-driver-kvm2....: 65 B / 65 B [----------] 100.00% ? p/s 0s
=== CONT TestPause/serial/SecondStartNoReconfiguration
pause_test.go:89: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210507222034-391940 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd: (17.141134564s)
=== RUN TestPause/serial/Pause
pause_test.go:107: (dbg) Run: out/minikube-linux-amd64 pause -p pause-20210507222034-391940 --alsologtostderr -v=5
> docker-machine-driver-kvm2: 3.56 MiB / 48.57 MiB [>_______] 7.33% ? p/s ?
> docker-machine-driver-kvm2: 7.98 MiB / 48.57 MiB [->_____] 16.44% ? p/s ?
> docker-machine-driver-kvm2: 12.55 MiB / 48.57 MiB [->____] 25.83% ? p/s ?
=== RUN TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run: out/minikube-linux-amd64 status -p pause-20210507222034-391940 --output=json --layout=cluster
> docker-machine-driver-kvm2: 17.23 MiB / 48.57 MiB 35.48% 22.65 MiB p/s E
> docker-machine-driver-kvm2: 22.14 MiB / 48.57 MiB 45.58% 22.65 MiB p/s E
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20210507222034-391940 --output=json --layout=cluster: exit status 2 (498.056677ms)
-- stdout --
{"Name":"pause-20210507222034-391940","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 8 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.20.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20210507222034-391940","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
=== RUN TestPause/serial/Unpause
pause_test.go:118: (dbg) Run: out/minikube-linux-amd64 unpause -p pause-20210507222034-391940 --alsologtostderr -v=5
> docker-machine-driver-kvm2: 27.11 MiB / 48.57 MiB 55.81% 22.65 MiB p/s E
> docker-machine-driver-kvm2: 32.94 MiB / 48.57 MiB 67.81% 22.89 MiB p/s E
> docker-machine-driver-kvm2: 38.58 MiB / 48.57 MiB 79.42% 22.89 MiB p/s E
> docker-machine-driver-kvm2: 44.97 MiB / 48.57 MiB 92.58% 22.89 MiB p/s E
> docker-machine-driver-kvm2: 48.57 MiB / 48.57 MiB 100.00% 28.35 MiB p/s
=== RUN TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Run: out/minikube-linux-amd64 pause -p pause-20210507222034-391940 --alsologtostderr -v=5
> docker-machine-driver-kvm2....: 65 B / 65 B [----------] 100.00% ? p/s 0s
> docker-machine-driver-kvm2: 7.10 MiB / 48.57 MiB [->_____] 14.61% ? p/s ?
> docker-machine-driver-kvm2: 27.14 MiB / 48.57 MiB [--->__] 55.88% ? p/s ?
> docker-machine-driver-kvm2: 45.88 MiB / 48.57 MiB [----->] 94.45% ? p/s ?
    > docker-machine-driver-kvm2: 48.57 MiB / 48.57 MiB 100.00% 86.09 MiB p/s
--- PASS: TestKVMDriverInstallOrUpdate (4.54s)
=== CONT TestMissingContainerUpgrade
version_upgrade_test.go:314: (dbg) Run: /tmp/minikube-v1.9.1.794961713.exe start -p missing-upgrade-20210507222342-391940 --memory=2200 --driver=docker --container-runtime=containerd
=== CONT TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Done: out/minikube-linux-amd64 pause -p pause-20210507222034-391940 --alsologtostderr -v=5: (7.505324025s)
=== RUN TestPause/serial/DeletePaused
pause_test.go:129: (dbg) Run: out/minikube-linux-amd64 delete -p pause-20210507222034-391940 --alsologtostderr -v=5
=== CONT TestStoppedBinaryUpgrade
version_upgrade_test.go:189: (dbg) Done: /tmp/minikube-v1.8.0.179155049.exe start -p stopped-upgrade-20210507222321-391940 --memory=2200 --vm-driver=docker --container-runtime=containerd: (1m11.520522377s)
version_upgrade_test.go:198: (dbg) Run: /tmp/minikube-v1.8.0.179155049.exe -p stopped-upgrade-20210507222321-391940 stop
version_upgrade_test.go:198: (dbg) Done: /tmp/minikube-v1.8.0.179155049.exe -p stopped-upgrade-20210507222321-391940 stop: (10.616160329s)
version_upgrade_test.go:204: (dbg) Run: out/minikube-linux-amd64 start -p stopped-upgrade-20210507222321-391940 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
=== CONT TestRunningBinaryUpgrade
version_upgrade_test.go:119: (dbg) Done: /tmp/minikube-v1.9.0.011766915.exe start -p running-upgrade-20210507222323-391940 --memory=2200 --vm-driver=docker --container-runtime=containerd: (1m30.608727762s)
version_upgrade_test.go:129: (dbg) Run: out/minikube-linux-amd64 start -p running-upgrade-20210507222323-391940 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
=== CONT TestMissingContainerUpgrade
version_upgrade_test.go:314: (dbg) Done: /tmp/minikube-v1.9.1.794961713.exe start -p missing-upgrade-20210507222342-391940 --memory=2200 --driver=docker --container-runtime=containerd: (1m17.966144714s)
version_upgrade_test.go:323: (dbg) Run: docker stop missing-upgrade-20210507222342-391940
version_upgrade_test.go:323: (dbg) Done: docker stop missing-upgrade-20210507222342-391940: (11.781696688s)
version_upgrade_test.go:328: (dbg) Run: docker rm missing-upgrade-20210507222342-391940
version_upgrade_test.go:334: (dbg) Run: out/minikube-linux-amd64 start -p missing-upgrade-20210507222342-391940 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
=== CONT TestStoppedBinaryUpgrade
version_upgrade_test.go:204: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-20210507222321-391940 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd: (40.371756234s)
=== RUN TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:211: (dbg) Run: out/minikube-linux-amd64 logs -p stopped-upgrade-20210507222321-391940
=== CONT TestStoppedBinaryUpgrade
helpers_test.go:171: Cleaning up "stopped-upgrade-20210507222321-391940" profile ...
helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p stopped-upgrade-20210507222321-391940
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p stopped-upgrade-20210507222321-391940: (2.646603745s)
--- PASS: TestStoppedBinaryUpgrade (126.30s)
    --- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.73s)
=== CONT TestStartStop/group/old-k8s-version
=== RUN TestStartStop/group/old-k8s-version/serial
=== RUN TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:158: (dbg) Run: out/minikube-linux-amd64 start -p old-k8s-version-20210507222527-391940 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.14.0
=== CONT TestRunningBinaryUpgrade
version_upgrade_test.go:129: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20210507222323-391940 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd: (40.242727636s)
helpers_test.go:171: Cleaning up "running-upgrade-20210507222323-391940" profile ...
helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p running-upgrade-20210507222323-391940
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20210507222323-391940: (2.952446564s)
--- PASS: TestRunningBinaryUpgrade (134.17s)
=== CONT TestStartStop/group/no-preload
=== RUN TestStartStop/group/no-preload/serial
=== RUN TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:158: (dbg) Run: out/minikube-linux-amd64 start -p no-preload-20210507222537-391940 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.22.0-alpha.1
start_stop_delete_test.go:158: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210507222537-391940 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.22.0-alpha.1: (1m14.58259082s)
=== RUN TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:168: (dbg) Run: kubectl --context no-preload-20210507222537-391940 create -f testdata/busybox.yaml
start_stop_delete_test.go:168: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:335: "busybox" [fe25abc6-bcc0-464a-b2c7-b2a80fd29159] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox]) helpers_test.go:335: "busybox" [fe25abc6-bcc0-464a-b2c7-b2a80fd29159] Running E0507 22:26:59.411708 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory start_stop_delete_test.go:168: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.011127977s start_stop_delete_test.go:168: (dbg) Run: kubectl --context no-preload-20210507222537-391940 exec busybox -- /bin/sh -c "ulimit -n" === RUN TestStartStop/group/no-preload/serial/Stop start_stop_delete_test.go:175: (dbg) Run: out/minikube-linux-amd64 stop -p no-preload-20210507222537-391940 --alsologtostderr -v=3 start_stop_delete_test.go:175: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20210507222537-391940 --alsologtostderr -v=3: (20.662570751s) === RUN TestStartStop/group/no-preload/serial/EnableAddonAfterStop start_stop_delete_test.go:186: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210507222537-391940 -n no-preload-20210507222537-391940 start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210507222537-391940 -n no-preload-20210507222537-391940: exit status 7 (97.309824ms) -- stdout -- Stopped -- /stdout -- start_stop_delete_test.go:186: status error: exit status 7 (may be ok) start_stop_delete_test.go:193: (dbg) Run: out/minikube-linux-amd64 addons enable dashboard -p no-preload-20210507222537-391940 === RUN TestStartStop/group/no-preload/serial/SecondStart start_stop_delete_test.go:203: (dbg) Run: out/minikube-linux-amd64 start -p no-preload-20210507222537-391940 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.22.0-alpha.1 === CONT TestStartStop/group/old-k8s-version/serial/FirstStart start_stop_delete_test.go:158: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210507222527-391940 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.14.0: (2m12.550892884s) === RUN TestStartStop/group/old-k8s-version/serial/DeployApp start_stop_delete_test.go:168: (dbg) Run: kubectl --context old-k8s-version-20210507222527-391940 create -f testdata/busybox.yaml start_stop_delete_test.go:168: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ... 
helpers_test.go:335: "busybox" [6f550da1-af83-11eb-988e-0242ee37c829] Pending helpers_test.go:335: "busybox" [6f550da1-af83-11eb-988e-0242ee37c829] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox]) helpers_test.go:335: "busybox" [6f550da1-af83-11eb-988e-0242ee37c829] Running start_stop_delete_test.go:168: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.019723608s start_stop_delete_test.go:168: (dbg) Run: kubectl --context old-k8s-version-20210507222527-391940 exec busybox -- /bin/sh -c "ulimit -n" === RUN TestStartStop/group/old-k8s-version/serial/Stop start_stop_delete_test.go:175: (dbg) Run: out/minikube-linux-amd64 stop -p old-k8s-version-20210507222527-391940 --alsologtostderr -v=3 start_stop_delete_test.go:175: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20210507222527-391940 --alsologtostderr -v=3: (20.882399405s) === RUN TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop start_stop_delete_test.go:186: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210507222527-391940 -n old-k8s-version-20210507222527-391940 start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210507222527-391940 -n old-k8s-version-20210507222527-391940: exit status 7 (97.724484ms) -- stdout -- Stopped -- /stdout -- start_stop_delete_test.go:186: status error: exit status 7 (may be ok) start_stop_delete_test.go:193: (dbg) Run: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20210507222527-391940 === RUN TestStartStop/group/old-k8s-version/serial/SecondStart start_stop_delete_test.go:203: (dbg) Run: out/minikube-linux-amd64 start -p old-k8s-version-20210507222527-391940 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.14.0 E0507 22:28:17.776445 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory === CONT TestStartStop/group/no-preload/serial/SecondStart start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210507222537-391940 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.22.0-alpha.1: (1m9.725013154s) start_stop_delete_test.go:209: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210507222537-391940 -n no-preload-20210507222537-391940 === RUN TestStartStop/group/no-preload/serial/UserAppExistsAfterStop start_stop_delete_test.go:221: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ... 
helpers_test.go:335: "kubernetes-dashboard-6fcdf4f6d-vc47l" [4dfc986b-a70c-4a01-9e24-4770a3a6b392] Running start_stop_delete_test.go:221: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012708023s === RUN TestStartStop/group/no-preload/serial/AddonExistsAfterStop start_stop_delete_test.go:232: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ... helpers_test.go:335: "kubernetes-dashboard-6fcdf4f6d-vc47l" [4dfc986b-a70c-4a01-9e24-4770a3a6b392] Running start_stop_delete_test.go:232: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007515954s === RUN TestStartStop/group/no-preload/serial/VerifyKubernetesImages start_stop_delete_test.go:240: (dbg) Run: out/minikube-linux-amd64 ssh -p no-preload-20210507222537-391940 "sudo crictl images -o json" start_stop_delete_test.go:240: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5 start_stop_delete_test.go:240: Found non-minikube image: library/busybox:1.28.4-glibc start_stop_delete_test.go:240: Found non-minikube image: library/minikube-local-cache-test:functional-20210507215728-391940 === RUN TestStartStop/group/no-preload/serial/Pause start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 pause -p no-preload-20210507222537-391940 --alsologtostderr -v=1 start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210507222537-391940 -n no-preload-20210507222537-391940 start_stop_delete_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210507222537-391940 -n no-preload-20210507222537-391940: exit status 2 (307.149264ms) -- stdout -- Paused -- /stdout -- start_stop_delete_test.go:247: status error: exit status 2 (may be ok) start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20210507222537-391940 -n no-preload-20210507222537-391940 start_stop_delete_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20210507222537-391940 -n no-preload-20210507222537-391940: exit status 2 (324.146903ms) -- stdout -- Stopped -- /stdout -- start_stop_delete_test.go:247: status error: exit status 2 (may be ok) start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 unpause -p no-preload-20210507222537-391940 --alsologtostderr -v=1 start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210507222537-391940 -n no-preload-20210507222537-391940 start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20210507222537-391940 -n no-preload-20210507222537-391940 === CONT TestStartStop/group/no-preload/serial start_stop_delete_test.go:134: (dbg) Run: out/minikube-linux-amd64 delete -p no-preload-20210507222537-391940 start_stop_delete_test.go:134: (dbg) Done: out/minikube-linux-amd64 delete -p no-preload-20210507222537-391940: (3.03626136s) start_stop_delete_test.go:139: (dbg) Run: kubectl config get-contexts no-preload-20210507222537-391940 start_stop_delete_test.go:139: (dbg) Non-zero exit: kubectl config get-contexts no-preload-20210507222537-391940: exit status 1 (49.877978ms) -- stdout -- CURRENT NAME CLUSTER AUTHINFO NAMESPACE -- /stdout -- ** stderr ** error: context 
no-preload-20210507222537-391940 not found ** /stderr ** start_stop_delete_test.go:141: config context error: exit status 1 (may be ok) === CONT TestStartStop/group/no-preload helpers_test.go:171: Cleaning up "no-preload-20210507222537-391940" profile ... helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p no-preload-20210507222537-391940 === CONT TestStartStop/group/embed-certs === RUN TestStartStop/group/embed-certs/serial === RUN TestStartStop/group/embed-certs/serial/FirstStart start_stop_delete_test.go:158: (dbg) Run: out/minikube-linux-amd64 start -p embed-certs-20210507222849-391940 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.2 === CONT TestMissingContainerUpgrade version_upgrade_test.go:334: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20210507222342-391940 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd: (4m25.9425817s) helpers_test.go:171: Cleaning up "missing-upgrade-20210507222342-391940" profile ... helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p missing-upgrade-20210507222342-391940 helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20210507222342-391940: (3.145675707s) --- PASS: TestMissingContainerUpgrade (359.29s) === CONT TestStartStop/group/disable-driver-mounts start_stop_delete_test.go:91: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox helpers_test.go:171: Cleaning up "disable-driver-mounts-20210507222941-391940" profile ... helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p disable-driver-mounts-20210507222941-391940 === CONT TestStartStop/group/default-k8s-different-port === RUN TestStartStop/group/default-k8s-different-port/serial === RUN TestStartStop/group/default-k8s-different-port/serial/FirstStart start_stop_delete_test.go:158: (dbg) Run: out/minikube-linux-amd64 start -p default-k8s-different-port-20210507222942-391940 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.2 === CONT TestStartStop/group/old-k8s-version/serial/SecondStart start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210507222527-391940 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.14.0: (1m57.644797622s) start_stop_delete_test.go:209: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210507222527-391940 -n old-k8s-version-20210507222527-391940 === RUN TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop start_stop_delete_test.go:221: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ... 
helpers_test.go:335: "kubernetes-dashboard-5d8978d65d-d6t7m" [c6790491-af83-11eb-92ed-0242c0a83a02] Running start_stop_delete_test.go:221: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011287143s === RUN TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop start_stop_delete_test.go:232: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ... helpers_test.go:335: "kubernetes-dashboard-5d8978d65d-d6t7m" [c6790491-af83-11eb-92ed-0242c0a83a02] Running start_stop_delete_test.go:232: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.026070321s === RUN TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages start_stop_delete_test.go:240: (dbg) Run: out/minikube-linux-amd64 ssh -p old-k8s-version-20210507222527-391940 "sudo crictl images -o json" start_stop_delete_test.go:240: (dbg) Done: out/minikube-linux-amd64 ssh -p old-k8s-version-20210507222527-391940 "sudo crictl images -o json": (1.345261103s) start_stop_delete_test.go:240: Found non-minikube image: kindest/kindnetd:v20210220-5b7e6d01 start_stop_delete_test.go:240: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5 start_stop_delete_test.go:240: Found non-minikube image: library/busybox:1.28.4-glibc start_stop_delete_test.go:240: Found non-minikube image: library/minikube-local-cache-test:functional-20210507215728-391940 === RUN TestStartStop/group/old-k8s-version/serial/Pause start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 pause -p old-k8s-version-20210507222527-391940 --alsologtostderr -v=1 start_stop_delete_test.go:247: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-20210507222527-391940 --alsologtostderr -v=1: (2.530097172s) start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210507222527-391940 -n old-k8s-version-20210507222527-391940 start_stop_delete_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210507222527-391940 -n old-k8s-version-20210507222527-391940: exit status 2 (331.098195ms) -- stdout -- Paused -- /stdout -- start_stop_delete_test.go:247: status error: exit status 2 (may be ok) start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210507222527-391940 -n old-k8s-version-20210507222527-391940 start_stop_delete_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210507222527-391940 -n old-k8s-version-20210507222527-391940: exit status 2 (316.886642ms) -- stdout -- Stopped -- /stdout -- start_stop_delete_test.go:247: status error: exit status 2 (may be ok) start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 unpause -p old-k8s-version-20210507222527-391940 --alsologtostderr -v=1 start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210507222527-391940 -n old-k8s-version-20210507222527-391940 start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210507222527-391940 -n old-k8s-version-20210507222527-391940 === CONT TestStartStop/group/old-k8s-version/serial start_stop_delete_test.go:134: (dbg) Run: out/minikube-linux-amd64 
delete -p old-k8s-version-20210507222527-391940 start_stop_delete_test.go:134: (dbg) Done: out/minikube-linux-amd64 delete -p old-k8s-version-20210507222527-391940: (3.100559411s) start_stop_delete_test.go:139: (dbg) Run: kubectl config get-contexts old-k8s-version-20210507222527-391940 start_stop_delete_test.go:139: (dbg) Non-zero exit: kubectl config get-contexts old-k8s-version-20210507222527-391940: exit status 1 (43.372041ms) -- stdout -- CURRENT NAME CLUSTER AUTHINFO NAMESPACE -- /stdout -- ** stderr ** error: context old-k8s-version-20210507222527-391940 not found ** /stderr ** start_stop_delete_test.go:141: config context error: exit status 1 (may be ok) === CONT TestStartStop/group/old-k8s-version helpers_test.go:171: Cleaning up "old-k8s-version-20210507222527-391940" profile ... helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p old-k8s-version-20210507222527-391940 === CONT TestStartStop/group/newest-cni === RUN TestStartStop/group/newest-cni/serial === RUN TestStartStop/group/newest-cni/serial/FirstStart start_stop_delete_test.go:158: (dbg) Run: out/minikube-linux-amd64 start -p newest-cni-20210507223028-391940 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --container-runtime=containerd --kubernetes-version=v1.22.0-alpha.1 === CONT TestStartStop/group/embed-certs/serial/FirstStart start_stop_delete_test.go:158: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210507222849-391940 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.2: (2m13.811406832s) === RUN TestStartStop/group/embed-certs/serial/DeployApp start_stop_delete_test.go:168: (dbg) Run: kubectl --context embed-certs-20210507222849-391940 create -f testdata/busybox.yaml start_stop_delete_test.go:168: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ... 
helpers_test.go:335: "busybox" [c9532122-ce09-4f38-9c26-94f818051021] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox]) helpers_test.go:335: "busybox" [c9532122-ce09-4f38-9c26-94f818051021] Running start_stop_delete_test.go:168: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.011548682s start_stop_delete_test.go:168: (dbg) Run: kubectl --context embed-certs-20210507222849-391940 exec busybox -- /bin/sh -c "ulimit -n" === RUN TestStartStop/group/embed-certs/serial/Stop start_stop_delete_test.go:175: (dbg) Run: out/minikube-linux-amd64 stop -p embed-certs-20210507222849-391940 --alsologtostderr -v=3 E0507 22:31:20.822168 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory start_stop_delete_test.go:175: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20210507222849-391940 --alsologtostderr -v=3: (20.936915412s) === RUN TestStartStop/group/embed-certs/serial/EnableAddonAfterStop start_stop_delete_test.go:186: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210507222849-391940 -n embed-certs-20210507222849-391940 start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210507222849-391940 -n embed-certs-20210507222849-391940: exit status 7 (109.464323ms) -- stdout -- Stopped -- /stdout -- start_stop_delete_test.go:186: status error: exit status 7 (may be ok) start_stop_delete_test.go:193: (dbg) Run: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20210507222849-391940 === RUN TestStartStop/group/embed-certs/serial/SecondStart start_stop_delete_test.go:203: (dbg) Run: out/minikube-linux-amd64 start -p embed-certs-20210507222849-391940 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.2 === CONT TestStartStop/group/newest-cni/serial/FirstStart start_stop_delete_test.go:158: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210507223028-391940 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --container-runtime=containerd --kubernetes-version=v1.22.0-alpha.1: (1m5.868099149s) === RUN TestStartStop/group/newest-cni/serial/DeployApp === RUN TestStartStop/group/newest-cni/serial/Stop start_stop_delete_test.go:175: (dbg) Run: out/minikube-linux-amd64 stop -p newest-cni-20210507223028-391940 --alsologtostderr -v=3 start_stop_delete_test.go:175: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20210507223028-391940 --alsologtostderr -v=3: (1.358286653s) === RUN TestStartStop/group/newest-cni/serial/EnableAddonAfterStop start_stop_delete_test.go:186: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210507223028-391940 -n newest-cni-20210507223028-391940 start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210507223028-391940 -n newest-cni-20210507223028-391940: exit status 7 (98.711635ms) -- stdout -- Stopped -- /stdout -- 
start_stop_delete_test.go:186: status error: exit status 7 (may be ok)
start_stop_delete_test.go:193: (dbg) Run: out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20210507223028-391940
=== RUN TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:203: (dbg) Run: out/minikube-linux-amd64 start -p newest-cni-20210507223028-391940 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --container-runtime=containerd --kubernetes-version=v1.22.0-alpha.1
E0507 22:31:52.730368 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
E0507 22:31:52.735623 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
E0507 22:31:52.745839 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
E0507 22:31:52.766074 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
E0507 22:31:52.806413 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
E0507 22:31:52.886522 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
E0507 22:31:53.046959 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
E0507 22:31:53.367809 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
E0507 22:31:54.008739 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
E0507 22:31:55.289559 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
E0507 22:31:57.850314 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
E0507 22:31:59.411350 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:32:02.970949 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
=== CONT TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:158: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210507222942-391940 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.2: (2m26.989220863s)
=== RUN TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:168: (dbg) Run: kubectl --context default-k8s-different-port-20210507222942-391940 create -f testdata/busybox.yaml
start_stop_delete_test.go:168: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:335: "busybox" [842be73e-72c2-4c06-b0f7-da4ebb46b202] Pending
helpers_test.go:335: "busybox" [842be73e-72c2-4c06-b0f7-da4ebb46b202] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:335: "busybox" [842be73e-72c2-4c06-b0f7-da4ebb46b202] Running
E0507 22:32:13.211870 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
start_stop_delete_test.go:168: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 9.014159682s
start_stop_delete_test.go:168: (dbg) Run: kubectl --context default-k8s-different-port-20210507222942-391940 exec busybox -- /bin/sh -c "ulimit -n"
=== RUN TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:175: (dbg) Run: out/minikube-linux-amd64 stop -p default-k8s-different-port-20210507222942-391940 --alsologtostderr -v=3
E0507 22:32:33.692276 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
E0507 22:32:40.848187 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
E0507 22:32:40.853432 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
E0507 22:32:40.863672 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
E0507 22:32:40.883884 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
E0507 22:32:40.924291 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
E0507 22:32:41.004569 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
E0507 22:32:41.164988 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
E0507 22:32:41.486008 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
E0507 22:32:42.126932 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
E0507 22:32:43.407665 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
=== CONT TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210507223028-391940 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --container-runtime=containerd --kubernetes-version=v1.22.0-alpha.1: (1m7.448229702s)
start_stop_delete_test.go:209: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210507223028-391940 -n newest-cni-20210507223028-391940
=== CONT TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:175: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20210507222942-391940 --alsologtostderr -v=3: (25.247040585s)
=== RUN TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:186: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210507222942-391940 -n default-k8s-different-port-20210507222942-391940
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210507222942-391940 -n default-k8s-different-port-20210507222942-391940: exit status 7 (103.75617ms)
-- stdout --
Stopped
-- /stdout --
start_stop_delete_test.go:186: status error: exit status 7 (may be ok)
start_stop_delete_test.go:193: (dbg) Run: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20210507222942-391940
=== RUN TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:220: WARNING: cni mode requires additional setup before pods can schedule :(
=== RUN TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:231: WARNING: cni mode requires additional setup before pods can schedule :(
=== RUN TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:240: (dbg) Run: out/minikube-linux-amd64 ssh -p newest-cni-20210507223028-391940 "sudo crictl images -o json"
=== RUN TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:203: (dbg) Run: out/minikube-linux-amd64 start -p default-k8s-different-port-20210507222942-391940 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.2
=== CONT TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:240: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:240: Found non-minikube image: library/minikube-local-cache-test:functional-20210507215728-391940
=== RUN TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 pause -p newest-cni-20210507223028-391940 --alsologtostderr -v=1
start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210507223028-391940 -n newest-cni-20210507223028-391940
start_stop_delete_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210507223028-391940 -n newest-cni-20210507223028-391940: exit status 2 (308.942141ms)
-- stdout --
Paused
-- /stdout --
start_stop_delete_test.go:247: status error: exit status 2 (may be ok)
start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210507223028-391940 -n newest-cni-20210507223028-391940
start_stop_delete_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210507223028-391940 -n newest-cni-20210507223028-391940: exit status 2 (322.207922ms)
-- stdout --
Stopped
-- /stdout --
start_stop_delete_test.go:247: status error: exit status 2 (may be ok)
start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 unpause -p newest-cni-20210507223028-391940 --alsologtostderr -v=1
start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210507223028-391940 -n newest-cni-20210507223028-391940
E0507 22:32:45.968826 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210507223028-391940 -n newest-cni-20210507223028-391940
=== CONT TestStartStop/group/newest-cni/serial
start_stop_delete_test.go:134: (dbg) Run: out/minikube-linux-amd64 delete -p newest-cni-20210507223028-391940
start_stop_delete_test.go:134: (dbg) Done: out/minikube-linux-amd64 delete -p newest-cni-20210507223028-391940: (2.964045286s)
start_stop_delete_test.go:139: (dbg) Run: kubectl config get-contexts newest-cni-20210507223028-391940
start_stop_delete_test.go:139: (dbg) Non-zero exit: kubectl config get-contexts newest-cni-20210507223028-391940: exit status 1 (43.641956ms)
-- stdout --
CURRENT   NAME   CLUSTER   AUTHINFO   NAMESPACE
-- /stdout --
** stderr **
error: context newest-cni-20210507223028-391940 not found
** /stderr **
start_stop_delete_test.go:141: config context error: exit status 1 (may be ok)
=== CONT TestStartStop/group/newest-cni
helpers_test.go:171: Cleaning up "newest-cni-20210507223028-391940" profile ...
helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p newest-cni-20210507223028-391940
=== CONT TestNetworkPlugins/group/auto
=== RUN TestNetworkPlugins/group/auto/Start
net_test.go:83: (dbg) Run: out/minikube-linux-amd64 start -p auto-20210507223250-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker --container-runtime=containerd
E0507 22:32:51.089578 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
E0507 22:33:01.330221 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
E0507 22:33:14.653202 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
E0507 22:33:17.777338 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 22:33:21.811302 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
=== CONT TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210507222849-391940 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.2: (1m50.086277809s)
start_stop_delete_test.go:209: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210507222849-391940 -n embed-certs-20210507222849-391940
=== RUN TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:221: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-968bcb79-zh9tm" [9acafcb9-0b99-4924-8254-b59f0d45eb5c] Running
start_stop_delete_test.go:221: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.713396588s
=== RUN TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:232: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-968bcb79-zh9tm" [9acafcb9-0b99-4924-8254-b59f0d45eb5c] Running
start_stop_delete_test.go:232: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.307015451s
=== RUN TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:240: (dbg) Run: out/minikube-linux-amd64 ssh -p embed-certs-20210507222849-391940 "sudo crictl images -o json"
start_stop_delete_test.go:240: Found non-minikube image: kindest/kindnetd:v20210220-5b7e6d01
start_stop_delete_test.go:240: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:240: Found non-minikube image: library/busybox:1.28.4-glibc
start_stop_delete_test.go:240: Found non-minikube image: library/minikube-local-cache-test:functional-20210507215728-391940
=== RUN TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 pause -p embed-certs-20210507222849-391940 --alsologtostderr -v=1
start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210507222849-391940 -n embed-certs-20210507222849-391940
start_stop_delete_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210507222849-391940 -n embed-certs-20210507222849-391940: exit status 2 (320.16991ms)
-- stdout --
Paused
-- /stdout --
start_stop_delete_test.go:247: status error: exit status 2 (may be ok)
start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20210507222849-391940 -n embed-certs-20210507222849-391940
start_stop_delete_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20210507222849-391940 -n embed-certs-20210507222849-391940: exit status 2 (319.695146ms)
-- stdout --
Stopped
-- /stdout --
start_stop_delete_test.go:247: status error: exit status 2 (may be ok)
start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 unpause -p embed-certs-20210507222849-391940 --alsologtostderr -v=1
start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210507222849-391940 -n embed-certs-20210507222849-391940
start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20210507222849-391940 -n embed-certs-20210507222849-391940
=== CONT TestStartStop/group/embed-certs/serial
start_stop_delete_test.go:134: (dbg) Run: out/minikube-linux-amd64 delete -p embed-certs-20210507222849-391940
start_stop_delete_test.go:134: (dbg) Done: out/minikube-linux-amd64 delete -p embed-certs-20210507222849-391940: (3.122778179s)
start_stop_delete_test.go:139: (dbg) Run: kubectl config get-contexts embed-certs-20210507222849-391940
start_stop_delete_test.go:139: (dbg) Non-zero exit: kubectl config get-contexts embed-certs-20210507222849-391940: exit status 1 (53.671431ms)
-- stdout --
CURRENT   NAME   CLUSTER   AUTHINFO   NAMESPACE
-- /stdout --
** stderr **
error: context embed-certs-20210507222849-391940 not found
** /stderr **
start_stop_delete_test.go:141: config context error: exit status 1 (may be ok)
=== CONT TestStartStop/group/embed-certs
helpers_test.go:171: Cleaning up "embed-certs-20210507222849-391940" profile ...
helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p embed-certs-20210507222849-391940
=== CONT TestNetworkPlugins/group/false
=== RUN TestNetworkPlugins/group/false/Start
net_test.go:83: (dbg) Run: out/minikube-linux-amd64 start -p false-20210507223341-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker --container-runtime=containerd
E0507 22:34:02.771747 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
E0507 22:34:36.574324 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
=== CONT TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210507222942-391940 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.2: (1m53.935197532s)
start_stop_delete_test.go:209: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210507222942-391940 -n default-k8s-different-port-20210507222942-391940
=== RUN TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:221: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-968bcb79-89qpr" [34855ac4-b932-4d8c-8212-1ea9e04fbfd7] Running
start_stop_delete_test.go:221: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011593109s
=== RUN TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:232: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-968bcb79-89qpr" [34855ac4-b932-4d8c-8212-1ea9e04fbfd7] Running start_stop_delete_test.go:232: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005545987s === RUN TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages start_stop_delete_test.go:240: (dbg) Run: out/minikube-linux-amd64 ssh -p default-k8s-different-port-20210507222942-391940 "sudo crictl images -o json" start_stop_delete_test.go:240: Found non-minikube image: kindest/kindnetd:v20210220-5b7e6d01 start_stop_delete_test.go:240: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5 start_stop_delete_test.go:240: Found non-minikube image: library/busybox:1.28.4-glibc start_stop_delete_test.go:240: Found non-minikube image: library/minikube-local-cache-test:functional-20210507215728-391940 === RUN TestStartStop/group/default-k8s-different-port/serial/Pause start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 pause -p default-k8s-different-port-20210507222942-391940 --alsologtostderr -v=1 start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210507222942-391940 -n default-k8s-different-port-20210507222942-391940 start_stop_delete_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210507222942-391940 -n default-k8s-different-port-20210507222942-391940: exit status 2 (315.504714ms) -- stdout -- Paused -- /stdout -- start_stop_delete_test.go:247: status error: exit status 2 (may be ok) start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210507222942-391940 -n default-k8s-different-port-20210507222942-391940 start_stop_delete_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210507222942-391940 -n default-k8s-different-port-20210507222942-391940: exit status 2 (337.534428ms) -- stdout -- Stopped -- /stdout -- start_stop_delete_test.go:247: status error: exit status 2 (may be ok) start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 unpause -p default-k8s-different-port-20210507222942-391940 --alsologtostderr -v=1 start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210507222942-391940 -n default-k8s-different-port-20210507222942-391940 start_stop_delete_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210507222942-391940 -n default-k8s-different-port-20210507222942-391940 === CONT TestStartStop/group/default-k8s-different-port/serial start_stop_delete_test.go:134: (dbg) Run: out/minikube-linux-amd64 delete -p default-k8s-different-port-20210507222942-391940 start_stop_delete_test.go:134: (dbg) Done: out/minikube-linux-amd64 delete -p default-k8s-different-port-20210507222942-391940: (3.190600277s) start_stop_delete_test.go:139: (dbg) Run: kubectl config get-contexts default-k8s-different-port-20210507222942-391940 start_stop_delete_test.go:139: (dbg) Non-zero exit: kubectl config get-contexts default-k8s-different-port-20210507222942-391940: exit status 1 (43.262892ms) -- stdout -- CURRENT NAME CLUSTER AUTHINFO NAMESPACE -- /stdout -- ** stderr ** error: context default-k8s-different-port-20210507222942-391940 not found ** /stderr ** 
start_stop_delete_test.go:141: config context error: exit status 1 (may be ok) === CONT TestStartStop/group/default-k8s-different-port helpers_test.go:171: Cleaning up "default-k8s-different-port-20210507222942-391940" profile ... helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p default-k8s-different-port-20210507222942-391940 === CONT TestNetworkPlugins/group/cilium === RUN TestNetworkPlugins/group/cilium/Start net_test.go:83: (dbg) Run: out/minikube-linux-amd64 start -p cilium-20210507223455-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker --container-runtime=containerd === CONT TestNetworkPlugins/group/auto/Start net_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p auto-20210507223250-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker --container-runtime=containerd: (2m27.966265792s) === RUN TestNetworkPlugins/group/auto/KubeletFlags net_test.go:99: (dbg) Run: out/minikube-linux-amd64 ssh -p auto-20210507223250-391940 "pgrep -a kubelet" === RUN TestNetworkPlugins/group/auto/NetCatPod net_test.go:113: (dbg) Run: kubectl --context auto-20210507223250-391940 replace --force -f testdata/netcat-deployment.yaml net_test.go:127: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ... helpers_test.go:335: "netcat-66fbc655d5-pf5zj" [5f4d13ec-95dd-4fdf-bf14-6173d8bbb162] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils]) helpers_test.go:335: "netcat-66fbc655d5-pf5zj" [5f4d13ec-95dd-4fdf-bf14-6173d8bbb162] Running E0507 22:35:24.692589 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory net_test.go:127: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005608191s === RUN TestNetworkPlugins/group/auto/DNS net_test.go:144: (dbg) Run: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default net_test.go:144: (dbg) Non-zero exit: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (58.129949ms) ** stderr ** Error from server (NotFound): the server could not find the requested resource ** /stderr ** net_test.go:144: (dbg) Run: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default net_test.go:144: (dbg) Non-zero exit: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (55.088308ms) ** stderr ** Error from server (NotFound): the server could not find the requested resource ** /stderr ** net_test.go:144: (dbg) Run: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default net_test.go:144: (dbg) Non-zero exit: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (55.812904ms) ** stderr ** Error from server (NotFound): the server could not find the requested resource ** /stderr ** === CONT TestPause/serial/DeletePaused pause_test.go:129: (dbg) Non-zero exit: out/minikube-linux-amd64 delete -p pause-20210507222034-391940 --alsologtostderr -v=5: signal: killed (11m46.054709776s) -- 
stdout -- * Deleting "pause-20210507222034-391940" in docker ... * Deleting container "pause-20210507222034-391940" ... * Stopping node "pause-20210507222034-391940" ... * Powering off "pause-20210507222034-391940" via SSH ... -- /stdout -- ** stderr ** I0507 22:23:48.105410 562635 out.go:291] Setting OutFile to fd 1 ... I0507 22:23:48.105672 562635 out.go:338] TERM=,COLORTERM=, which probably does not support color I0507 22:23:48.105687 562635 out.go:304] Setting ErrFile to fd 2... I0507 22:23:48.105693 562635 out.go:338] TERM=,COLORTERM=, which probably does not support color I0507 22:23:48.105856 562635 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/bin I0507 22:23:48.106205 562635 cli_runner.go:115] Run: docker ps -a --filter label=name.minikube.sigs.k8s.io --format {{.Names}} I0507 22:23:48.159601 562635 delete.go:210] DeleteProfiles I0507 22:23:48.159630 562635 delete.go:233] Deleting pause-20210507222034-391940 I0507 22:23:48.159641 562635 delete.go:238] pause-20210507222034-391940 configuration: &{Name:pause-20210507222034-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:pause-20210507222034-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} I0507 22:23:48.161704 562635 out.go:170] * Deleting "pause-20210507222034-391940" in docker ... I0507 22:23:48.161785 562635 delete.go:48] deleting possible leftovers for pause-20210507222034-391940 (driver=docker) ... I0507 22:23:48.161838 562635 cli_runner.go:115] Run: docker ps -a --filter label=name.minikube.sigs.k8s.io=pause-20210507222034-391940 --format {{.Names}} I0507 22:23:48.218269 562635 out.go:170] * Deleting container "pause-20210507222034-391940" ... 
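The delete flow above locates a profile's leftovers by Docker labels before removing anything: containers and volumes carry name.minikube.sigs.k8s.io=<profile>, and minikube-created networks carry created_by.minikube.sigs.k8s.io, as the following lines also show for volumes and networks. A small read-only sketch that reproduces just those lookups (it deletes nothing), using the profile name from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "pause-20210507222034-391940" // profile name from the log

	// The same lookups the delete flow logs before removing anything.
	queries := [][]string{
		{"ps", "-a", "--filter", "label=name.minikube.sigs.k8s.io=" + profile, "--format", "{{.Names}}"},
		{"volume", "ls", "--filter", "label=name.minikube.sigs.k8s.io=" + profile, "--format", "{{.Name}}"},
		{"network", "ls", "--filter", "label=created_by.minikube.sigs.k8s.io", "--format", "{{.Name}}"},
	}
	for _, args := range queries {
		out, err := exec.Command("docker", args...).CombinedOutput()
		fmt.Printf("docker %v -> err=%v\n%s", args, err, out)
	}
}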
I0507 22:23:48.218360 562635 cli_runner.go:115] Run: docker container inspect pause-20210507222034-391940 --format={{.State.Status}} I0507 22:23:48.276929 562635 cli_runner.go:115] Run: docker exec --privileged -t pause-20210507222034-391940 /bin/bash -c "sudo init 0" I0507 22:23:49.466752 562635 cli_runner.go:115] Run: docker container inspect pause-20210507222034-391940 --format={{.State.Status}} I0507 22:23:49.516955 562635 oci.go:646] temporary error: container pause-20210507222034-391940 status is Running but expect it to be exited I0507 22:23:49.517002 562635 oci.go:652] Successfully shutdown container pause-20210507222034-391940 I0507 22:23:49.517057 562635 cli_runner.go:115] Run: docker rm -f -v pause-20210507222034-391940 W0507 22:28:48.162402 562635 cli_runner.go:162] docker rm -f -v pause-20210507222034-391940 returned with exit code -1 I0507 22:28:48.162431 562635 cli_runner.go:168] Completed: docker rm -f -v pause-20210507222034-391940: (4m58.645303478s) E0507 22:28:48.162502 562635 delete.go:56] error deleting container "pause-20210507222034-391940". You may want to delete it manually : delete pause-20210507222034-391940: docker rm -f -v pause-20210507222034-391940: signal: killed stdout: stderr: I0507 22:28:48.162528 562635 volumes.go:79] trying to delete all docker volumes with label name.minikube.sigs.k8s.io=pause-20210507222034-391940 I0507 22:28:48.162641 562635 cli_runner.go:115] Run: docker volume ls --filter label=name.minikube.sigs.k8s.io=pause-20210507222034-391940 --format {{.Name}} I0507 22:28:48.203899 562635 cli_runner.go:115] Run: docker volume rm --force pause-20210507222034-391940 W0507 22:28:48.203949 562635 delete.go:64] error deleting volumes (might be okay). To see the list of volumes run: 'docker volume ls' :[deleting "pause-20210507222034-391940"] I0507 22:28:48.203987 562635 cli_runner.go:115] Run: docker network ls --filter=label=created_by.minikube.sigs.k8s.io --format {{.Name}} I0507 22:28:48.244509 562635 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210507222527-391940 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0507 22:28:48.282310 562635 cli_runner.go:115] Run: docker network rm old-k8s-version-20210507222527-391940 W0507 22:28:48.321852 562635 cli_runner.go:162] docker network rm old-k8s-version-20210507222527-391940 returned with exit code 1 I0507 22:28:48.321967 562635 cli_runner.go:115] Run: docker network inspect pause-20210507211419-97507 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0507 22:28:48.365003 562635 cli_runner.go:115] Run: docker network rm pause-20210507211419-97507 W0507 22:28:48.405578 562635 cli_runner.go:162] docker network rm pause-20210507211419-97507 returned with exit code 1 I0507 22:28:48.405697 562635 cli_runner.go:115] Run: docker network inspect pause-20210507222034-391940 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range 
.IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0507 22:28:48.445560 562635 cli_runner.go:115] Run: docker network rm pause-20210507222034-391940 W0507 22:28:48.485123 562635 cli_runner.go:162] docker network rm pause-20210507222034-391940 returned with exit code 1 W0507 22:28:48.485178 562635 delete.go:69] error deleting leftover networks (might be okay). To see the list of networks: 'docker network ls' :[unable to delete a network that is attached to a running container unable to delete a network that is attached to a running container unable to delete a network that is attached to a running container] I0507 22:28:48.485195 562635 volumes.go:101] trying to prune all docker volumes with label name.minikube.sigs.k8s.io=pause-20210507222034-391940 I0507 22:28:48.485233 562635 cli_runner.go:115] Run: docker volume prune -f --filter label=name.minikube.sigs.k8s.io=pause-20210507222034-391940 W0507 22:28:48.485257 562635 delete.go:79] error pruning volume (might be okay): [prune volume by label name.minikube.sigs.k8s.io=pause-20210507222034-391940: docker volume prune -f --filter label=name.minikube.sigs.k8s.io=pause-20210507222034-391940: context deadline exceeded stdout: stderr: ] I0507 22:28:48.485979 562635 cli_runner.go:115] Run: docker container inspect pause-20210507222034-391940 --format={{.State.Status}} I0507 22:28:48.526544 562635 stop.go:39] StopHost: pause-20210507222034-391940 I0507 22:28:48.540786 562635 out.go:170] * Stopping node "pause-20210507222034-391940" ... I0507 22:28:48.540872 562635 cli_runner.go:115] Run: docker container inspect pause-20210507222034-391940 --format={{.State.Status}} W0507 22:28:48.589739 562635 register.go:129] "PowerOff" was not found within the registered steps for "Deleting": [Deleting Stopping Deleting Done] I0507 22:28:48.591837 562635 out.go:170] * Powering off "pause-20210507222034-391940" via SSH ... I0507 22:28:48.591904 562635 cli_runner.go:115] Run: docker exec --privileged -t pause-20210507222034-391940 /bin/bash -c "sudo init 0" W0507 22:28:48.692241 562635 cli_runner.go:162] docker exec --privileged -t pause-20210507222034-391940 /bin/bash -c "sudo init 0" returned with exit code 126 I0507 22:28:48.692277 562635 oci.go:632] error shutdown pause-20210507222034-391940: docker exec --privileged -t pause-20210507222034-391940 /bin/bash -c "sudo init 0": exit status 126 stdout: OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: process_linux.go:103: executing setns process caused: exit status 1: unknown stderr: I0507 22:28:49.692729 562635 cli_runner.go:115] Run: docker container inspect pause-20210507222034-391940 --format={{.State.Status}} I0507 22:28:49.736272 562635 oci.go:646] temporary error: container pause-20210507222034-391940 status is Running but expect it to be exited I0507 22:28:49.736296 562635 oci.go:652] Successfully shutdown container pause-20210507222034-391940 I0507 22:28:49.736303 562635 stop.go:88] shutdown container: err= I0507 22:28:49.736353 562635 main.go:128] libmachine: Stopping "pause-20210507222034-391940"... 
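The "signal: killed" and "context deadline exceeded" results scattered through this delete attempt are the signature of commands run under an expiring context: when the deadline passes, the child process is killed and the error reports the signal. A minimal sketch of that mechanism, with sleep standing in for the hung docker rm:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Two-second stand-in for the harness's much longer test deadline.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// sleep 60 stands in for the hung `docker rm -f -v <container>`; when the
	// context expires the child is killed and Run returns "signal: killed",
	// exactly the error string attached to DeletePaused in this log.
	err := exec.CommandContext(ctx, "sleep", "60").Run()
	fmt.Println(err)
}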
I0507 22:28:49.736407 562635 cli_runner.go:115] Run: docker container inspect pause-20210507222034-391940 --format={{.State.Status}} I0507 22:28:49.779223 562635 kic_runner.go:94] Run: systemctl --version I0507 22:28:49.779251 562635 kic_runner.go:115] Args: [docker exec --privileged pause-20210507222034-391940 systemctl --version] I0507 22:28:49.866908 562635 kic_runner.go:94] Run: sudo service kubelet stop I0507 22:28:49.866929 562635 kic_runner.go:115] Args: [docker exec --privileged pause-20210507222034-391940 sudo service kubelet stop] I0507 22:28:49.958245 562635 openrc.go:161] stop output: -- stdout -- OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: process_linux.go:103: executing setns process caused: exit status 1: unknown -- /stdout -- W0507 22:28:49.958273 562635 kic.go:437] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 126 stdout: OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: process_linux.go:103: executing setns process caused: exit status 1: unknown stderr: I0507 22:28:49.958336 562635 kic_runner.go:94] Run: sudo service kubelet stop I0507 22:28:49.958353 562635 kic_runner.go:115] Args: [docker exec --privileged pause-20210507222034-391940 sudo service kubelet stop] I0507 22:28:50.076864 562635 openrc.go:161] stop output: -- stdout -- OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: process_linux.go:103: executing setns process caused: exit status 1: unknown -- /stdout -- W0507 22:28:50.076887 562635 kic.go:439] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 126 stdout: OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: process_linux.go:103: executing setns process caused: exit status 1: unknown stderr: I0507 22:28:50.076906 562635 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]} I0507 22:28:50.076991 562635 kic_runner.go:94] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator" I0507 22:28:50.077000 562635 kic_runner.go:115] Args: [docker exec --privileged pause-20210507222034-391940 sudo -s eval crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator] I0507 22:28:50.181313 562635 kic.go:450] unable list containers : crictl list: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator": exit status 126 stdout: OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: process_linux.go:103: executing setns process caused: exit status 1: unknown stderr: I0507 22:28:50.181339 562635 kic.go:460] 
successfully stopped kubernetes! I0507 22:28:50.181424 562635 kic_runner.go:94] Run: pgrep kube-apiserver I0507 22:28:50.181434 562635 kic_runner.go:115] Args: [docker exec --privileged pause-20210507222034-391940 pgrep kube-apiserver] ** /stderr ** pause_test.go:131: failed to delete minikube with args: "out/minikube-linux-amd64 delete -p pause-20210507222034-391940 --alsologtostderr -v=5" : signal: killed helpers_test.go:218: -----------------------post-mortem-------------------------------- helpers_test.go:226: ======> post-mortem[TestPause/serial/DeletePaused]: docker inspect <====== helpers_test.go:227: (dbg) Run: docker inspect pause-20210507222034-391940 helpers_test.go:231: (dbg) docker inspect pause-20210507222034-391940: -- stdout -- [ { "Id": "1f3a30720296e3aa24688d182815dfedb34265abf3aaf9e0f52d1b1736bfb3b3", "Created": "2021-05-07T22:20:36.202407354Z", "Path": "/usr/local/bin/entrypoint", "Args": [ "/sbin/init" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 533530, "ExitCode": 0, "Error": "", "StartedAt": "2021-05-07T22:20:37.143574617Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": "sha256:bcd131522525c9c3b8695a8d144be8d177bcd5614ec5397f188115d3be0bbc24", "ResolvConfPath": "/var/lib/docker/containers/1f3a30720296e3aa24688d182815dfedb34265abf3aaf9e0f52d1b1736bfb3b3/resolv.conf", "HostnamePath": "/var/lib/docker/containers/1f3a30720296e3aa24688d182815dfedb34265abf3aaf9e0f52d1b1736bfb3b3/hostname", "HostsPath": "/var/lib/docker/containers/1f3a30720296e3aa24688d182815dfedb34265abf3aaf9e0f52d1b1736bfb3b3/hosts", "LogPath": "/var/lib/docker/containers/1f3a30720296e3aa24688d182815dfedb34265abf3aaf9e0f52d1b1736bfb3b3/1f3a30720296e3aa24688d182815dfedb34265abf3aaf9e0f52d1b1736bfb3b3-json.log", "Name": "/pause-20210507222034-391940", "RestartCount": 0, "Driver": "overlay2", "Platform": "linux", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "", "ExecIDs": null, "HostConfig": { "Binds": [ "/lib/modules:/lib/modules:ro", "pause-20210507222034-391940:/var" ], "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "pause-20210507222034-391940", "PortBindings": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ] }, "RestartPolicy": { "Name": "no", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": null, "CapDrop": null, "Capabilities": null, "Dns": [], "DnsOptions": [], "DnsSearch": [], "ExtraHosts": null, "GroupAdd": null, "IpcMode": "private", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": true, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": [ "seccomp=unconfined", "apparmor=unconfined", "label=disable" ], "Tmpfs": { "/run": "", "/tmp": "" }, "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 0, "Memory": 0, "NanoCpus": 2000000000, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": [], "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [], 
"DeviceCgroupRules": null, "DeviceRequests": null, "KernelMemory": 0, "KernelMemoryTCP": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": null, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": null, "ReadonlyPaths": null }, "GraphDriver": { "Data": { "LowerDir": "/var/lib/docker/overlay2/0bede56c504f60cf99391fc18c2ba58383dfabf35052e1094a349da2bb9cd8a7-init/diff:/var/lib/docker/overlay2/1e5fa0ed3c3f4bec9b97cabd8aaa709f5915b54c42d527ba46e8ffa9ebcb7f9a/diff:/var/lib/docker/overlay2/00098e5ff94787f022c282488f937bf3694bcc2f80e6f324f2cb94189fadc609/diff:/var/lib/docker/overlay2/0751219afdacf9c8a75fced952b1ad013a8d5b6fbee07adc96e9f305877d0131/diff:/var/lib/docker/overlay2/4fed3d3ec94e4b275966ac815cabeee3572325ca655dcb69e8d31d2051468a10/diff:/var/lib/docker/overlay2/a78b251d86ddd3460876cbc21fef7421c2e76ba3f3198b79f3af7fe8092297f6/diff:/var/lib/docker/overlay2/f3609509e8e931753320e2da77988a3cdd78a58c167b428b96a3aa29971edb5e/diff:/var/lib/docker/overlay2/ebeb53c34330c6713e55bb0d98076f6618884e3bdcd6b888ad1965c69f65b14d/diff:/var/lib/docker/overlay2/1efdecf3c4a2226dd59cc51906581e2326beec3a6b7090c09e437b80c90794b0/diff:/var/lib/docker/overlay2/4c7309d0146fa644c2eb195cb344f6b10894237fb65248ee8391d1790ac7f765/diff:/var/lib/docker/overlay2/424a19d5d18bedf5b29c5b9ffd2c72e8c9e112f2fd414acd046bfa963d0526c7/diff:/var/lib/docker/overlay2/1846dd5e13995c56277d370ac401df36ad796851e8f2315dfab9ff02f487b8fc/diff:/var/lib/docker/overlay2/9393786bec1ad7d470bbbb5c7a94ec2131900fa0c6d2ad39b1039fc6795a2683/diff:/var/lib/docker/overlay2/708ff6a0ffe352ea29dabc0c453ebb09ccede3e24ae9f3fb51e06680ed43e597/diff:/var/lib/docker/overlay2/5a536ba767666ddc007ad059bfa077204239088ff6093831b1b5a0aff36a88ea/diff:/var/lib/docker/overlay2/1d4b0ac5e44186da0f4ee859bb5c23df30087789d88e253dfd57e0ffb21bb88c/diff:/var/lib/docker/overlay2/2b67d6a3428317a2f483420befe919fd660743c5f1494d075867507afe929344/diff:/var/lib/docker/overlay2/abef0f23a7f068f22910d10fcf3ed65c4804f84a4a9aa126a6ac79666f87ab63/diff:/var/lib/docker/overlay2/ec0c450f32e0e573b78fc8537f87456c96a10f353e8bb6e28b4cde51d4b78237/diff:/var/lib/docker/overlay2/ba3b904a6ce3d016a1ef237a88f0e5d4d3b08a8c68e6e4c808b54ffb59e19ee3/diff:/var/lib/docker/overlay2/160d3a3a918b002bb27e1f108db150483cfb4c1383ab9bea5f7d5b983af0f57f/diff:/var/lib/docker/overlay2/ed771b935b96f93ce682cdd9d22155225a918436de84fb5d56eb6214e36d7e27/diff:/var/lib/docker/overlay2/a298f74d3f51b9716985e7c6a84a4fe16a9badceeb4fbcc5847e9313a496c203/diff:/var/lib/docker/overlay2/7f4ddade1e222fcfd5747b07b270a54575ecfdbdf23dc72c6aa8984cb14b4f6b/diff:/var/lib/docker/overlay2/8522467e2a2b9517f0e9fe828bf20d40830fb4364323ea1b17c1ae43e68f1633/diff:/var/lib/docker/overlay2/7b8ac1e2dcffd2cd29a0fe315f23ba717abac176d21484016b19e33e1ceb3f15/diff:/var/lib/docker/overlay2/219fbaff646669aefdda08db39e5c449632d42e036ba372e6fbfd2e74d05895c/diff:/var/lib/docker/overlay2/169017ab906e8cd6c768272fbbd27db4564b7ea84520773194f7b8d1c5725ce4/diff:/var/lib/docker/overlay2/3f2355256f7a67382c67f2079a79f9a3568cd4aac75dcb8e549d040ea3e3801c/diff:/var/lib/docker/overlay2/049eedb4ea37711e06782dfa1648c66d0e215e8b8eb540da6bd9b7729e88b4c6/diff:/var/lib/docker/overlay2/685ece42c012e8b988affc555e627ea46a42003f7fb6511dc68fb9da6c515fd8/diff:/var/lib/docker/overlay2/224f8f237d1ebeb57711074d5b9338b377abc164e67d85cd8b48264062798e8a/diff:/var/lib/docker/overlay2/280191c44865a7db266046c55f36cee27c985b893bca0a97310569a5df684c8a/diff:/var/lib/docker/overlay2/2a04e90
c25bcb0264edd485b59f54c8e6c28a2d0c63f696590f1876b164e0ad8/diff:/var/lib/docker/overlay2/9c5536844b05a6fcc7c6de17ba2cd59669716e44474ac06421119d86c04f197e/diff:/var/lib/docker/overlay2/0db732ad07139625742260350f06f46f9978ae313af26f4afdab09884382542c/diff:/var/lib/docker/overlay2/d7e4510c4ab4dcfcd652b63a086da8e4f53866cf61cc72dfacd6e24a7ba895ac/diff", "MergedDir": "/var/lib/docker/overlay2/0bede56c504f60cf99391fc18c2ba58383dfabf35052e1094a349da2bb9cd8a7/merged", "UpperDir": "/var/lib/docker/overlay2/0bede56c504f60cf99391fc18c2ba58383dfabf35052e1094a349da2bb9cd8a7/diff", "WorkDir": "/var/lib/docker/overlay2/0bede56c504f60cf99391fc18c2ba58383dfabf35052e1094a349da2bb9cd8a7/work" }, "Name": "overlay2" }, "Mounts": [ { "Type": "bind", "Source": "/lib/modules", "Destination": "/lib/modules", "Mode": "ro", "RW": false, "Propagation": "rprivate" }, { "Type": "volume", "Name": "pause-20210507222034-391940", "Source": "/var/lib/docker/volumes/pause-20210507222034-391940/_data", "Destination": "/var", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "" } ], "Config": { "Hostname": "pause-20210507222034-391940", "Domainname": "", "User": "root", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "ExposedPorts": { "22/tcp": {}, "2376/tcp": {}, "32443/tcp": {}, "5000/tcp": {}, "8443/tcp": {} }, "Tty": true, "OpenStdin": false, "StdinOnce": false, "Env": [ "container=docker", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ], "Cmd": null, "Image": "gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e", "Volumes": null, "WorkingDir": "", "Entrypoint": [ "/usr/local/bin/entrypoint", "/sbin/init" ], "OnBuild": null, "Labels": { "created_by.minikube.sigs.k8s.io": "true", "mode.minikube.sigs.k8s.io": "pause-20210507222034-391940", "name.minikube.sigs.k8s.io": "pause-20210507222034-391940", "role.minikube.sigs.k8s.io": "" }, "StopSignal": "SIGRTMIN+3" }, "NetworkSettings": { "Bridge": "", "SandboxID": "8fb257f657a455a5c61064a642d32b732e65559debbded043f37bf425b0822a7", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "33197" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "33196" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "33191" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "33195" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "33193" } ] }, "SandboxKey": "/var/run/docker/netns/8fb257f657a4", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "pause-20210507222034-391940": { "IPAMConfig": { "IPv4Address": "192.168.85.2" }, "Links": null, "Aliases": [ "1f3a30720296" ], "NetworkID": "66090a2bc48e8a0ec3403c0dc0bc3b1b9148ac10b973fc1dc8134d7bbd25b00c", "EndpointID": "e54131d7b1bfd577149c0050cd0ed718fe2b0322b3e8ec4e35cfefd08f0113ea", "Gateway": "192.168.85.1", "IPAddress": "192.168.85.2", "IPPrefixLen": 24, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:c0:a8:55:02", "DriverOpts": null } } } } ] -- /stdout -- helpers_test.go:235: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210507222034-391940 -n pause-20210507222034-391940 === CONT TestNetworkPlugins/group/auto/DNS net_test.go:144: (dbg) Run: kubectl --context auto-20210507223250-391940 exec 
deployment/netcat -- nslookup kubernetes.default net_test.go:144: (dbg) Non-zero exit: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (58.640853ms) ** stderr ** Error from server (NotFound): the server could not find the requested resource ** /stderr ** === CONT TestPause/serial/DeletePaused helpers_test.go:235: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210507222034-391940 -n pause-20210507222034-391940: exit status 3 (2.452187525s) -- stdout -- Error -- /stdout -- ** stderr ** E0507 22:35:36.554832 643808 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:38830->127.0.0.1:33197: read: connection reset by peer E0507 22:35:36.554850 643808 status.go:247] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:38830->127.0.0.1:33197: read: connection reset by peer ** /stderr ** helpers_test.go:235: status error: exit status 3 (may be ok) helpers_test.go:237: "pause-20210507222034-391940" host is not running, skipping log retrieval (state="Error") helpers_test.go:218: -----------------------post-mortem-------------------------------- helpers_test.go:226: ======> post-mortem[TestPause/serial/DeletePaused]: docker inspect <====== helpers_test.go:227: (dbg) Run: docker inspect pause-20210507222034-391940 helpers_test.go:231: (dbg) docker inspect pause-20210507222034-391940: -- stdout -- [ { "Id": "1f3a30720296e3aa24688d182815dfedb34265abf3aaf9e0f52d1b1736bfb3b3", "Created": "2021-05-07T22:20:36.202407354Z", "Path": "/usr/local/bin/entrypoint", "Args": [ "/sbin/init" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 533530, "ExitCode": 0, "Error": "", "StartedAt": "2021-05-07T22:20:37.143574617Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": "sha256:bcd131522525c9c3b8695a8d144be8d177bcd5614ec5397f188115d3be0bbc24", "ResolvConfPath": "/var/lib/docker/containers/1f3a30720296e3aa24688d182815dfedb34265abf3aaf9e0f52d1b1736bfb3b3/resolv.conf", "HostnamePath": "/var/lib/docker/containers/1f3a30720296e3aa24688d182815dfedb34265abf3aaf9e0f52d1b1736bfb3b3/hostname", "HostsPath": "/var/lib/docker/containers/1f3a30720296e3aa24688d182815dfedb34265abf3aaf9e0f52d1b1736bfb3b3/hosts", "LogPath": "/var/lib/docker/containers/1f3a30720296e3aa24688d182815dfedb34265abf3aaf9e0f52d1b1736bfb3b3/1f3a30720296e3aa24688d182815dfedb34265abf3aaf9e0f52d1b1736bfb3b3-json.log", "Name": "/pause-20210507222034-391940", "RestartCount": 0, "Driver": "overlay2", "Platform": "linux", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "", "ExecIDs": null, "HostConfig": { "Binds": [ "/lib/modules:/lib/modules:ro", "pause-20210507222034-391940:/var" ], "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "pause-20210507222034-391940", "PortBindings": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ] }, "RestartPolicy": { "Name": "no", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": null, "CapDrop": null, "Capabilities": null, "Dns": [], "DnsOptions": [], "DnsSearch": [], "ExtraHosts": null, "GroupAdd": 
null, "IpcMode": "private", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": true, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": [ "seccomp=unconfined", "apparmor=unconfined", "label=disable" ], "Tmpfs": { "/run": "", "/tmp": "" }, "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 0, "Memory": 0, "NanoCpus": 2000000000, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": [], "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [], "DeviceCgroupRules": null, "DeviceRequests": null, "KernelMemory": 0, "KernelMemoryTCP": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": null, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": null, "ReadonlyPaths": null }, "GraphDriver": { "Data": { "LowerDir": "/var/lib/docker/overlay2/0bede56c504f60cf99391fc18c2ba58383dfabf35052e1094a349da2bb9cd8a7-init/diff:/var/lib/docker/overlay2/1e5fa0ed3c3f4bec9b97cabd8aaa709f5915b54c42d527ba46e8ffa9ebcb7f9a/diff:/var/lib/docker/overlay2/00098e5ff94787f022c282488f937bf3694bcc2f80e6f324f2cb94189fadc609/diff:/var/lib/docker/overlay2/0751219afdacf9c8a75fced952b1ad013a8d5b6fbee07adc96e9f305877d0131/diff:/var/lib/docker/overlay2/4fed3d3ec94e4b275966ac815cabeee3572325ca655dcb69e8d31d2051468a10/diff:/var/lib/docker/overlay2/a78b251d86ddd3460876cbc21fef7421c2e76ba3f3198b79f3af7fe8092297f6/diff:/var/lib/docker/overlay2/f3609509e8e931753320e2da77988a3cdd78a58c167b428b96a3aa29971edb5e/diff:/var/lib/docker/overlay2/ebeb53c34330c6713e55bb0d98076f6618884e3bdcd6b888ad1965c69f65b14d/diff:/var/lib/docker/overlay2/1efdecf3c4a2226dd59cc51906581e2326beec3a6b7090c09e437b80c90794b0/diff:/var/lib/docker/overlay2/4c7309d0146fa644c2eb195cb344f6b10894237fb65248ee8391d1790ac7f765/diff:/var/lib/docker/overlay2/424a19d5d18bedf5b29c5b9ffd2c72e8c9e112f2fd414acd046bfa963d0526c7/diff:/var/lib/docker/overlay2/1846dd5e13995c56277d370ac401df36ad796851e8f2315dfab9ff02f487b8fc/diff:/var/lib/docker/overlay2/9393786bec1ad7d470bbbb5c7a94ec2131900fa0c6d2ad39b1039fc6795a2683/diff:/var/lib/docker/overlay2/708ff6a0ffe352ea29dabc0c453ebb09ccede3e24ae9f3fb51e06680ed43e597/diff:/var/lib/docker/overlay2/5a536ba767666ddc007ad059bfa077204239088ff6093831b1b5a0aff36a88ea/diff:/var/lib/docker/overlay2/1d4b0ac5e44186da0f4ee859bb5c23df30087789d88e253dfd57e0ffb21bb88c/diff:/var/lib/docker/overlay2/2b67d6a3428317a2f483420befe919fd660743c5f1494d075867507afe929344/diff:/var/lib/docker/overlay2/abef0f23a7f068f22910d10fcf3ed65c4804f84a4a9aa126a6ac79666f87ab63/diff:/var/lib/docker/overlay2/ec0c450f32e0e573b78fc8537f87456c96a10f353e8bb6e28b4cde51d4b78237/diff:/var/lib/docker/overlay2/ba3b904a6ce3d016a1ef237a88f0e5d4d3b08a8c68e6e4c808b54ffb59e19ee3/diff:/var/lib/docker/overlay2/160d3a3a918b002bb27e1f108db150483cfb4c1383ab9bea5f7d5b983af0f57f/diff:/var/lib/docker/overlay2/ed771b935b96f93ce682cdd9d22155225a918436de84fb5d56eb6214e36d7e27/diff:/var/lib/docker/overlay2/a298f74d3f51b9716985e7c6a84a4fe16a9badceeb4fbcc5847e9313a496c203/diff:/var/lib/docker/overlay2/7f4ddade1e222fcfd5747b07b270a54575ecfdbdf23dc72c6aa8984cb14b4f6b/diff:/var/lib/docker/overlay2/8522467e2a2b9517f0e9fe828bf20d40830fb4364323ea1b17c1ae43e68f1633/diff:/var/lib/docker/overlay2/7b8ac1e2dc
ffd2cd29a0fe315f23ba717abac176d21484016b19e33e1ceb3f15/diff:/var/lib/docker/overlay2/219fbaff646669aefdda08db39e5c449632d42e036ba372e6fbfd2e74d05895c/diff:/var/lib/docker/overlay2/169017ab906e8cd6c768272fbbd27db4564b7ea84520773194f7b8d1c5725ce4/diff:/var/lib/docker/overlay2/3f2355256f7a67382c67f2079a79f9a3568cd4aac75dcb8e549d040ea3e3801c/diff:/var/lib/docker/overlay2/049eedb4ea37711e06782dfa1648c66d0e215e8b8eb540da6bd9b7729e88b4c6/diff:/var/lib/docker/overlay2/685ece42c012e8b988affc555e627ea46a42003f7fb6511dc68fb9da6c515fd8/diff:/var/lib/docker/overlay2/224f8f237d1ebeb57711074d5b9338b377abc164e67d85cd8b48264062798e8a/diff:/var/lib/docker/overlay2/280191c44865a7db266046c55f36cee27c985b893bca0a97310569a5df684c8a/diff:/var/lib/docker/overlay2/2a04e90c25bcb0264edd485b59f54c8e6c28a2d0c63f696590f1876b164e0ad8/diff:/var/lib/docker/overlay2/9c5536844b05a6fcc7c6de17ba2cd59669716e44474ac06421119d86c04f197e/diff:/var/lib/docker/overlay2/0db732ad07139625742260350f06f46f9978ae313af26f4afdab09884382542c/diff:/var/lib/docker/overlay2/d7e4510c4ab4dcfcd652b63a086da8e4f53866cf61cc72dfacd6e24a7ba895ac/diff", "MergedDir": "/var/lib/docker/overlay2/0bede56c504f60cf99391fc18c2ba58383dfabf35052e1094a349da2bb9cd8a7/merged", "UpperDir": "/var/lib/docker/overlay2/0bede56c504f60cf99391fc18c2ba58383dfabf35052e1094a349da2bb9cd8a7/diff", "WorkDir": "/var/lib/docker/overlay2/0bede56c504f60cf99391fc18c2ba58383dfabf35052e1094a349da2bb9cd8a7/work" }, "Name": "overlay2" }, "Mounts": [ { "Type": "bind", "Source": "/lib/modules", "Destination": "/lib/modules", "Mode": "ro", "RW": false, "Propagation": "rprivate" }, { "Type": "volume", "Name": "pause-20210507222034-391940", "Source": "/var/lib/docker/volumes/pause-20210507222034-391940/_data", "Destination": "/var", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "" } ], "Config": { "Hostname": "pause-20210507222034-391940", "Domainname": "", "User": "root", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "ExposedPorts": { "22/tcp": {}, "2376/tcp": {}, "32443/tcp": {}, "5000/tcp": {}, "8443/tcp": {} }, "Tty": true, "OpenStdin": false, "StdinOnce": false, "Env": [ "container=docker", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ], "Cmd": null, "Image": "gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e", "Volumes": null, "WorkingDir": "", "Entrypoint": [ "/usr/local/bin/entrypoint", "/sbin/init" ], "OnBuild": null, "Labels": { "created_by.minikube.sigs.k8s.io": "true", "mode.minikube.sigs.k8s.io": "pause-20210507222034-391940", "name.minikube.sigs.k8s.io": "pause-20210507222034-391940", "role.minikube.sigs.k8s.io": "" }, "StopSignal": "SIGRTMIN+3" }, "NetworkSettings": { "Bridge": "", "SandboxID": "8fb257f657a455a5c61064a642d32b732e65559debbded043f37bf425b0822a7", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "33197" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "33196" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "33191" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "33195" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "33193" } ] }, "SandboxKey": "/var/run/docker/netns/8fb257f657a4", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { 
"pause-20210507222034-391940": { "IPAMConfig": { "IPv4Address": "192.168.85.2" }, "Links": null, "Aliases": [ "1f3a30720296" ], "NetworkID": "66090a2bc48e8a0ec3403c0dc0bc3b1b9148ac10b973fc1dc8134d7bbd25b00c", "EndpointID": "e54131d7b1bfd577149c0050cd0ed718fe2b0322b3e8ec4e35cfefd08f0113ea", "Gateway": "192.168.85.1", "IPAddress": "192.168.85.2", "IPPrefixLen": 24, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:c0:a8:55:02", "DriverOpts": null } } } } ] -- /stdout -- helpers_test.go:235: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210507222034-391940 -n pause-20210507222034-391940 helpers_test.go:235: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210507222034-391940 -n pause-20210507222034-391940: exit status 3 (2.449134759s) -- stdout -- Error -- /stdout -- ** stderr ** E0507 22:35:39.047822 643912 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:38890->127.0.0.1:33197: read: connection reset by peer E0507 22:35:39.047842 643912 status.go:247] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:38890->127.0.0.1:33197: read: connection reset by peer ** /stderr ** helpers_test.go:235: status error: exit status 3 (may be ok) helpers_test.go:237: "pause-20210507222034-391940" host is not running, skipping log retrieval (state="Error") === CONT TestPause/serial pause_test.go:59: Unable to run more tests (deadline exceeded) === CONT TestPause helpers_test.go:171: Cleaning up "pause-20210507222034-391940" profile ... helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p pause-20210507222034-391940 === CONT TestNetworkPlugins/group/auto/DNS net_test.go:144: (dbg) Run: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default net_test.go:144: (dbg) Non-zero exit: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (61.730101ms) ** stderr ** Error from server (NotFound): the server could not find the requested resource ** /stderr ** net_test.go:144: (dbg) Run: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default net_test.go:144: (dbg) Non-zero exit: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (55.451715ms) ** stderr ** Error from server (NotFound): the server could not find the requested resource ** /stderr ** net_test.go:144: (dbg) Run: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default net_test.go:144: (dbg) Non-zero exit: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (72.431475ms) ** stderr ** Error from server (NotFound): the server could not find the requested resource ** /stderr ** net_test.go:144: (dbg) Run: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default net_test.go:144: (dbg) Non-zero exit: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (69.880346ms) ** stderr ** Error from server (NotFound): the server could not find the requested resource ** /stderr ** net_test.go:144: (dbg) Run: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default net_test.go:144: (dbg) 
Non-zero exit: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (56.152806ms) ** stderr ** Error from server (NotFound): the server could not find the requested resource ** /stderr ** net_test.go:144: (dbg) Run: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default net_test.go:144: (dbg) Non-zero exit: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (56.078717ms) ** stderr ** Error from server (NotFound): the server could not find the requested resource ** /stderr ** E0507 22:36:52.729824 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory E0507 22:36:59.412134 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory net_test.go:144: (dbg) Run: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default net_test.go:144: (dbg) Non-zero exit: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (58.003048ms) ** stderr ** Error from server (NotFound): the server could not find the requested resource ** /stderr ** E0507 22:37:09.423485 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory E0507 22:37:09.428793 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory E0507 22:37:09.439870 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory E0507 22:37:09.460667 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory E0507 22:37:09.500905 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory E0507 22:37:09.581402 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory E0507 22:37:09.742161 391940 cert_rotation.go:168] key failed with : open 
/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory E0507 22:37:10.062709 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory E0507 22:37:10.703293 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory E0507 22:37:11.984274 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory E0507 22:37:14.544400 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory === CONT TestNetworkPlugins/group/cilium/Start net_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20210507223455-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker --container-runtime=containerd: (2m20.720143674s) === RUN TestNetworkPlugins/group/cilium/ControllerPod net_test.go:91: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ... helpers_test.go:335: "cilium-zgw2c" [7e2485bb-7a3f-4738-bd8b-f62f21ab84dd] Running E0507 22:37:19.665031 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory E0507 22:37:20.414810 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory net_test.go:91: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.012650631s === RUN TestNetworkPlugins/group/cilium/KubeletFlags net_test.go:99: (dbg) Run: out/minikube-linux-amd64 ssh -p cilium-20210507223455-391940 "pgrep -a kubelet" === RUN TestNetworkPlugins/group/cilium/NetCatPod net_test.go:113: (dbg) Run: kubectl --context cilium-20210507223455-391940 replace --force -f testdata/netcat-deployment.yaml net_test.go:127: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ... 
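The NetCatPod flow just started follows a fixed recipe: (re)create the netcat probe deployment, wait for its pod, then exec DNS and connectivity checks inside it. A sketch of the first two kubectl calls, using the cilium context from the log; testdata/netcat-deployment.yaml is the test tree's manifest path, so substitute any deployment manifest when running this elsewhere:

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("kubectl %v -> err=%v\n%s", args, err, out)
}

func main() {
	ctxArg := []string{"--context", "cilium-20210507223455-391940"}
	// Recreate the netcat deployment the same way net_test.go does.
	run(append(ctxArg, "replace", "--force", "-f", "testdata/netcat-deployment.yaml")...)
	// Once the pod is Running, resolve the in-cluster DNS name through it.
	run(append(ctxArg, "exec", "deployment/netcat", "--", "nslookup", "kubernetes.default")...)
}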
helpers_test.go:335: "netcat-66fbc655d5-2qg7s" [8e9b873b-1f8a-450d-a67e-dab0395e606a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils]) helpers_test.go:335: "netcat-66fbc655d5-2qg7s" [8e9b873b-1f8a-450d-a67e-dab0395e606a] Running net_test.go:127: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 8.005524928s === RUN TestNetworkPlugins/group/cilium/DNS net_test.go:144: (dbg) Run: kubectl --context cilium-20210507223455-391940 exec deployment/netcat -- nslookup kubernetes.default === RUN TestNetworkPlugins/group/cilium/Localhost net_test.go:163: (dbg) Run: kubectl --context cilium-20210507223455-391940 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080" === RUN TestNetworkPlugins/group/cilium/HairPin net_test.go:176: (dbg) Run: kubectl --context cilium-20210507223455-391940 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080" E0507 22:37:29.905319 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory === CONT TestNetworkPlugins/group/cilium net_test.go:192: "cilium" test finished in 16m55.861073944s, failed=false helpers_test.go:171: Cleaning up "cilium-20210507223455-391940" profile ... helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p cilium-20210507223455-391940 helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p cilium-20210507223455-391940: (3.15392927s) === CONT TestNetworkPlugins/group/calico === RUN TestNetworkPlugins/group/calico/Start net_test.go:83: (dbg) Run: out/minikube-linux-amd64 start -p calico-20210507223733-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker --container-runtime=containerd === CONT TestPause helpers_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 delete -p pause-20210507222034-391940: signal: killed (2m0.002365426s) -- stdout -- * Deleting "pause-20210507222034-391940" in docker ... * Deleting container "pause-20210507222034-391940" ... * Stopping node "pause-20210507222034-391940" ... * Powering off "pause-20210507222034-391940" via SSH ... -- /stdout -- ** stderr ** E0507 22:35:40.400397 644081 delete.go:56] error deleting container "pause-20210507222034-391940". 
You may want to delete it manually : delete pause-20210507222034-391940: docker rm -f -v pause-20210507222034-391940: exit status 1
stdout:
stderr:
Error response from daemon: removal of container pause-20210507222034-391940 is already in progress
** /stderr **
helpers_test.go:176: failed cleanup: signal: killed
--- FAIL: TestPause (1025.00s)
    --- FAIL: TestPause/serial (904.99s)
        --- PASS: TestPause/serial/Start (167.34s)
        --- PASS: TestPause/serial/SecondStartNoReconfiguration (17.15s)
        --- PASS: TestPause/serial/Pause (0.67s)
        --- PASS: TestPause/serial/VerifyStatus (0.50s)
        --- PASS: TestPause/serial/Unpause (0.79s)
        --- PASS: TestPause/serial/PauseAgain (7.51s)
        --- FAIL: TestPause/serial/DeletePaused (711.05s)
=== CONT TestNetworkPlugins/group/custom-weave
=== RUN TestNetworkPlugins/group/custom-weave/Start
net_test.go:83: (dbg) Run: out/minikube-linux-amd64 start -p custom-weave-20210507223739-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker --container-runtime=containerd
E0507 22:37:40.847638 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
E0507 22:37:50.386062 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory
E0507 22:38:08.533398 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
=== CONT TestNetworkPlugins/group/auto/DNS
net_test.go:144: (dbg) Run: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default
=== RUN TestNetworkPlugins/group/auto/Localhost
net_test.go:163: (dbg) Run: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
=== RUN TestNetworkPlugins/group/auto/HairPin
net_test.go:176: (dbg) Run: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:186: hairpin connection unexpectedly succeeded - misconfigured test?
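The HairPin probe above asks whether the netcat pod can reach itself through its own service name ("netcat" resolves to the service VIP, which routes back to the pod). For this CNI configuration the test expected that connection to fail, so the success is flagged as suspicious. The probe boils down to one kubectl exec, sketched here with the auto profile's context from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same probe the test runs: from inside the netcat pod, dial the "netcat"
	// service on port 8080 with a 5s timeout.
	cmd := exec.Command("kubectl", "--context", "auto-20210507223250-391940",
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
	if err := cmd.Run(); err != nil {
		fmt.Println("hairpin connection failed:", err) // the outcome the test expected here
		return
	}
	fmt.Println("hairpin connection succeeded") // what the log flags as unexpected
}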
=== CONT TestNetworkPlugins/group/auto net_test.go:192: "auto" test finished in 17m35.474756322s, failed=true net_test.go:193: *** TestNetworkPlugins/group/auto FAILED at 2021-05-07 22:38:09.532422345 +0000 UTC m=+2923.406654217 helpers_test.go:218: -----------------------post-mortem-------------------------------- helpers_test.go:226: ======> post-mortem[TestNetworkPlugins/group/auto]: docker inspect <====== helpers_test.go:227: (dbg) Run: docker inspect auto-20210507223250-391940 helpers_test.go:231: (dbg) docker inspect auto-20210507223250-391940: -- stdout -- [ { "Id": "584172cb92872eabc1db5f4d470cb72c14881b3531344d18919ca64b2cc3a9cd", "Created": "2021-05-07T22:32:51.632588879Z", "Path": "/usr/local/bin/entrypoint", "Args": [ "/sbin/init" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 628977, "ExitCode": 0, "Error": "", "StartedAt": "2021-05-07T22:32:52.119688027Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": "sha256:bcd131522525c9c3b8695a8d144be8d177bcd5614ec5397f188115d3be0bbc24", "ResolvConfPath": "/var/lib/docker/containers/584172cb92872eabc1db5f4d470cb72c14881b3531344d18919ca64b2cc3a9cd/resolv.conf", "HostnamePath": "/var/lib/docker/containers/584172cb92872eabc1db5f4d470cb72c14881b3531344d18919ca64b2cc3a9cd/hostname", "HostsPath": "/var/lib/docker/containers/584172cb92872eabc1db5f4d470cb72c14881b3531344d18919ca64b2cc3a9cd/hosts", "LogPath": "/var/lib/docker/containers/584172cb92872eabc1db5f4d470cb72c14881b3531344d18919ca64b2cc3a9cd/584172cb92872eabc1db5f4d470cb72c14881b3531344d18919ca64b2cc3a9cd-json.log", "Name": "/auto-20210507223250-391940", "RestartCount": 0, "Driver": "overlay2", "Platform": "linux", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "", "ExecIDs": null, "HostConfig": { "Binds": [ "/lib/modules:/lib/modules:ro", "auto-20210507223250-391940:/var" ], "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "auto-20210507223250-391940", "PortBindings": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ] }, "RestartPolicy": { "Name": "no", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": null, "CapDrop": null, "Capabilities": null, "Dns": [], "DnsOptions": [], "DnsSearch": [], "ExtraHosts": null, "GroupAdd": null, "IpcMode": "private", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": true, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": [ "seccomp=unconfined", "apparmor=unconfined", "label=disable" ], "Tmpfs": { "/run": "", "/tmp": "" }, "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 0, "Memory": 0, "NanoCpus": 2000000000, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": [], "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [], "DeviceCgroupRules": null, "DeviceRequests": null, "KernelMemory": 0, "KernelMemoryTCP": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": null, 
"Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": null, "ReadonlyPaths": null }, "GraphDriver": { "Data": { "LowerDir": "/var/lib/docker/overlay2/4c5e469ce5672e493ce809e45129bbf0bd19c3ce07446989def40d431fa86b0e-init/diff:/var/lib/docker/overlay2/1e5fa0ed3c3f4bec9b97cabd8aaa709f5915b54c42d527ba46e8ffa9ebcb7f9a/diff:/var/lib/docker/overlay2/00098e5ff94787f022c282488f937bf3694bcc2f80e6f324f2cb94189fadc609/diff:/var/lib/docker/overlay2/0751219afdacf9c8a75fced952b1ad013a8d5b6fbee07adc96e9f305877d0131/diff:/var/lib/docker/overlay2/4fed3d3ec94e4b275966ac815cabeee3572325ca655dcb69e8d31d2051468a10/diff:/var/lib/docker/overlay2/a78b251d86ddd3460876cbc21fef7421c2e76ba3f3198b79f3af7fe8092297f6/diff:/var/lib/docker/overlay2/f3609509e8e931753320e2da77988a3cdd78a58c167b428b96a3aa29971edb5e/diff:/var/lib/docker/overlay2/ebeb53c34330c6713e55bb0d98076f6618884e3bdcd6b888ad1965c69f65b14d/diff:/var/lib/docker/overlay2/1efdecf3c4a2226dd59cc51906581e2326beec3a6b7090c09e437b80c90794b0/diff:/var/lib/docker/overlay2/4c7309d0146fa644c2eb195cb344f6b10894237fb65248ee8391d1790ac7f765/diff:/var/lib/docker/overlay2/424a19d5d18bedf5b29c5b9ffd2c72e8c9e112f2fd414acd046bfa963d0526c7/diff:/var/lib/docker/overlay2/1846dd5e13995c56277d370ac401df36ad796851e8f2315dfab9ff02f487b8fc/diff:/var/lib/docker/overlay2/9393786bec1ad7d470bbbb5c7a94ec2131900fa0c6d2ad39b1039fc6795a2683/diff:/var/lib/docker/overlay2/708ff6a0ffe352ea29dabc0c453ebb09ccede3e24ae9f3fb51e06680ed43e597/diff:/var/lib/docker/overlay2/5a536ba767666ddc007ad059bfa077204239088ff6093831b1b5a0aff36a88ea/diff:/var/lib/docker/overlay2/1d4b0ac5e44186da0f4ee859bb5c23df30087789d88e253dfd57e0ffb21bb88c/diff:/var/lib/docker/overlay2/2b67d6a3428317a2f483420befe919fd660743c5f1494d075867507afe929344/diff:/var/lib/docker/overlay2/abef0f23a7f068f22910d10fcf3ed65c4804f84a4a9aa126a6ac79666f87ab63/diff:/var/lib/docker/overlay2/ec0c450f32e0e573b78fc8537f87456c96a10f353e8bb6e28b4cde51d4b78237/diff:/var/lib/docker/overlay2/ba3b904a6ce3d016a1ef237a88f0e5d4d3b08a8c68e6e4c808b54ffb59e19ee3/diff:/var/lib/docker/overlay2/160d3a3a918b002bb27e1f108db150483cfb4c1383ab9bea5f7d5b983af0f57f/diff:/var/lib/docker/overlay2/ed771b935b96f93ce682cdd9d22155225a918436de84fb5d56eb6214e36d7e27/diff:/var/lib/docker/overlay2/a298f74d3f51b9716985e7c6a84a4fe16a9badceeb4fbcc5847e9313a496c203/diff:/var/lib/docker/overlay2/7f4ddade1e222fcfd5747b07b270a54575ecfdbdf23dc72c6aa8984cb14b4f6b/diff:/var/lib/docker/overlay2/8522467e2a2b9517f0e9fe828bf20d40830fb4364323ea1b17c1ae43e68f1633/diff:/var/lib/docker/overlay2/7b8ac1e2dcffd2cd29a0fe315f23ba717abac176d21484016b19e33e1ceb3f15/diff:/var/lib/docker/overlay2/219fbaff646669aefdda08db39e5c449632d42e036ba372e6fbfd2e74d05895c/diff:/var/lib/docker/overlay2/169017ab906e8cd6c768272fbbd27db4564b7ea84520773194f7b8d1c5725ce4/diff:/var/lib/docker/overlay2/3f2355256f7a67382c67f2079a79f9a3568cd4aac75dcb8e549d040ea3e3801c/diff:/var/lib/docker/overlay2/049eedb4ea37711e06782dfa1648c66d0e215e8b8eb540da6bd9b7729e88b4c6/diff:/var/lib/docker/overlay2/685ece42c012e8b988affc555e627ea46a42003f7fb6511dc68fb9da6c515fd8/diff:/var/lib/docker/overlay2/224f8f237d1ebeb57711074d5b9338b377abc164e67d85cd8b48264062798e8a/diff:/var/lib/docker/overlay2/280191c44865a7db266046c55f36cee27c985b893bca0a97310569a5df684c8a/diff:/var/lib/docker/overlay2/2a04e90c25bcb0264edd485b59f54c8e6c28a2d0c63f696590f1876b164e0ad8/diff:/var/lib/docker/overlay2/9c5536844b05a6fcc7c6de17ba2cd59669716e44474ac06421119d86c04f197e/diff:/var/lib/docker/overlay2/0db732ad071396257422
60350f06f46f9978ae313af26f4afdab09884382542c/diff:/var/lib/docker/overlay2/d7e4510c4ab4dcfcd652b63a086da8e4f53866cf61cc72dfacd6e24a7ba895ac/diff", "MergedDir": "/var/lib/docker/overlay2/4c5e469ce5672e493ce809e45129bbf0bd19c3ce07446989def40d431fa86b0e/merged", "UpperDir": "/var/lib/docker/overlay2/4c5e469ce5672e493ce809e45129bbf0bd19c3ce07446989def40d431fa86b0e/diff", "WorkDir": "/var/lib/docker/overlay2/4c5e469ce5672e493ce809e45129bbf0bd19c3ce07446989def40d431fa86b0e/work" }, "Name": "overlay2" }, "Mounts": [ { "Type": "volume", "Name": "auto-20210507223250-391940", "Source": "/var/lib/docker/volumes/auto-20210507223250-391940/_data", "Destination": "/var", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "" }, { "Type": "bind", "Source": "/lib/modules", "Destination": "/lib/modules", "Mode": "ro", "RW": false, "Propagation": "rprivate" } ], "Config": { "Hostname": "auto-20210507223250-391940", "Domainname": "", "User": "root", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "ExposedPorts": { "22/tcp": {}, "2376/tcp": {}, "32443/tcp": {}, "5000/tcp": {}, "8443/tcp": {} }, "Tty": true, "OpenStdin": false, "StdinOnce": false, "Env": [ "container=docker", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ], "Cmd": null, "Image": "gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e", "Volumes": null, "WorkingDir": "", "Entrypoint": [ "/usr/local/bin/entrypoint", "/sbin/init" ], "OnBuild": null, "Labels": { "created_by.minikube.sigs.k8s.io": "true", "mode.minikube.sigs.k8s.io": "auto-20210507223250-391940", "name.minikube.sigs.k8s.io": "auto-20210507223250-391940", "role.minikube.sigs.k8s.io": "" }, "StopSignal": "SIGRTMIN+3" }, "NetworkSettings": { "Bridge": "", "SandboxID": "4888d3d0758338ca13bf27a0e0ecc192a3f8756cc980d0479286e98622f34b62", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "33286" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "33285" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "33282" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "33284" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "33283" } ] }, "SandboxKey": "/var/run/docker/netns/4888d3d07583", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "auto-20210507223250-391940": { "IPAMConfig": { "IPv4Address": "192.168.58.2" }, "Links": null, "Aliases": [ "584172cb9287" ], "NetworkID": "d814ab98e4bf8749e47b5ef0d6ace979f3007e4564d0c460cc5dd7cf56350ffe", "EndpointID": "1f32e52284aadf3bf9e17859242829a8e5700f1a7bd08cab86b8af4e39114337", "Gateway": "192.168.58.1", "IPAddress": "192.168.58.2", "IPPrefixLen": 24, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:c0:a8:3a:02", "DriverOpts": null } } } } ] -- /stdout -- helpers_test.go:235: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p auto-20210507223250-391940 -n auto-20210507223250-391940 helpers_test.go:240: <<< TestNetworkPlugins/group/auto FAILED: start of post-mortem logs <<< helpers_test.go:241: ======> post-mortem[TestNetworkPlugins/group/auto]: minikube logs <====== helpers_test.go:243: (dbg) Run: out/minikube-linux-amd64 -p auto-20210507223250-391940 logs -n 25 helpers_test.go:248: 
TestNetworkPlugins/group/auto logs: -- stdout -- * * ==> Audit <== * |---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------| | Command | Args | Profile | User | Version | Start Time | End Time | |---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------| | start | -p newest-cni-20210507223028-391940 --memory=2200 | newest-cni-20210507223028-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:31:36 UTC | Fri, 07 May 2021 22:32:43 UTC | | | --alsologtostderr --wait=apiserver,system_pods,default_sa | | | | | | | | --feature-gates ServerSideApply=true --network-plugin=cni | | | | | | | | --extra-config=kubelet.network-plugin=cni | | | | | | | | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 | | | | | | | | --driver=docker --container-runtime=containerd | | | | | | | | --kubernetes-version=v1.22.0-alpha.1 | | | | | | | stop | -p | default-k8s-different-port-20210507222942-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:32:18 UTC | Fri, 07 May 2021 22:32:43 UTC | | | default-k8s-different-port-20210507222942-391940 | | | | | | | | --alsologtostderr -v=3 | | | | | | | addons | enable dashboard -p | default-k8s-different-port-20210507222942-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:32:44 UTC | Fri, 07 May 2021 22:32:44 UTC | | | default-k8s-different-port-20210507222942-391940 | | | | | | | ssh | -p | newest-cni-20210507223028-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:32:44 UTC | Fri, 07 May 2021 22:32:44 UTC | | | newest-cni-20210507223028-391940 | | | | | | | | sudo crictl images -o json | | | | | | | pause | -p | newest-cni-20210507223028-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:32:44 UTC | Fri, 07 May 2021 22:32:44 UTC | | | newest-cni-20210507223028-391940 | | | | | | | | --alsologtostderr -v=1 | | | | | | | unpause | -p | newest-cni-20210507223028-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:32:45 UTC | Fri, 07 May 2021 22:32:45 UTC | | | newest-cni-20210507223028-391940 | | | | | | | | --alsologtostderr -v=1 | | | | | | | delete | -p | newest-cni-20210507223028-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:32:46 UTC | Fri, 07 May 2021 22:32:49 UTC | | | newest-cni-20210507223028-391940 | | | | | | | delete | -p | newest-cni-20210507223028-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:32:49 UTC | Fri, 07 May 2021 22:32:50 UTC | | | newest-cni-20210507223028-391940 | | | | | | | start | -p | embed-certs-20210507222849-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:31:32 UTC | Fri, 07 May 2021 22:33:22 UTC | | | embed-certs-20210507222849-391940 | | | | | | | | --memory=2200 --alsologtostderr | | | | | | | | --wait=true --embed-certs | | | | | | | | --driver=docker | | | | | | | | --container-runtime=containerd | | | | | | | | --kubernetes-version=v1.20.2 | | | | | | | ssh | -p | embed-certs-20210507222849-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:33:35 UTC | Fri, 07 May 2021 22:33:35 UTC | | | embed-certs-20210507222849-391940 | | | | | | | | sudo crictl images -o json | | | | | | | pause | -p | embed-certs-20210507222849-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:33:35 UTC | Fri, 07 May 2021 22:33:35 UTC | | | embed-certs-20210507222849-391940 | | | | | | | | --alsologtostderr -v=1 | | | | | | | unpause | -p | 
embed-certs-20210507222849-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:33:36 UTC | Fri, 07 May 2021 22:33:37 UTC | | | embed-certs-20210507222849-391940 | | | | | | | | --alsologtostderr -v=1 | | | | | | | delete | -p | embed-certs-20210507222849-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:33:37 UTC | Fri, 07 May 2021 22:33:41 UTC | | | embed-certs-20210507222849-391940 | | | | | | | delete | -p | embed-certs-20210507222849-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:33:41 UTC | Fri, 07 May 2021 22:33:41 UTC | | | embed-certs-20210507222849-391940 | | | | | | | start | -p | default-k8s-different-port-20210507222942-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:32:44 UTC | Fri, 07 May 2021 22:34:37 UTC | | | default-k8s-different-port-20210507222942-391940 | | | | | | | | --memory=2200 --alsologtostderr --wait=true | | | | | | | | --apiserver-port=8444 --driver=docker | | | | | | | | --container-runtime=containerd | | | | | | | | --kubernetes-version=v1.20.2 | | | | | | | ssh | -p | default-k8s-different-port-20210507222942-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:34:48 UTC | Fri, 07 May 2021 22:34:48 UTC | | | default-k8s-different-port-20210507222942-391940 | | | | | | | | sudo crictl images -o json | | | | | | | pause | -p | default-k8s-different-port-20210507222942-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:34:48 UTC | Fri, 07 May 2021 22:34:49 UTC | | | default-k8s-different-port-20210507222942-391940 | | | | | | | | --alsologtostderr -v=1 | | | | | | | unpause | -p | default-k8s-different-port-20210507222942-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:34:49 UTC | Fri, 07 May 2021 22:34:50 UTC | | | default-k8s-different-port-20210507222942-391940 | | | | | | | | --alsologtostderr -v=1 | | | | | | | delete | -p | default-k8s-different-port-20210507222942-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:34:51 UTC | Fri, 07 May 2021 22:34:54 UTC | | | default-k8s-different-port-20210507222942-391940 | | | | | | | delete | -p | default-k8s-different-port-20210507222942-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:34:54 UTC | Fri, 07 May 2021 22:34:55 UTC | | | default-k8s-different-port-20210507222942-391940 | | | | | | | start | -p auto-20210507223250-391940 | auto-20210507223250-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:32:50 UTC | Fri, 07 May 2021 22:35:18 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --driver=docker | | | | | | | | --container-runtime=containerd | | | | | | | ssh | -p auto-20210507223250-391940 | auto-20210507223250-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:35:18 UTC | Fri, 07 May 2021 22:35:18 UTC | | | pgrep -a kubelet | | | | | | | start | -p | cilium-20210507223455-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:34:55 UTC | Fri, 07 May 2021 22:37:15 UTC | | | cilium-20210507223455-391940 | | | | | | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=cilium --driver=docker | | | | | | | | --container-runtime=containerd | | | | | | | ssh | -p | cilium-20210507223455-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:37:20 UTC | Fri, 07 May 2021 22:37:21 UTC | | | cilium-20210507223455-391940 | | | | | | | | pgrep -a kubelet | | | | | | | delete | -p | cilium-20210507223455-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:37:29 UTC | Fri, 07 May 2021 22:37:33 UTC | | | cilium-20210507223455-391940 | | | | | | 
|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------| * * ==> Last Start <== * Log file created at: 2021/05/07 22:37:39 Running on machine: debian-jenkins-agent-11 Binary: Built with gc go1.16.1 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0507 22:37:39.112187 649966 out.go:291] Setting OutFile to fd 1 ... I0507 22:37:39.112266 649966 out.go:338] TERM=,COLORTERM=, which probably does not support color I0507 22:37:39.112271 649966 out.go:304] Setting ErrFile to fd 2... I0507 22:37:39.112276 649966 out.go:338] TERM=,COLORTERM=, which probably does not support color I0507 22:37:39.112402 649966 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/bin I0507 22:37:39.112689 649966 out.go:298] Setting JSON to false I0507 22:37:39.150760 649966 start.go:108] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":11827,"bootTime":1620415232,"procs":356,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"} I0507 22:37:39.150871 649966 start.go:118] virtualization: kvm guest I0507 22:37:39.154604 649966 out.go:170] * [custom-weave-20210507223739-391940] minikube v1.20.0 on Debian 9.13 (kvm/amd64) I0507 22:37:39.156274 649966 out.go:170] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig I0507 22:37:39.158021 649966 out.go:170] - MINIKUBE_BIN=out/minikube-linux-amd64 I0507 22:37:39.159569 649966 out.go:170] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube I0507 22:37:39.161030 649966 out.go:170] - MINIKUBE_LOCATION=master I0507 22:37:39.161708 649966 driver.go:322] Setting default libvirt URI to qemu:///system I0507 22:37:39.214022 649966 docker.go:119] docker version: linux-19.03.15 I0507 22:37:39.214107 649966 cli_runner.go:115] Run: docker system info --format "{{json .}}" I0507 22:37:39.297831 649966 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:true NGoroutines:79 SystemTime:2021-05-07 22:37:39.249808134 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] 
IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}} I0507 22:37:39.297941 649966 docker.go:225] overlay module found I0507 22:37:39.300479 649966 out.go:170] * Using the docker driver based on user configuration I0507 22:37:39.300510 649966 start.go:276] selected driver: docker I0507 22:37:39.300516 649966 start.go:718] validating driver "docker" against I0507 22:37:39.300532 649966 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:} W0507 22:37:39.300575 649966 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. W0507 22:37:39.300585 649966 out.go:424] no arguments passed for "! Your cgroup does not allow setting memory.\n" - returning raw string W0507 22:37:39.300601 649966 out.go:235] ! Your cgroup does not allow setting memory. W0507 22:37:39.300609 649966 out.go:424] no arguments passed for " - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities\n" - returning raw string I0507 22:37:39.302083 649966 out.go:170] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0507 22:37:39.302978 649966 cli_runner.go:115] Run: docker system info --format "{{json .}}" I0507 22:37:39.390669 649966 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:true NGoroutines:79 SystemTime:2021-05-07 22:37:39.339676232 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 
GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}} I0507 22:37:39.390764 649966 start_flags.go:259] no existing cluster config was found, will generate one from the flags I0507 22:37:39.390945 649966 start_flags.go:733] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] I0507 22:37:39.390970 649966 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml" I0507 22:37:39.390985 649966 start_flags.go:268] Found "testdata/weavenet.yaml" CNI - setting NetworkPlugin=cni I0507 22:37:39.390998 649966 start_flags.go:273] config: {Name:custom-weave-20210507223739-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:custom-weave-20210507223739-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} I0507 22:37:39.393178 649966 out.go:170] * Starting control plane node custom-weave-20210507223739-391940 in cluster custom-weave-20210507223739-391940 I0507 22:37:39.393227 649966 cache.go:111] Beginning downloading kic base image for docker with containerd W0507 22:37:39.393237 649966 out.go:424] no arguments passed for "* Pulling base image ...\n" - returning raw string W0507 22:37:39.393259 649966 out.go:424] no arguments passed for "* Pulling base image ...\n" - returning raw string I0507 22:37:39.394829 649966 out.go:170] * Pulling base image ... 
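
Annotation: the repeated `docker system info --format "{{json .}}"` runs above are how the start path decides, before creating the node container, whether the host cgroups can enforce the requested memory limit; the SwapLimit:false field in the decoded info is what produces the "Your cgroup does not allow setting memory" / "No swap limit support" warnings. A minimal standalone sketch of that probe, not minikube's actual code, decoding only the two flags visible in the log:

    // capcheck.go - probe the Docker daemon for cgroup limit support via
    // the same command the log shows: docker system info --format "{{json .}}"
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    // dockerInfo keeps just the two fields relevant to the warnings above;
    // the real info JSON carries many more (see the struct dumps in the log).
    type dockerInfo struct {
        MemoryLimit bool
        SwapLimit   bool
    }

    func main() {
        out, err := exec.Command("docker", "system", "info",
            "--format", "{{json .}}").Output()
        if err != nil {
            log.Fatalf("docker system info: %v", err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            log.Fatalf("decoding info: %v", err)
        }
        if !info.MemoryLimit {
            fmt.Println("! Your cgroup does not allow setting memory.")
        }
        if !info.SwapLimit {
            fmt.Println("! kernel has no swap limit support; -memory is best effort")
        }
    }
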
I0507 22:37:39.394876 649966 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd I0507 22:37:39.394927 649966 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4 I0507 22:37:39.394940 649966 cache.go:54] Caching tarball of preloaded images I0507 22:37:39.394955 649966 preload.go:132] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download I0507 22:37:39.394965 649966 cache.go:57] Finished verifying existence of preloaded tar for v1.20.2 on containerd I0507 22:37:39.394968 649966 image.go:116] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory I0507 22:37:39.394995 649966 image.go:119] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory, skipping pull I0507 22:37:39.395002 649966 cache.go:131] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in cache, skipping pull I0507 22:37:39.395030 649966 image.go:130] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon I0507 22:37:39.395088 649966 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/config.json ... 
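
Annotation: the preload.go entries above are a cache hit, which is why this run skips the tarball download entirely. A sketch of the same existence check, assuming the filename pattern visible in the log (preloaded-images-k8s-v10-<k8s version>-<runtime>-overlay2-amd64.tar.lz4); cacheRoot is a hypothetical stand-in for the .minikube directory:

    // preloadcheck.go - is a preloaded-images tarball already in the local cache?
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // preloadPath builds the cache path using the naming scheme seen in the log.
    func preloadPath(cacheRoot, k8sVersion, runtime string) string {
        name := fmt.Sprintf(
            "preloaded-images-k8s-v10-%s-%s-overlay2-amd64.tar.lz4",
            k8sVersion, runtime)
        return filepath.Join(cacheRoot, "cache", "preloaded-tarball", name)
    }

    func main() {
        p := preloadPath(filepath.Join(os.Getenv("HOME"), ".minikube"),
            "v1.20.2", "containerd")
        if _, err := os.Stat(p); err == nil {
            fmt.Println("Found", p, "in cache, skipping download")
        } else {
            fmt.Println("no local preload, would download:", p)
        }
    }
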
I0507 22:37:39.395118 649966 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/config.json: {Name:mk196c3fc0d670fa4aa2c1ff12c3193d9becc3ae Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:37:39.476258 649966 image.go:134] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon, skipping pull I0507 22:37:39.476285 649966 cache.go:155] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in daemon, skipping pull I0507 22:37:39.476302 649966 cache.go:194] Successfully downloaded all kic artifacts I0507 22:37:39.476337 649966 start.go:313] acquiring machines lock for custom-weave-20210507223739-391940: {Name:mk76a5c9479df4112c0c61cdd7a927339ff69574 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0507 22:37:39.476471 649966 start.go:317] acquired machines lock for "custom-weave-20210507223739-391940" in 109.962µs I0507 22:37:39.476511 649966 start.go:89] Provisioning new machine with config: &{Name:custom-weave-20210507223739-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:custom-weave-20210507223739-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true} I0507 22:37:39.476616 649966 start.go:126] createHost starting for "" (driver="docker") I0507 22:37:38.636044 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:41.136299 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:38.628258 648662 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v 
/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20210507223733-391940:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir: (4.151640511s) I0507 22:37:38.628301 648662 kic.go:188] duration metric: took 4.151846 seconds to extract preloaded images to volume I0507 22:37:38.628366 648662 cli_runner.go:115] Run: docker container inspect calico-20210507223733-391940 --format={{.State.Status}} I0507 22:37:38.667331 648662 machine.go:88] provisioning docker machine ... I0507 22:37:38.667377 648662 ubuntu.go:169] provisioning hostname "calico-20210507223733-391940" I0507 22:37:38.667433 648662 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210507223733-391940 I0507 22:37:38.705203 648662 main.go:128] libmachine: Using SSH client type: native I0507 22:37:38.705414 648662 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802720] 0x8026e0 [] 0s} 127.0.0.1 33301 } I0507 22:37:38.705436 648662 main.go:128] libmachine: About to run SSH command: sudo hostname calico-20210507223733-391940 && echo "calico-20210507223733-391940" | sudo tee /etc/hostname I0507 22:37:38.839316 648662 main.go:128] libmachine: SSH cmd err, output: : calico-20210507223733-391940 I0507 22:37:38.839400 648662 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210507223733-391940 I0507 22:37:38.881294 648662 main.go:128] libmachine: Using SSH client type: native I0507 22:37:38.881486 648662 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802720] 0x8026e0 [] 0s} 127.0.0.1 33301 } I0507 22:37:38.881509 648662 main.go:128] libmachine: About to run SSH command: if ! 
grep -xq '.*\scalico-20210507223733-391940' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20210507223733-391940/g' /etc/hosts; else echo '127.0.1.1 calico-20210507223733-391940' | sudo tee -a /etc/hosts; fi fi I0507 22:37:38.994853 648662 main.go:128] libmachine: SSH cmd err, output: : I0507 22:37:38.994900 648662 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube} I0507 22:37:38.994942 648662 ubuntu.go:177] setting up certificates I0507 22:37:38.994954 648662 provision.go:83] configureAuth start I0507 22:37:38.995015 648662 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210507223733-391940 I0507 22:37:39.034039 648662 provision.go:137] copyHostCerts I0507 22:37:39.034096 648662 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.pem, removing ... I0507 22:37:39.034109 648662 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.pem I0507 22:37:39.034173 648662 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.pem (1078 bytes) I0507 22:37:39.034238 648662 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cert.pem, removing ... 
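
Annotation: the SSH command beginning at "About to run SSH command: if !" above is a single multi-line shell snippet that keeps the 127.0.1.1 entry in the guest's /etc/hosts pointed at the node hostname; the line wrap makes it hard to read. This sketch rebuilds the same command for an arbitrary hostname (the shell text is taken verbatim from the log; the Go wrapper around it is illustrative):

    // etchosts.go - reconstruct the /etc/hosts patch command from the log:
    // rewrite an existing 127.0.1.1 line, or append one, for the node hostname.
    package main

    import "fmt"

    func ensureHostsCmd(hostname string) string {
        return fmt.Sprintf(
            "if ! grep -xq '.*\\s%[1]s' /etc/hosts; then\n"+
                "  if grep -xq '127.0.1.1\\s.*' /etc/hosts; then\n"+
                "    sudo sed -i 's/^127.0.1.1\\s.*/127.0.1.1 %[1]s/g' /etc/hosts;\n"+
                "  else\n"+
                "    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;\n"+
                "  fi\n"+
                "fi", hostname)
    }

    func main() {
        fmt.Println(ensureHostsCmd("calico-20210507223733-391940"))
    }
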
I0507 22:37:39.034248 648662 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cert.pem I0507 22:37:39.034268 648662 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cert.pem (1123 bytes) I0507 22:37:39.034351 648662 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/key.pem, removing ... I0507 22:37:39.034360 648662 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/key.pem I0507 22:37:39.034383 648662 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/key.pem (1675 bytes) I0507 22:37:39.034424 648662 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca-key.pem org=jenkins.calico-20210507223733-391940 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20210507223733-391940] I0507 22:37:39.147893 648662 provision.go:165] copyRemoteCerts I0507 22:37:39.147938 648662 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0507 22:37:39.147980 648662 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210507223733-391940 I0507 22:37:39.190105 648662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33301 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/calico-20210507223733-391940/id_rsa Username:docker} I0507 22:37:39.278963 648662 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes) I0507 22:37:39.297705 648662 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I0507 22:37:39.314446 648662 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes) I0507 22:37:39.330910 648662 provision.go:86] duration metric: configureAuth took 335.943867ms I0507 22:37:39.330937 648662 ubuntu.go:193] setting minikube options for container-runtime I0507 22:37:39.331088 648662 machine.go:91] provisioned 
docker machine in 663.735504ms I0507 22:37:39.331102 648662 client.go:171] LocalClient.Create took 5.858499775s I0507 22:37:39.331126 648662 start.go:168] duration metric: libmachine.API.Create for "calico-20210507223733-391940" took 5.858553646s I0507 22:37:39.331139 648662 start.go:267] post-start starting for "calico-20210507223733-391940" (driver="docker") I0507 22:37:39.331145 648662 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0507 22:37:39.331199 648662 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0507 22:37:39.331244 648662 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210507223733-391940 I0507 22:37:39.373651 648662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33301 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/calico-20210507223733-391940/id_rsa Username:docker} I0507 22:37:39.459445 648662 ssh_runner.go:149] Run: cat /etc/os-release I0507 22:37:39.462421 648662 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0507 22:37:39.462449 648662 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0507 22:37:39.462469 648662 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0507 22:37:39.462481 648662 info.go:137] Remote host: Ubuntu 20.04.2 LTS I0507 22:37:39.462497 648662 filesync.go:118] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/addons for local assets ... I0507 22:37:39.462555 648662 filesync.go:118] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/files for local assets ... I0507 22:37:39.462698 648662 start.go:270] post-start completed in 131.551187ms I0507 22:37:39.463081 648662 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210507223733-391940 I0507 22:37:39.504189 648662 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/config.json ... 
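
Annotation: the three "Couldn't set key ..., no corresponding struct field found" lines above come from decoding the guest's /etc/os-release into a fixed struct, where keys the struct does not know (PRIVACY_POLICY_URL, VERSION_CODENAME, UBUNTU_CODENAME) are reported and skipped before the host is identified as Ubuntu 20.04.2 LTS. A tolerant map-based sketch of that parse; the quoting rules are an assumption about typical os-release files, not taken from the log:

    // osrelease.go - parse /etc/os-release key=value pairs into a map,
    // keeping unknown keys instead of dropping them.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func parseOSRelease(path string) (map[string]string, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err
        }
        defer f.Close()

        kv := make(map[string]string)
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue // skip blanks and comments
            }
            parts := strings.SplitN(line, "=", 2)
            if len(parts) != 2 {
                continue
            }
            kv[parts[0]] = strings.Trim(parts[1], `"`) // values may be quoted
        }
        return kv, sc.Err()
    }

    func main() {
        kv, err := parseOSRelease("/etc/os-release")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Printf("Remote host: %s\n", kv["PRETTY_NAME"]) // e.g. Ubuntu 20.04.2 LTS
    }
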
I0507 22:37:39.504415 648662 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0507 22:37:39.504465 648662 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210507223733-391940 I0507 22:37:39.548258 648662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33301 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/calico-20210507223733-391940/id_rsa Username:docker} I0507 22:37:39.631803 648662 start.go:129] duration metric: createHost completed in 6.161849611s I0507 22:37:39.631832 648662 start.go:80] releasing machines lock for "calico-20210507223733-391940", held for 6.162010714s I0507 22:37:39.631907 648662 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210507223733-391940 I0507 22:37:39.673778 648662 ssh_runner.go:149] Run: systemctl --version I0507 22:37:39.673850 648662 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210507223733-391940 I0507 22:37:39.673858 648662 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/ I0507 22:37:39.673936 648662 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210507223733-391940 I0507 22:37:39.719066 648662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33301 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/calico-20210507223733-391940/id_rsa Username:docker} I0507 22:37:39.720832 648662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33301 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/calico-20210507223733-391940/id_rsa Username:docker} I0507 22:37:39.863927 648662 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio I0507 22:37:39.873600 648662 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker I0507 22:37:39.882043 648662 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket I0507 22:37:39.897194 648662 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service I0507 22:37:39.905608 648662 ssh_runner.go:149] Run: sudo systemctl disable docker.socket I0507 22:37:39.972641 648662 ssh_runner.go:149] Run: sudo systemctl mask docker.service I0507 22:37:40.038060 648662 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker I0507 22:37:40.047120 648662 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock image-endpoint: unix:///run/containerd/containerd.sock " | sudo tee /etc/crictl.yaml" I0507 22:37:40.059729 648662 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) 
"cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKCltncnBjXQogIGFkZHJlc3MgPSAiL3J1bi9jb250YWluZXJkL2NvbnRhaW5lcmQuc29jayIKICB1aWQgPSAwCiAgZ2lkID0gMAogIG1heF9yZWN2X21lc3NhZ2Vfc2l6ZSA9IDE2Nzc3MjE2CiAgbWF4X3NlbmRfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKCltkZWJ1Z10KICBhZGRyZXNzID0gIiIKICB1aWQgPSAwCiAgZ2lkID0gMAogIGxldmVsID0gIiIKClttZXRyaWNzXQogIGFkZHJlc3MgPSAiIgogIGdycGNfaGlzdG9ncmFtID0gZmFsc2UKCltjZ3JvdXBdCiAgcGF0aCA9ICIiCgpbcGx1Z2luc10KICBbcGx1Z2lucy5jZ3JvdXBzXQogICAgbm9fcHJvbWV0aGV1cyA9IGZhbHNlCiAgW3BsdWdpbnMuY3JpXQogICAgc3RyZWFtX3NlcnZlcl9hZGRyZXNzID0gIiIKICAgIHN0cmVhbV9zZXJ2ZXJfcG9ydCA9ICIxMDAxMCIKICAgIGVuYWJsZV9zZWxpbnV4ID0gZmFsc2UKICAgIHNhbmRib3hfaW1hZ2UgPSAiazhzLmdjci5pby9wYXVzZTozLjIiCiAgICBzdGF0c19jb2xsZWN0X3BlcmlvZCA9IDEwCiAgICBzeXN0ZW1kX2Nncm91cCA9IGZhbHNlCiAgICBlbmFibGVfdGxzX3N0cmVhbWluZyA9IGZhbHNlCiAgICBtYXhfY29udGFpbmVyX2xvZ19saW5lX3NpemUgPSAxNjM4NAogICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmRdCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgbm9fcGl2b3QgPSB0cnVlCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW50aW1lLnYxLmxpbnV4IgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMubGludXhdCiAgICBzaGltID0gImNvbnRhaW5lcmQtc2hpbSIKICAgIHJ1bnRpbWUgPSAicnVuYyIKICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICBub19zaGltID0gZmFsc2UKICAgIHNoaW1fZGVidWcgPSBmYWxzZQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml" I0507 22:37:40.072520 648662 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables I0507 22:37:40.079398 648662 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. 
error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255 stdout: stderr: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory I0507 22:37:40.079467 648662 ssh_runner.go:149] Run: sudo modprobe br_netfilter I0507 22:37:40.086188 648662 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward" I0507 22:37:40.092595 648662 ssh_runner.go:149] Run: sudo systemctl daemon-reload I0507 22:37:40.153766 648662 ssh_runner.go:149] Run: sudo systemctl restart containerd I0507 22:37:40.234267 648662 start.go:368] Will wait 60s for socket path /run/containerd/containerd.sock I0507 22:37:40.234337 648662 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock I0507 22:37:40.237841 648662 start.go:393] Will wait 60s for crictl version I0507 22:37:40.237892 648662 ssh_runner.go:149] Run: sudo crictl version I0507 22:37:40.263414 648662 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1 stdout: stderr: time="2021-05-07T22:37:40Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet" I0507 22:37:39.479774 649966 out.go:197] * Creating docker container (CPUs=2, Memory=2048MB) ... I0507 22:37:39.480079 649966 start.go:160] libmachine.API.Create for "custom-weave-20210507223739-391940" (driver="docker") I0507 22:37:39.480131 649966 client.go:168] LocalClient.Create starting I0507 22:37:39.480224 649966 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem I0507 22:37:39.480269 649966 main.go:128] libmachine: Decoding PEM data... I0507 22:37:39.480302 649966 main.go:128] libmachine: Parsing certificate... I0507 22:37:39.480474 649966 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem I0507 22:37:39.480505 649966 main.go:128] libmachine: Decoding PEM data... I0507 22:37:39.480536 649966 main.go:128] libmachine: Parsing certificate... I0507 22:37:39.481420 649966 cli_runner.go:115] Run: docker network inspect custom-weave-20210507223739-391940 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0507 22:37:39.521213 649966 cli_runner.go:162] docker network inspect custom-weave-20210507223739-391940 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0507 22:37:39.521321 649966 network_create.go:249] running [docker network inspect custom-weave-20210507223739-391940] to gather additional debugging logs... 
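
Annotation: "Will wait 60s for crictl version" followed by retry.go's "will retry after 11.04660288s" above is a poll-until-deadline loop around a command that fails while containerd is still initializing; the crictl call does succeed about eleven seconds later, further down in the log. A sketch of that pattern: the 60s budget comes from the log, while the fixed backoff value here is arbitrary rather than minikube's actual backoff policy:

    // retrycmd.go - run a command until it succeeds or a deadline passes,
    // mirroring the "Will wait 60s for crictl version" loop in the log.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func retryCmd(deadline, backoff time.Duration, name string, args ...string) error {
        stop := time.Now().Add(deadline)
        for {
            out, err := exec.Command(name, args...).CombinedOutput()
            if err == nil {
                fmt.Printf("%s", out)
                return nil
            }
            if time.Now().After(stop) {
                return fmt.Errorf("%s %v timed out after %v: %v\n%s",
                    name, args, deadline, err, out)
            }
            fmt.Printf("will retry after %v: %v\n", backoff, err)
            time.Sleep(backoff)
        }
    }

    func main() {
        if err := retryCmd(60*time.Second, 11*time.Second,
            "sudo", "crictl", "version"); err != nil {
            fmt.Println(err)
        }
    }
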
I0507 22:37:39.521348 649966 cli_runner.go:115] Run: docker network inspect custom-weave-20210507223739-391940 W0507 22:37:39.561436 649966 cli_runner.go:162] docker network inspect custom-weave-20210507223739-391940 returned with exit code 1 I0507 22:37:39.561467 649966 network_create.go:252] error running [docker network inspect custom-weave-20210507223739-391940]: docker network inspect custom-weave-20210507223739-391940: exit status 1 stdout: [] stderr: Error: No such network: custom-weave-20210507223739-391940 I0507 22:37:39.561482 649966 network_create.go:254] output of [docker network inspect custom-weave-20210507223739-391940]: -- stdout -- [] -- /stdout -- ** stderr ** Error: No such network: custom-weave-20210507223739-391940 ** /stderr ** I0507 22:37:39.561530 649966 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0507 22:37:39.602410 649966 network.go:215] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-b7a55e9e83b1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:be:99:f6:89}} I0507 22:37:39.603583 649966 network.go:215] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-d814ab98e4bf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:cf:75:be:bd}} I0507 22:37:39.604701 649966 network.go:215] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-dd4724a55dc0 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:f4:8f:2d:6d}} I0507 22:37:39.605761 649966 network.go:215] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName:br-847e5338b1bf IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:0f:38:9d:c0}} I0507 22:37:39.606825 649966 network.go:215] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName:br-66090a2bc48e IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:37:3c:b5:91}} I0507 22:37:39.607925 649966 network.go:263] reserving subnet 192.168.94.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.94.0:0xc000b1c120] misses:0} I0507 22:37:39.607983 649966 network.go:210] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}} I0507 22:37:39.607998 649966 
network_create.go:100] attempt to create docker network custom-weave-20210507223739-391940 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ... I0507 22:37:39.608054 649966 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20210507223739-391940 I0507 22:37:39.685371 649966 network_create.go:84] docker network custom-weave-20210507223739-391940 192.168.94.0/24 created I0507 22:37:39.685435 649966 kic.go:106] calculated static IP "192.168.94.2" for the "custom-weave-20210507223739-391940" container I0507 22:37:39.685533 649966 cli_runner.go:115] Run: docker ps -a --format {{.Names}} I0507 22:37:39.734069 649966 cli_runner.go:115] Run: docker volume create custom-weave-20210507223739-391940 --label name.minikube.sigs.k8s.io=custom-weave-20210507223739-391940 --label created_by.minikube.sigs.k8s.io=true I0507 22:37:39.778433 649966 oci.go:102] Successfully created a docker volume custom-weave-20210507223739-391940 I0507 22:37:39.778522 649966 cli_runner.go:115] Run: docker run --rm --name custom-weave-20210507223739-391940-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20210507223739-391940 --entrypoint /usr/bin/test -v custom-weave-20210507223739-391940:/var gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -d /var/lib I0507 22:37:40.548788 649966 oci.go:106] Successfully prepared a docker volume custom-weave-20210507223739-391940 W0507 22:37:40.548852 649966 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted. W0507 22:37:40.548861 649966 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. I0507 22:37:40.548878 649966 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd I0507 22:37:40.548927 649966 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4 I0507 22:37:40.548967 649966 kic.go:179] Starting extracting preloaded images to volume ... 
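
Annotation: the network.go entries above walk candidate private /24 subnets (192.168.49.0, 58.0, 67.0, 76.0, 85.0, each already claimed by an earlier cluster's bridge) until 192.168.94.0/24 is free, reserve it, and then create the bridge with docker network create --driver=bridge --subnet=... --gateway=... A sketch of that scan: the step of 9 between candidates is inferred from the sequence in this log, not confirmed from source, and the taken map is a hypothetical input that would be built from docker network inspect on existing bridges:

    // subnetscan.go - pick the first free 192.168.x.0/24, stepping the third
    // octet by 9 as the sequence in the log suggests (49, 58, 67, 76, 85, 94).
    package main

    import "fmt"

    // firstFreeSubnet returns the first candidate /24 not present in taken.
    func firstFreeSubnet(taken map[string]bool) (string, bool) {
        for third := 49; third <= 254; third += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", third)
            if !taken[cidr] {
                return cidr, true
            }
        }
        return "", false
    }

    func main() {
        taken := map[string]bool{ // the five bridges already present in the log
            "192.168.49.0/24": true, "192.168.58.0/24": true,
            "192.168.67.0/24": true, "192.168.76.0/24": true,
            "192.168.85.0/24": true,
        }
        if cidr, ok := firstFreeSubnet(taken); ok {
            fmt.Println("using free private subnet", cidr) // 192.168.94.0/24
        }
    }
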
I0507 22:37:40.548933 649966 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'" I0507 22:37:40.549034 649966 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20210507223739-391940:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir I0507 22:37:40.635636 649966 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20210507223739-391940 --name custom-weave-20210507223739-391940 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20210507223739-391940 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20210507223739-391940 --network custom-weave-20210507223739-391940 --ip 192.168.94.2 --volume custom-weave-20210507223739-391940:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e I0507 22:37:41.169973 649966 cli_runner.go:115] Run: docker container inspect custom-weave-20210507223739-391940 --format={{.State.Running}} I0507 22:37:41.214091 649966 cli_runner.go:115] Run: docker container inspect custom-weave-20210507223739-391940 --format={{.State.Status}} I0507 22:37:41.261394 649966 cli_runner.go:115] Run: docker exec custom-weave-20210507223739-391940 stat /var/lib/dpkg/alternatives/iptables I0507 22:37:41.395410 649966 oci.go:278] the created container "custom-weave-20210507223739-391940" has a running status. I0507 22:37:41.395453 649966 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/custom-weave-20210507223739-391940/id_rsa... 
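
Annotation: the docker run --rm --entrypoint /usr/bin/tar ... -I lz4 -xf /preloaded.tar -C /extractDir invocations in this log (one completed in 4.15s for calico above, one just launched for custom-weave) unpack the preload tarball directly into the cluster's named volume before the node container boots from it. A sketch assembling the same invocation; the argument values in main are placeholders standing in for the full paths and pinned image digest shown in the log:

    // extractpreload.go - unpack a preload tarball into a named Docker volume
    // by running tar inside the kicbase image, as the cli_runner lines show.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func extractPreload(tarball, volume, baseImage string) error {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro", // tarball mounted read-only
            "-v", volume+":/extractDir",        // named volume becomes /var later
            baseImage,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("extract preload: %v\n%s", err, out)
        }
        return nil
    }

    func main() {
        err := extractPreload(
            "/path/to/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4", // placeholder
            "custom-weave-20210507223739-391940",
            "gcr.io/k8s-minikube/kicbase:v0.0.22", // log pins this by sha256 digest
        )
        if err != nil {
            log.Fatal(err)
        }
    }
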
I0507 22:37:41.782473 649966 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/custom-weave-20210507223739-391940/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0507 22:37:42.221921 649966 cli_runner.go:115] Run: docker container inspect custom-weave-20210507223739-391940 --format={{.State.Status}} I0507 22:37:42.267524 649966 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0507 22:37:42.267548 649966 kic_runner.go:115] Args: [docker exec --privileged custom-weave-20210507223739-391940 chown docker:docker /home/docker/.ssh/authorized_keys] I0507 22:37:43.137705 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:45.659738 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:51.310207 648662 ssh_runner.go:149] Run: sudo crictl version I0507 22:37:51.341781 648662 start.go:402] Version: 0.1.0 RuntimeName: containerd RuntimeVersion: 1.4.4 RuntimeApiVersion: v1alpha2 I0507 22:37:51.341854 648662 ssh_runner.go:149] Run: containerd --version I0507 22:37:48.135824 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:53.305507 648662 out.go:170] * Preparing Kubernetes v1.20.2 on containerd 1.4.4 ... I0507 22:37:53.305642 648662 cli_runner.go:115] Run: docker network inspect calico-20210507223733-391940 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0507 22:37:53.345468 648662 ssh_runner.go:149] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts I0507 22:37:53.349179 648662 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0507 22:37:53.358537 648662 localpath.go:92] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/client.crt -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/client.crt I0507 22:37:53.358661 648662 localpath.go:117] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/client.key -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/client.key I0507 22:37:53.358796 648662 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd I0507 22:37:53.358825 648662 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4 I0507 22:37:53.358870 648662 ssh_runner.go:149] Run: sudo crictl images --output json I0507 22:37:53.453606 648662 
containerd.go:571] all images are preloaded for containerd runtime.
I0507 22:37:53.453628 648662 containerd.go:481] Images already preloaded, skipping extraction
I0507 22:37:53.453678 648662 ssh_runner.go:149] Run: sudo crictl images --output json
I0507 22:37:53.475342 648662 containerd.go:571] all images are preloaded for containerd runtime.
I0507 22:37:53.475362 648662 cache_images.go:74] Images are preloaded, skipping loading
I0507 22:37:53.475408 648662 ssh_runner.go:149] Run: sudo crictl info
I0507 22:37:53.496493 648662 cni.go:93] Creating CNI manager for "calico"
I0507 22:37:53.496519 648662 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0507 22:37:53.496533 648662 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20210507223733-391940 NodeName:calico-20210507223733-391940 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0507 22:37:53.496647 648662 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.76.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "calico-20210507223733-391940"
  kubeletExtraArgs:
    node-ip: 192.168.76.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
I0507 22:37:53.496726 648662 kubeadm.go:901] kubelet [Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=calico-20210507223733-391940 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m

[Install]
 config: {KubernetesVersion:v1.20.2 ClusterName:calico-20210507223733-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
I0507 22:37:53.496771 648662 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
I0507 22:37:53.503279 648662 binaries.go:44] Found k8s binaries, skipping transfer
I0507 22:37:53.503331 648662 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0507 22:37:53.509550 648662 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (542 bytes)
I0507 22:37:53.521293 648662 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0507 22:37:53.532745 648662 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1867 bytes)
I0507 22:37:53.544674 648662 ssh_runner.go:149] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0507 22:37:53.547360 648662 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0507 22:37:54.256136 648662 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940 for IP: 192.168.76.2
I0507 22:37:54.256202 648662 certs.go:171] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.key
I0507 22:37:54.256221 648662 certs.go:171] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/proxy-client-ca.key
I0507 22:37:54.256284 648662 certs.go:282] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/client.key
I0507 22:37:54.256323 648662 certs.go:286] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/apiserver.key.31bdca25
I0507 22:37:54.256335 648662 crypto.go:69] Generating cert
/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1] I0507 22:37:54.318838 648662 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/apiserver.crt.31bdca25 ... I0507 22:37:54.318864 648662 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/apiserver.crt.31bdca25: {Name:mkf5b69b4761072f8b788bf02c8b8eedf7de007a Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:37:54.319043 648662 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/apiserver.key.31bdca25 ... I0507 22:37:54.319054 648662 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/apiserver.key.31bdca25: {Name:mk4c0dfc6d8a8e6f20ffdc28fb8781518e46a817 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:37:54.319130 648662 certs.go:297] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/apiserver.crt I0507 22:37:54.319190 648662 certs.go:301] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/apiserver.key I0507 22:37:54.319239 648662 certs.go:286] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/proxy-client.key I0507 22:37:54.319248 648662 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/proxy-client.crt with IP's: [] I0507 22:37:54.638822 648662 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/proxy-client.crt ... 
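Here crypto.go mints the API-server serving certificate with IP SANs covering the node IP (192.168.76.2), the cluster service VIP (10.96.0.1), and loopback, signed by the profile's minikubeCA. A self-contained sketch of that kind of issuance with Go's crypto/x509 (illustrative only; minikube loads its CA from ca.key rather than generating one on the fly as done here for self-containment):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA so the sketch runs standalone.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the IP SANs seen in the log.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.76.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}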
I0507 22:37:54.638855 648662 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/proxy-client.crt: {Name:mkb82c228d5a85fb1b2cc325a8badb55c6eeaa73 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:37:54.639041 648662 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/proxy-client.key ... I0507 22:37:54.639054 648662 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/proxy-client.key: {Name:mk98a022f3aa2df8ce4c9a707f5d7890fe1c30de Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:37:54.639220 648662 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/391940.pem (1338 bytes) W0507 22:37:54.639271 648662 certs.go:357] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/391940_empty.pem, impossibly tiny 0 bytes I0507 22:37:54.639289 648662 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca-key.pem (1679 bytes) I0507 22:37:54.639317 648662 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem (1078 bytes) I0507 22:37:54.639341 648662 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem (1123 bytes) I0507 22:37:54.639376 648662 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/key.pem (1675 bytes) I0507 22:37:54.640287 648662 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0507 22:37:54.657767 648662 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/apiserver.key --> 
/var/lib/minikube/certs/apiserver.key (1675 bytes) I0507 22:37:54.673704 648662 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0507 22:37:54.689232 648662 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes) I0507 22:37:54.705254 648662 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0507 22:37:54.721006 648662 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0507 22:37:54.736665 648662 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0507 22:37:54.752319 648662 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0507 22:37:54.767781 648662 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0507 22:37:54.783614 648662 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/391940.pem --> /usr/share/ca-certificates/391940.pem (1338 bytes) I0507 22:37:54.799319 648662 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0507 22:37:54.810754 648662 ssh_runner.go:149] Run: openssl version I0507 22:37:54.815430 648662 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391940.pem && ln -fs /usr/share/ca-certificates/391940.pem /etc/ssl/certs/391940.pem" I0507 22:37:54.822318 648662 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/391940.pem I0507 22:37:54.825209 648662 certs.go:402] hashing: -rw-r--r-- 1 root root 1338 May 7 21:57 /usr/share/ca-certificates/391940.pem I0507 22:37:54.825261 648662 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391940.pem I0507 22:37:54.829696 648662 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391940.pem /etc/ssl/certs/51391683.0" I0507 22:37:54.836441 648662 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0507 22:37:54.843069 648662 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0507 22:37:54.845897 648662 certs.go:402] hashing: -rw-r--r-- 1 root root 1111 May 7 21:50 /usr/share/ca-certificates/minikubeCA.pem I0507 22:37:54.845940 648662 ssh_runner.go:149] Run: openssl x509 -hash -noout -in 
/usr/share/ca-certificates/minikubeCA.pem I0507 22:37:54.850978 648662 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0507 22:37:54.857686 648662 kubeadm.go:381] StartCluster: {Name:calico-20210507223733-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:calico-20210507223733-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} I0507 22:37:54.857770 648662 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]} I0507 22:37:54.857811 648662 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system" I0507 22:37:54.880407 648662 cri.go:76] found id: "" I0507 22:37:54.880461 648662 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0507 22:37:54.886803 648662 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0507 22:37:54.893043 648662 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver I0507 22:37:54.893086 648662 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0507 22:37:54.899178 648662 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0507 22:37:54.899214 648662 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config 
/var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" W0507 22:37:55.650191 648662 out.go:424] no arguments passed for " - Generating certificates and keys ..." - returning raw string W0507 22:37:55.650239 648662 out.go:424] no arguments passed for " - Generating certificates and keys ..." - returning raw string I0507 22:37:52.779300 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:55.596476 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:55.652950 648662 out.go:197] - Generating certificates and keys ... W0507 22:37:58.314998 648662 out.go:424] no arguments passed for " - Booting up control plane ..." - returning raw string W0507 22:37:58.315030 648662 out.go:424] no arguments passed for " - Booting up control plane ..." - returning raw string I0507 22:37:55.597295 649966 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20210507223739-391940:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir: (15.048208692s) I0507 22:37:55.597328 649966 kic.go:188] duration metric: took 15.048357 seconds to extract preloaded images to volume I0507 22:37:55.597424 649966 cli_runner.go:115] Run: docker container inspect custom-weave-20210507223739-391940 --format={{.State.Status}} I0507 22:37:55.641022 649966 machine.go:88] provisioning docker machine ... I0507 22:37:55.641067 649966 ubuntu.go:169] provisioning hostname "custom-weave-20210507223739-391940" I0507 22:37:55.641134 649966 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210507223739-391940 I0507 22:37:55.682773 649966 main.go:128] libmachine: Using SSH client type: native I0507 22:37:55.683032 649966 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802720] 0x8026e0 [] 0s} 127.0.0.1 33306 } I0507 22:37:55.683055 649966 main.go:128] libmachine: About to run SSH command: sudo hostname custom-weave-20210507223739-391940 && echo "custom-weave-20210507223739-391940" | sudo tee /etc/hostname I0507 22:37:55.815554 649966 main.go:128] libmachine: SSH cmd err, output: : custom-weave-20210507223739-391940 I0507 22:37:55.815634 649966 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210507223739-391940 I0507 22:37:55.857512 649966 main.go:128] libmachine: Using SSH client type: native I0507 22:37:55.857704 649966 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802720] 0x8026e0 [] 0s} 127.0.0.1 33306 } I0507 22:37:55.857735 649966 main.go:128] libmachine: About to run SSH command: if ! 
grep -xq '.*\scustom-weave-20210507223739-391940' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-weave-20210507223739-391940/g' /etc/hosts; else echo '127.0.1.1 custom-weave-20210507223739-391940' | sudo tee -a /etc/hosts; fi fi I0507 22:37:55.970891 649966 main.go:128] libmachine: SSH cmd err, output: : I0507 22:37:55.970928 649966 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube} I0507 22:37:55.970952 649966 ubuntu.go:177] setting up certificates I0507 22:37:55.970961 649966 provision.go:83] configureAuth start I0507 22:37:55.971028 649966 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20210507223739-391940 I0507 22:37:56.011329 649966 provision.go:137] copyHostCerts I0507 22:37:56.011391 649966 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.pem, removing ... I0507 22:37:56.011404 649966 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.pem I0507 22:37:56.011472 649966 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.pem (1078 bytes) I0507 22:37:56.011659 649966 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cert.pem, removing ... 
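The copyHostCerts lines that follow use a found/rm/cp pattern: if the destination already exists it is removed, then the source is copied back with the desired mode, so repeated starts always converge on the current certs. A sketch of that pattern (helper name and paths in main are hypothetical):

package main

import (
	"io"
	"os"
	"path/filepath"
)

// replaceFile removes any existing dst ("found ..., removing ...") and then
// copies src into its place with the given mode.
func replaceFile(src, dst string, mode os.FileMode) error {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, mode)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Hypothetical paths, mirroring the certs/ca.pem -> .minikube/ca.pem copy above.
	if err := replaceFile("certs/ca.pem", "out/ca.pem", 0o644); err != nil {
		panic(err)
	}
}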
I0507 22:37:56.011678 649966 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cert.pem I0507 22:37:56.011709 649966 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cert.pem (1123 bytes) I0507 22:37:56.011768 649966 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/key.pem, removing ... I0507 22:37:56.011779 649966 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/key.pem I0507 22:37:56.011802 649966 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/key.pem (1675 bytes) I0507 22:37:56.011847 649966 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca-key.pem org=jenkins.custom-weave-20210507223739-391940 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube custom-weave-20210507223739-391940] I0507 22:37:56.140452 649966 provision.go:165] copyRemoteCerts I0507 22:37:56.140501 649966 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0507 22:37:56.140540 649966 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210507223739-391940 I0507 22:37:56.180436 649966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/custom-weave-20210507223739-391940/id_rsa Username:docker} I0507 22:37:56.262375 649966 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes) I0507 22:37:56.278656 649966 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes) I0507 22:37:56.294253 649966 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes) I0507 22:37:56.309796 649966 provision.go:86] duration metric: configureAuth took 338.816033ms I0507 22:37:56.309817 649966 ubuntu.go:193] setting minikube options for container-runtime I0507 22:37:56.309981 649966 
machine.go:91] provisioned docker machine in 668.929973ms I0507 22:37:56.309996 649966 client.go:171] LocalClient.Create took 16.829858537s I0507 22:37:56.310023 649966 start.go:168] duration metric: libmachine.API.Create for "custom-weave-20210507223739-391940" took 16.829940484s I0507 22:37:56.310036 649966 start.go:267] post-start starting for "custom-weave-20210507223739-391940" (driver="docker") I0507 22:37:56.310043 649966 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0507 22:37:56.310090 649966 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0507 22:37:56.310138 649966 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210507223739-391940 I0507 22:37:56.351372 649966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/custom-weave-20210507223739-391940/id_rsa Username:docker} I0507 22:37:56.434450 649966 ssh_runner.go:149] Run: cat /etc/os-release I0507 22:37:56.437092 649966 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0507 22:37:56.437109 649966 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0507 22:37:56.437120 649966 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0507 22:37:56.437125 649966 info.go:137] Remote host: Ubuntu 20.04.2 LTS I0507 22:37:56.437134 649966 filesync.go:118] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/addons for local assets ... I0507 22:37:56.437174 649966 filesync.go:118] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/files for local assets ... I0507 22:37:56.437261 649966 start.go:270] post-start completed in 127.219178ms I0507 22:37:56.437542 649966 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20210507223739-391940 I0507 22:37:56.477064 649966 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/config.json ... 
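The first command below samples /var usage with df piped through awk; the same check can be done natively with a statfs call. A Linux-only Go sketch (function name hypothetical, not minikube's implementation):

package main

import (
	"fmt"
	"syscall"
)

// usedPercent reports how full the filesystem backing path is; the native
// equivalent of `df -h /var | awk 'NR==2{print $5}'`.
func usedPercent(path string) (float64, error) {
	var fs syscall.Statfs_t
	if err := syscall.Statfs(path, &fs); err != nil {
		return 0, err
	}
	total := fs.Blocks * uint64(fs.Bsize)
	avail := fs.Bavail * uint64(fs.Bsize)
	return 100 * float64(total-avail) / float64(total), nil
}

func main() {
	pct, err := usedPercent("/var")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%.0f%% used\n", pct)
}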
I0507 22:37:56.477267 649966 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0507 22:37:56.477311 649966 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210507223739-391940 I0507 22:37:56.515239 649966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/custom-weave-20210507223739-391940/id_rsa Username:docker} I0507 22:37:56.595624 649966 start.go:129] duration metric: createHost completed in 17.118993774s I0507 22:37:56.595650 649966 start.go:80] releasing machines lock for "custom-weave-20210507223739-391940", held for 17.119163061s I0507 22:37:56.595728 649966 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20210507223739-391940 I0507 22:37:56.635180 649966 ssh_runner.go:149] Run: systemctl --version I0507 22:37:56.635228 649966 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210507223739-391940 I0507 22:37:56.635240 649966 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/ I0507 22:37:56.635315 649966 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210507223739-391940 I0507 22:37:56.675444 649966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/custom-weave-20210507223739-391940/id_rsa Username:docker} I0507 22:37:56.681074 649966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33306 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/custom-weave-20210507223739-391940/id_rsa Username:docker} I0507 22:37:56.755277 649966 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio I0507 22:37:56.808515 649966 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker I0507 22:37:56.817919 649966 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket I0507 22:37:56.835078 649966 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service I0507 22:37:56.843794 649966 ssh_runner.go:149] Run: sudo systemctl disable docker.socket I0507 22:37:56.908033 649966 ssh_runner.go:149] Run: sudo systemctl mask docker.service I0507 22:37:56.969352 649966 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker I0507 22:37:56.978116 649966 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock image-endpoint: unix:///run/containerd/containerd.sock " | sudo tee /etc/crictl.yaml" I0507 22:37:56.990272 649966 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) 
"cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKCltncnBjXQogIGFkZHJlc3MgPSAiL3J1bi9jb250YWluZXJkL2NvbnRhaW5lcmQuc29jayIKICB1aWQgPSAwCiAgZ2lkID0gMAogIG1heF9yZWN2X21lc3NhZ2Vfc2l6ZSA9IDE2Nzc3MjE2CiAgbWF4X3NlbmRfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKCltkZWJ1Z10KICBhZGRyZXNzID0gIiIKICB1aWQgPSAwCiAgZ2lkID0gMAogIGxldmVsID0gIiIKClttZXRyaWNzXQogIGFkZHJlc3MgPSAiIgogIGdycGNfaGlzdG9ncmFtID0gZmFsc2UKCltjZ3JvdXBdCiAgcGF0aCA9ICIiCgpbcGx1Z2luc10KICBbcGx1Z2lucy5jZ3JvdXBzXQogICAgbm9fcHJvbWV0aGV1cyA9IGZhbHNlCiAgW3BsdWdpbnMuY3JpXQogICAgc3RyZWFtX3NlcnZlcl9hZGRyZXNzID0gIiIKICAgIHN0cmVhbV9zZXJ2ZXJfcG9ydCA9ICIxMDAxMCIKICAgIGVuYWJsZV9zZWxpbnV4ID0gZmFsc2UKICAgIHNhbmRib3hfaW1hZ2UgPSAiazhzLmdjci5pby9wYXVzZTozLjIiCiAgICBzdGF0c19jb2xsZWN0X3BlcmlvZCA9IDEwCiAgICBzeXN0ZW1kX2Nncm91cCA9IGZhbHNlCiAgICBlbmFibGVfdGxzX3N0cmVhbWluZyA9IGZhbHNlCiAgICBtYXhfY29udGFpbmVyX2xvZ19saW5lX3NpemUgPSAxNjM4NAogICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmRdCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgbm9fcGl2b3QgPSB0cnVlCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW50aW1lLnYxLmxpbnV4IgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMubGludXhdCiAgICBzaGltID0gImNvbnRhaW5lcmQtc2hpbSIKICAgIHJ1bnRpbWUgPSAicnVuYyIKICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICBub19zaGltID0gZmFsc2UKICAgIHNoaW1fZGVidWcgPSBmYWxzZQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml" I0507 22:37:57.002427 649966 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables I0507 22:37:57.008579 649966 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. 
error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255 stdout: stderr: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory I0507 22:37:57.008636 649966 ssh_runner.go:149] Run: sudo modprobe br_netfilter I0507 22:37:57.015288 649966 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward" I0507 22:37:57.021224 649966 ssh_runner.go:149] Run: sudo systemctl daemon-reload I0507 22:37:57.077962 649966 ssh_runner.go:149] Run: sudo systemctl restart containerd I0507 22:37:57.152817 649966 start.go:368] Will wait 60s for socket path /run/containerd/containerd.sock I0507 22:37:57.152880 649966 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock I0507 22:37:57.156194 649966 start.go:393] Will wait 60s for crictl version I0507 22:37:57.156251 649966 ssh_runner.go:149] Run: sudo crictl version I0507 22:37:57.181354 649966 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1 stdout: stderr: time="2021-05-07T22:37:57Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet" I0507 22:37:57.634881 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:59.635199 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:58.317465 648662 out.go:197] - Booting up control plane ... I0507 22:38:02.135474 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:38:04.136331 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:38:08.229423 649966 ssh_runner.go:149] Run: sudo crictl version I0507 22:38:08.253047 649966 start.go:402] Version: 0.1.0 RuntimeName: containerd RuntimeVersion: 1.4.4 RuntimeApiVersion: v1alpha2 I0507 22:38:08.253111 649966 ssh_runner.go:149] Run: containerd --version I0507 22:38:08.278668 649966 out.go:170] * Preparing Kubernetes v1.20.2 on containerd 1.4.4 ... 
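The retry.go:31 line above is the poll-until-ready pattern at work: crictl version fails while containerd is still initializing ("server is not initialized yet"), so the step is retried within the 60-second budget announced by "Will wait 60s for crictl version". A generic sketch of such a loop with doubling backoff (parameters illustrative; minikube's own retry helper differs in detail):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryUntil keeps calling fn until it succeeds or the deadline passes,
// doubling the wait between attempts up to a cap.
func retryUntil(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	wait := time.Second
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().Add(wait).After(deadline) {
			return fmt.Errorf("timed out: last error: %w", err)
		}
		time.Sleep(wait)
		if wait < 16*time.Second {
			wait *= 2
		}
	}
}

func main() {
	attempts := 0
	err := retryUntil(10*time.Second, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("server is not initialized yet")
		}
		return nil
	})
	fmt.Println(attempts, err) // succeeds on the third attempt
}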
I0507 22:38:08.278761 649966 cli_runner.go:115] Run: docker network inspect custom-weave-20210507223739-391940 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0507 22:38:08.317254 649966 ssh_runner.go:149] Run: grep 192.168.94.1 host.minikube.internal$ /etc/hosts I0507 22:38:08.320569 649966 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0507 22:38:08.329634 649966 localpath.go:92] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/client.crt -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/client.crt I0507 22:38:08.329743 649966 localpath.go:117] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/client.key -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/client.key I0507 22:38:08.329875 649966 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd I0507 22:38:08.329907 649966 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4 I0507 22:38:08.329968 649966 ssh_runner.go:149] Run: sudo crictl images --output json I0507 22:38:08.352618 649966 containerd.go:571] all images are preloaded for containerd runtime. I0507 22:38:08.352641 649966 containerd.go:481] Images already preloaded, skipping extraction I0507 22:38:08.352683 649966 ssh_runner.go:149] Run: sudo crictl images --output json I0507 22:38:08.377561 649966 containerd.go:571] all images are preloaded for containerd runtime. 
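The /bin/bash one-liner above rewrites /etc/hosts idempotently: strip any stale line ending in a tab plus host.minikube.internal, append the fresh mapping, and copy the temp file back into place. The same idea in native Go (a sketch; the real code shells out over SSH exactly as logged):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHost drops any existing entry for name and appends "ip\tname".
func ensureHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var keep []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) { // same filter as grep -v $'\t...$'
			continue
		}
		keep = append(keep, line)
	}
	keep = append(keep, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(hostsPath, []byte(strings.Join(keep, "\n")+"\n"), 0o644)
}

func main() {
	// Demonstrated on a scratch file rather than the real /etc/hosts.
	_ = os.WriteFile("hosts.test", []byte("127.0.0.1\tlocalhost\n"), 0o644)
	if err := ensureHost("hosts.test", "192.168.94.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}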
I0507 22:38:08.377585 649966 cache_images.go:74] Images are preloaded, skipping loading
I0507 22:38:08.377624 649966 ssh_runner.go:149] Run: sudo crictl info
I0507 22:38:08.399234 649966 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
I0507 22:38:08.399267 649966 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0507 22:38:08.399300 649966 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-weave-20210507223739-391940 NodeName:custom-weave-20210507223739-391940 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0507 22:38:08.399452 649966 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.94.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "custom-weave-20210507223739-391940"
  kubeletExtraArgs:
    node-ip: 192.168.94.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
I0507 22:38:08.399589 649966 kubeadm.go:901] kubelet [Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=custom-weave-20210507223739-391940 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.94.2 --runtime-request-timeout=15m

[Install]
 config: {KubernetesVersion:v1.20.2 ClusterName:custom-weave-20210507223739-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:}
I0507 22:38:08.399644 649966 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
I0507 22:38:08.406091 649966 binaries.go:44] Found k8s binaries, skipping transfer
I0507 22:38:08.406152 649966 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0507 22:38:08.412583 649966 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (548 bytes)
I0507 22:38:08.424356 649966 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0507 22:38:08.436039 649966 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1873 bytes)
I0507 22:38:08.447730 649966 ssh_runner.go:149] Run: grep 192.168.94.2 control-plane.minikube.internal$ /etc/hosts
I0507 22:38:08.450440 649966 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0507 22:38:08.458913 649966 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940 for IP: 192.168.94.2
I0507 22:38:08.458953 649966 certs.go:171] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.key
I0507 22:38:08.458971 649966 certs.go:171] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/proxy-client-ca.key
I0507 22:38:08.459096 649966 certs.go:282] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/client.key
I0507 22:38:08.459130 649966 certs.go:286] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/apiserver.key.ad8e880a
I0507 22:38:08.459142 649966 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/apiserver.crt.ad8e880a with IP's: [192.168.94.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0507 22:38:08.803325 649966 crypto.go:157] Writing cert to
/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/apiserver.crt.ad8e880a ... I0507 22:38:08.803356 649966 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/apiserver.crt.ad8e880a: {Name:mkbad62d45a8885d99e3f57103ce1f7b06002976 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:38:08.803588 649966 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/apiserver.key.ad8e880a ... I0507 22:38:08.803607 649966 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/apiserver.key.ad8e880a: {Name:mkf5e47174731a17b42eeb6fae1b075403eea1f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:38:08.803700 649966 certs.go:297] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/apiserver.crt.ad8e880a -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/apiserver.crt I0507 22:38:08.803758 649966 certs.go:301] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/apiserver.key.ad8e880a -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/apiserver.key I0507 22:38:08.803807 649966 certs.go:286] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/proxy-client.key I0507 22:38:08.803816 649966 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/proxy-client.crt with IP's: [] I0507 22:38:08.936583 649966 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/proxy-client.crt ... I0507 22:38:08.936617 649966 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/proxy-client.crt: {Name:mkd274848de2ee6666014e1efff15397fc4fe11f Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:38:08.936781 649966 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/proxy-client.key ... 
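After these certs are scp'd into the node, the log repeats the trust-setup seen earlier for the calico profile: each CA is exposed under /etc/ssl/certs/<subject-hash>.0 via openssl x509 -hash plus a symlink, which is where OpenSSL-based clients look up trust roots (the 51391683.0 and b5213941.0 names a few lines below). A sketch of that step with a hypothetical helper, shelling out to openssl exactly as the log does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// trustCert links pemPath into /etc/ssl/certs under its OpenSSL subject hash.
func trustCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err) // needs root on a real system
	}
}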
I0507 22:38:08.936795 649966 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/proxy-client.key: {Name:mk1fc3105bb707469fe1045c3993e310a86072a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:38:08.936965 649966 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/391940.pem (1338 bytes) W0507 22:38:08.937005 649966 certs.go:357] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/391940_empty.pem, impossibly tiny 0 bytes I0507 22:38:08.937023 649966 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca-key.pem (1679 bytes) I0507 22:38:08.937048 649966 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem (1078 bytes) I0507 22:38:08.937072 649966 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem (1123 bytes) I0507 22:38:08.937095 649966 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/key.pem (1675 bytes) I0507 22:38:08.938035 649966 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0507 22:38:09.032080 649966 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) I0507 22:38:09.052185 649966 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0507 22:38:09.068683 649966 ssh_runner.go:316] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes) I0507 22:38:09.091023 649966 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0507 22:38:09.109134 649966 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0507 22:38:09.129186 649966 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0507 22:38:09.146178 649966 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0507 22:38:09.164013 649966 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/391940.pem --> /usr/share/ca-certificates/391940.pem (1338 bytes) I0507 22:38:09.182624 649966 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0507 22:38:09.203842 649966 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0507 22:38:09.221924 649966 ssh_runner.go:149] Run: openssl version I0507 22:38:09.226659 649966 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391940.pem && ln -fs /usr/share/ca-certificates/391940.pem /etc/ssl/certs/391940.pem" I0507 22:38:09.234783 649966 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/391940.pem I0507 22:38:09.240696 649966 certs.go:402] hashing: -rw-r--r-- 1 root root 1338 May 7 21:57 /usr/share/ca-certificates/391940.pem I0507 22:38:09.240752 649966 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391940.pem I0507 22:38:09.247142 649966 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391940.pem /etc/ssl/certs/51391683.0" I0507 22:38:09.257850 649966 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0507 22:38:09.266472 649966 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0507 22:38:09.269726 649966 certs.go:402] hashing: -rw-r--r-- 1 root root 1111 May 7 21:50 /usr/share/ca-certificates/minikubeCA.pem I0507 22:38:09.269781 649966 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0507 22:38:09.276293 649966 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0507 22:38:09.283842 649966 kubeadm.go:381] StartCluster: {Name:custom-weave-20210507223739-391940 KeepContext:false EmbedCerts:false MinikubeISO: 
KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:custom-weave-20210507223739-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} I0507 22:38:09.283947 649966 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]} I0507 22:38:09.284030 649966 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system" I0507 22:38:09.310725 649966 cri.go:76] found id: "" I0507 22:38:09.310798 649966 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0507 22:38:09.317878 649966 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0507 22:38:09.324857 649966 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver I0507 22:38:09.324900 649966 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0507 22:38:09.331387 649966 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0507 22:38:09.331419 649966 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml 
--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" W0507 22:38:09.659163 649966 out.go:424] no arguments passed for " - Generating certificates and keys ..." - returning raw string W0507 22:38:09.659197 649966 out.go:424] no arguments passed for " - Generating certificates and keys ..." - returning raw string * * ==> container status <== * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID e8dcaee939c9b 96572c6d9c9cb 2 minutes ago Running dnsutils 0 efe98879ab4d2 eacb2a2ddf148 bfe3a36ebd252 4 minutes ago Running coredns 0 97e96e1769a34 775eafe9c39b9 6e38f40d628db 4 minutes ago Running storage-provisioner 0 a3603b709e707 4742a4083bcb9 6de166512aa22 4 minutes ago Running kindnet-cni 0 867d443952519 b0ab7db293a15 43154ddb57a83 4 minutes ago Running kube-proxy 0 424956cf1bceb 3e47671ee59bd ed2c44fbdd78b 4 minutes ago Running kube-scheduler 0 971a9d7a6913b ea2a02ae7e02b 0369cf4303ffd 4 minutes ago Running etcd 0 5f57b82a853a7 fb0ff3e842e4d a8c2fdb8bf76e 4 minutes ago Running kube-apiserver 0 7fe87bfc1741c 00e4bddea4a7b a27166429d98e 4 minutes ago Running kube-controller-manager 0 b62872c61716a * * ==> containerd <== * -- Logs begin at Fri 2021-05-07 22:32:52 UTC, end at Fri 2021-05-07 22:38:10 UTC. -- May 07 22:35:20 auto-20210507223250-391940 containerd[458]: time="2021-05-07T22:35:20.664276953Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:96572c6d9c9cb0069d1f4305c79ccd84039ce6b44abf1f1dc7f54671ad3c5467,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 07 22:35:20 auto-20210507223250-391940 containerd[458]: time="2021-05-07T22:35:20.666294330Z" level=info msg="ImageUpdate event &ImageUpdate{Name:gcr.io/kubernetes-e2e-test-images/dnsutils:1.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 07 22:35:20 auto-20210507223250-391940 containerd[458]: time="2021-05-07T22:35:20.667936530Z" level=info msg="ImageCreate event &ImageCreate{Name:gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:b31bcf7ef4420ce7108e7fc10b6c00343b21257c945eec94c21598e72a8f2de0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 07 22:35:20 auto-20210507223250-391940 containerd[458]: time="2021-05-07T22:35:20.668271625Z" level=info msg="PullImage \"gcr.io/kubernetes-e2e-test-images/dnsutils:1.3\" returns image reference \"sha256:96572c6d9c9cb0069d1f4305c79ccd84039ce6b44abf1f1dc7f54671ad3c5467\"" May 07 22:35:20 auto-20210507223250-391940 containerd[458]: time="2021-05-07T22:35:20.669842204Z" level=info msg="CreateContainer within sandbox \"efe98879ab4d26c2734f876193fc93b65634b84d3aebd34ee89abdc6f4154ceb\" for container &ContainerMetadata{Name:dnsutils,Attempt:0,}" May 07 22:35:20 auto-20210507223250-391940 containerd[458]: time="2021-05-07T22:35:20.719965259Z" level=info msg="CreateContainer within sandbox \"efe98879ab4d26c2734f876193fc93b65634b84d3aebd34ee89abdc6f4154ceb\" for &ContainerMetadata{Name:dnsutils,Attempt:0,} returns container id \"e8dcaee939c9b7d938eb58b1c770b9c38526010e54872c402aca38a733d26ef7\"" May 07 22:35:20 auto-20210507223250-391940 containerd[458]: 
time="2021-05-07T22:35:20.720484835Z" level=info msg="StartContainer for \"e8dcaee939c9b7d938eb58b1c770b9c38526010e54872c402aca38a733d26ef7\"" May 07 22:35:20 auto-20210507223250-391940 containerd[458]: time="2021-05-07T22:35:20.721012923Z" level=warning msg="runtime v1 is deprecated since containerd v1.4, consider using runtime v2" May 07 22:35:20 auto-20210507223250-391940 containerd[458]: time="2021-05-07T22:35:20.721803226Z" level=info msg="shim containerd-shim started" address="unix:///run/containerd/s/01212572b25144999906a01f4a404b7921bf09b65d5fc9eb5a2a041408dc40f8" debug=false pid=3106 May 07 22:35:20 auto-20210507223250-391940 containerd[458]: time="2021-05-07T22:35:20.877468491Z" level=info msg="StartContainer for \"e8dcaee939c9b7d938eb58b1c770b9c38526010e54872c402aca38a733d26ef7\" returns successfully" May 07 22:38:09 auto-20210507223250-391940 containerd[458]: time="2021-05-07T22:38:09.107886584Z" level=info msg="Exec for \"e8dcaee939c9b7d938eb58b1c770b9c38526010e54872c402aca38a733d26ef7\" with command [nslookup kubernetes.default], tty false and stdin false" May 07 22:38:09 auto-20210507223250-391940 containerd[458]: time="2021-05-07T22:38:09.107964434Z" level=info msg="Exec for \"e8dcaee939c9b7d938eb58b1c770b9c38526010e54872c402aca38a733d26ef7\" returns URL \"http://192.168.58.2:10010/exec/e5HmPkbe\"" May 07 22:38:09 auto-20210507223250-391940 containerd[458]: time="2021-05-07T22:38:09.160483888Z" level=info msg="Finish piping \"stderr\" of container exec \"ea5a23b9f4456ff92f6a9c4043acd2774da40172597a3b1cd146a0207fcc148f\"" May 07 22:38:09 auto-20210507223250-391940 containerd[458]: time="2021-05-07T22:38:09.160600514Z" level=info msg="Finish piping \"stdout\" of container exec \"ea5a23b9f4456ff92f6a9c4043acd2774da40172597a3b1cd146a0207fcc148f\"" May 07 22:38:09 auto-20210507223250-391940 containerd[458]: time="2021-05-07T22:38:09.162491745Z" level=info msg="Exec process \"ea5a23b9f4456ff92f6a9c4043acd2774da40172597a3b1cd146a0207fcc148f\" exits with exit code 0 and error " May 07 22:38:09 auto-20210507223250-391940 containerd[458]: time="2021-05-07T22:38:09.295915871Z" level=info msg="Exec for \"e8dcaee939c9b7d938eb58b1c770b9c38526010e54872c402aca38a733d26ef7\" with command [/bin/sh -c nc -w 5 -i 5 -z localhost 8080], tty false and stdin false" May 07 22:38:09 auto-20210507223250-391940 containerd[458]: time="2021-05-07T22:38:09.296001265Z" level=info msg="Exec for \"e8dcaee939c9b7d938eb58b1c770b9c38526010e54872c402aca38a733d26ef7\" returns URL \"http://192.168.58.2:10010/exec/VH0Tf8rF\"" May 07 22:38:09 auto-20210507223250-391940 containerd[458]: time="2021-05-07T22:38:09.354133854Z" level=info msg="Exec process \"6632f9fa94e6f34c5369ff67ec9c8c0d298556967768fd9b1c6debe9aabe9f92\" exits with exit code 0 and error " May 07 22:38:09 auto-20210507223250-391940 containerd[458]: time="2021-05-07T22:38:09.354568369Z" level=info msg="Finish piping \"stdout\" of container exec \"6632f9fa94e6f34c5369ff67ec9c8c0d298556967768fd9b1c6debe9aabe9f92\"" May 07 22:38:09 auto-20210507223250-391940 containerd[458]: time="2021-05-07T22:38:09.354874807Z" level=info msg="Finish piping \"stderr\" of container exec \"6632f9fa94e6f34c5369ff67ec9c8c0d298556967768fd9b1c6debe9aabe9f92\"" May 07 22:38:09 auto-20210507223250-391940 containerd[458]: time="2021-05-07T22:38:09.464411703Z" level=info msg="Exec for \"e8dcaee939c9b7d938eb58b1c770b9c38526010e54872c402aca38a733d26ef7\" with command [/bin/sh -c nc -w 5 -i 5 -z netcat 8080], tty false and stdin false" May 07 22:38:09 auto-20210507223250-391940 
containerd[458]: time="2021-05-07T22:38:09.464486232Z" level=info msg="Exec for \"e8dcaee939c9b7d938eb58b1c770b9c38526010e54872c402aca38a733d26ef7\" returns URL \"http://192.168.58.2:10010/exec/y7y7sOPY\"" May 07 22:38:09 auto-20210507223250-391940 containerd[458]: time="2021-05-07T22:38:09.527186964Z" level=info msg="Finish piping \"stdout\" of container exec \"ee6ac94d791fab11e531ba2ed59d5b7ab93b84d8fb860611de6fc11c7d6f043d\"" May 07 22:38:09 auto-20210507223250-391940 containerd[458]: time="2021-05-07T22:38:09.527407880Z" level=info msg="Exec process \"ee6ac94d791fab11e531ba2ed59d5b7ab93b84d8fb860611de6fc11c7d6f043d\" exits with exit code 0 and error " May 07 22:38:09 auto-20210507223250-391940 containerd[458]: time="2021-05-07T22:38:09.527294449Z" level=info msg="Finish piping \"stderr\" of container exec \"ee6ac94d791fab11e531ba2ed59d5b7ab93b84d8fb860611de6fc11c7d6f043d\"" * * ==> coredns [eacb2a2ddf1481f03b856e438b523250740bb236c5bf583a979278607227086b] <== * .:53 [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7 CoreDNS-1.7.0 linux/amd64, go1.14.4, f59c03d * * ==> describe nodes <== * Name: auto-20210507223250-391940 Roles: control-plane,master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=auto-20210507223250-391940 kubernetes.io/os=linux minikube.k8s.io/commit=c31bd57f93d45726e4bd30607374f8c720e70e95 minikube.k8s.io/name=auto-20210507223250-391940 minikube.k8s.io/updated_at=2021_05_07T22_33_32_0700 minikube.k8s.io/version=v1.20.0 node-role.kubernetes.io/control-plane= node-role.kubernetes.io/master= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Fri, 07 May 2021 22:33:22 +0000 Taints: Unschedulable: false Lease: HolderIdentity: auto-20210507223250-391940 AcquireTime: RenewTime: Fri, 07 May 2021 22:38:09 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Fri, 07 May 2021 22:35:37 +0000 Fri, 07 May 2021 22:33:19 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Fri, 07 May 2021 22:35:37 +0000 Fri, 07 May 2021 22:33:19 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Fri, 07 May 2021 22:35:37 +0000 Fri, 07 May 2021 22:33:19 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Fri, 07 May 2021 22:35:37 +0000 Fri, 07 May 2021 22:33:57 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.58.2 Hostname: auto-20210507223250-391940 Capacity: cpu: 8 ephemeral-storage: 309568300Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32951376Ki pods: 110 Allocatable: cpu: 8 ephemeral-storage: 309568300Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32951376Ki pods: 110 System Info: Machine ID: 822f5ed6656e44929f6c2cc5d6881453 System UUID: eca1a496-3545-47ae-8eb2-18ea6fdab3ff Boot ID: a4d5e757-68dd-498f-8a27-b6d8b368f45c Kernel Version: 4.9.0-15-amd64 OS Image: Ubuntu 20.04.2 LTS Operating System: linux Architecture: amd64 Container Runtime Version: containerd://1.4.4 Kubelet Version: v1.20.2 Kube-Proxy Version: v1.20.2 PodCIDR: 10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (9 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE --------- ---- ------------ ---------- --------------- 
-------------  ---
  default      netcat-66fbc655d5-pf5zj                              0 (0%)      0 (0%)     0 (0%)      0 (0%)       2m52s
  kube-system  coredns-74ff55c5b-bb2lr                              100m (1%)   0 (0%)     70Mi (0%)   170Mi (0%)   4m23s
  kube-system  etcd-auto-20210507223250-391940                      100m (1%)   0 (0%)     100Mi (0%)  0 (0%)       4m33s
  kube-system  kindnet-vxx2n                                        100m (1%)   100m (1%)  50Mi (0%)   50Mi (0%)    4m23s
  kube-system  kube-apiserver-auto-20210507223250-391940            250m (3%)   0 (0%)     0 (0%)      0 (0%)       4m33s
  kube-system  kube-controller-manager-auto-20210507223250-391940   200m (2%)   0 (0%)     0 (0%)      0 (0%)       4m32s
  kube-system  kube-proxy-qppjx                                     0 (0%)      0 (0%)     0 (0%)      0 (0%)       4m23s
  kube-system  kube-scheduler-auto-20210507223250-391940            100m (1%)   0 (0%)     0 (0%)      0 (0%)       4m33s
  kube-system  storage-provisioner                                  0 (0%)      0 (0%)     0 (0%)      0 (0%)       4m22s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                850m (10%)   100m (1%)
  memory             220Mi (0%)   220Mi (0%)
  ephemeral-storage  100Mi (0%)   0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:
  Type    Reason                   Age                    From        Message
  ----    ------                   ----                   ----        -------
  Normal  NodeHasSufficientMemory  4m53s (x5 over 4m53s)  kubelet     Node auto-20210507223250-391940 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    4m53s (x5 over 4m53s)  kubelet     Node auto-20210507223250-391940 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     4m53s (x3 over 4m53s)  kubelet     Node auto-20210507223250-391940 status is now: NodeHasSufficientPID
  Normal  Starting                 4m33s                  kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  4m33s                  kubelet     Node auto-20210507223250-391940 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    4m33s                  kubelet     Node auto-20210507223250-391940 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     4m33s                  kubelet     Node auto-20210507223250-391940 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  4m33s                  kubelet     Updated Node Allocatable limit across pods
  Normal  Starting                 4m23s                  kube-proxy  Starting kube-proxy.
  Normal  NodeReady                4m13s                  kubelet     Node auto-20210507223250-391940 status is now: NodeReady
*
* ==> dmesg <==
*
[  +6.448163] cgroup: cgroup2: unknown option "nsdelegate"
[May 7 22:35] IPv4: martian source 10.85.0.4 from 10.85.0.4, on dev eth0
[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 6a eb cb 4d ac a0 08 06        ......j..M....
[  +5.712366] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vethfbb54055
[  +0.000002] ll header: 00000000: ff ff ff ff ff ff e6 f6 e6 0d 08 88 08 06        ..............
[ +15.298858] IPv4: martian source 10.85.0.5 from 10.85.0.5, on dev eth0
[  +0.000002] ll header: 00000000: ff ff ff ff ff ff e2 b1 14 d5 a9 07 08 06        ..............
[ +20.989013] IPv4: martian source 10.85.0.6 from 10.85.0.6, on dev eth0
[  +0.000002] ll header: 00000000: ff ff ff ff ff ff be c1 9e 70 fd 3a 08 06        .........p.:..
[May 7 22:36] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 06 2c 05 fc f1 62 08 06        .......,...b..
[ +0.000005] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0 [ +0.000001] ll header: 00000000: ff ff ff ff ff ff 06 2c 05 fc f1 62 08 06 .......,...b.. [ +10.550992] IPv4: martian source 10.85.0.7 from 10.85.0.7, on dev eth0 [ +0.000002] ll header: 00000000: ff ff ff ff ff ff be 10 de e4 ef 02 08 06 .............. [ +21.035864] IPv4: martian source 10.85.0.8 from 10.85.0.8, on dev eth0 [ +0.000002] ll header: 00000000: ff ff ff ff ff ff 7a 8f 71 29 dc 58 08 06 ......z.q).X.. [ +22.989971] IPv4: martian source 10.85.0.9 from 10.85.0.9, on dev eth0 [ +0.000002] ll header: 00000000: ff ff ff ff ff ff e6 a0 f4 3f f7 a3 08 06 .........?.... [May 7 22:37] IPv4: martian source 10.85.0.10 from 10.85.0.10, on dev eth0 [ +0.000002] ll header: 00000000: ff ff ff ff ff ff be 58 cc 1f 67 c4 08 06 .......X..g... [ +10.062800] cgroup: cgroup2: unknown option "nsdelegate" [ +6.039039] cgroup: cgroup2: unknown option "nsdelegate" [ +6.920502] IPv4: martian source 10.85.0.11 from 10.85.0.11, on dev eth0 [ +0.000002] ll header: 00000000: ff ff ff ff ff ff 02 0e 99 3d 4d eb 08 06 .........=M... * * ==> etcd [ea2a02ae7e02b5c9d65b05414dc6642bf40ada3e05579ca98be086418a612605] <== * 2021-05-07 22:37:19.681824 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-07 22:37:29.681869 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-07 22:37:39.682068 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-07 22:37:47.545047 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.770073229s) to execute 2021-05-07 22:37:47.545166 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1124" took too long (1.72688288s) to execute 2021-05-07 22:37:47.778718 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (220.471753ms) to execute 2021-05-07 22:37:50.681865 W | etcdserver/api/etcdhttp: /health error; QGET failed etcdserver: request timed out (status code 503) 2021-05-07 22:37:51.218210 W | wal: sync duration of 1.928181013s, expected less than 1s 2021-05-07 22:37:51.416500 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1124" took too long (1.853182609s) to execute 2021-05-07 22:37:51.416584 W | etcdserver: request "header: lease_grant:" with result "size:41" took too long (198.111842ms) to execute 2021-05-07 22:37:51.416690 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.642017248s) to execute 2021-05-07 22:37:51.416743 W | etcdserver: read-only range request "key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true " with result "range_response_count:0 size:5" took too long (1.401067343s) to execute 2021-05-07 22:37:51.416764 W | etcdserver: read-only range request "key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true " with result "range_response_count:0 size:7" took too long (1.178023594s) to execute 2021-05-07 22:37:51.416828 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (851.364716ms) to execute 2021-05-07 22:37:52.775857 W | etcdserver: read-only range request 
"key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:644" took too long (1.352241579s) to execute 2021-05-07 22:37:52.776098 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:16" took too long (856.739359ms) to execute 2021-05-07 22:37:52.776237 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.347891753s) to execute 2021-05-07 22:37:52.776307 W | etcdserver: read-only range request "key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true " with result "range_response_count:0 size:5" took too long (942.80348ms) to execute 2021-05-07 22:37:53.322738 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (538.734431ms) to execute 2021-05-07 22:37:55.400253 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.627122641s) to execute 2021-05-07 22:37:55.400293 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:6074" took too long (1.689552783s) to execute 2021-05-07 22:37:55.400376 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (136.502397ms) to execute 2021-05-07 22:37:55.400496 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1124" took too long (616.659642ms) to execute 2021-05-07 22:37:59.681919 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-07 22:38:09.681844 I | etcdserver/api/etcdhttp: /health OK (status code 200) * * ==> kernel <== * 22:38:10 up 3:17, 0 users, load average: 2.93, 2.19, 2.23 Linux auto-20210507223250-391940 4.9.0-15-amd64 #1 SMP Debian 4.9.258-1 (2021-03-08) x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.2 LTS" * * ==> kube-apiserver [fb0ff3e842e4d24a25990c7268e7d46ba312e63b09d16ea1314c75ad31a83666] <== * Trace[1546925961]: ---"Object stored in database" 734ms (22:37:00.420) Trace[1546925961]: [734.777068ms] [734.777068ms] END I0507 22:37:52.776608 1 trace.go:205] Trace[1731633298]: "Get" url:/api/v1/namespaces/default/services/kubernetes,user-agent:kube-apiserver/v1.20.2 (linux/amd64) kubernetes/faecb19,client:127.0.0.1 (07-May-2021 22:37:51.423) (total time: 1353ms): Trace[1731633298]: ---"About to write a response" 1353ms (22:37:00.776) Trace[1731633298]: [1.353412661s] [1.353412661s] END I0507 22:37:52.777104 1 trace.go:205] Trace[1991570143]: "GuaranteedUpdate etcd3" type:*core.Endpoints (07-May-2021 22:37:51.423) (total time: 1353ms): Trace[1991570143]: ---"Transaction committed" 1352ms (22:37:00.776) Trace[1991570143]: [1.353442252s] [1.353442252s] END I0507 22:37:52.777337 1 trace.go:205] Trace[2028283651]: "Update" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.58.2 (07-May-2021 22:37:51.423) (total time: 1354ms): Trace[2028283651]: ---"Object stored in database" 1353ms (22:37:00.777) Trace[2028283651]: [1.35416316s] [1.35416316s] END I0507 22:37:53.323154 1 trace.go:205] Trace[986739973]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (07-May-2021 
22:37:52.777) (total time: 546ms): Trace[986739973]: ---"Transaction committed" 544ms (22:37:00.323) Trace[986739973]: [546.101144ms] [546.101144ms] END I0507 22:37:55.400950 1 trace.go:205] Trace[2144018054]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.58.2 (07-May-2021 22:37:54.783) (total time: 617ms): Trace[2144018054]: ---"About to write a response" 617ms (22:37:00.400) Trace[2144018054]: [617.882022ms] [617.882022ms] END I0507 22:37:55.400980 1 trace.go:205] Trace[343612691]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (07-May-2021 22:37:53.710) (total time: 1690ms): Trace[343612691]: [1.69077776s] [1.69077776s] END I0507 22:37:55.401345 1 trace.go:205] Trace[204732670]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.58.2 (07-May-2021 22:37:53.710) (total time: 1691ms): Trace[204732670]: ---"Listing from storage done" 1690ms (22:37:00.400) Trace[204732670]: [1.691159139s] [1.691159139s] END I0507 22:37:55.719844 1 client.go:360] parsed scheme: "passthrough" I0507 22:37:55.719887 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0507 22:37:55.719895 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * * ==> kube-controller-manager [00e4bddea4a7bb365bc32440cda0c68929d174a0a5899b2e12da531acaaf7de7] <== * I0507 22:33:47.156139 1 range_allocator.go:373] Set node auto-20210507223250-391940 PodCIDR to [10.244.0.0/24] I0507 22:33:47.160396 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2" I0507 22:33:47.241378 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-vmzcw" E0507 22:33:47.246866 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again I0507 22:33:47.248166 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-bb2lr" E0507 22:33:47.248792 1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"6039aca4-36fe-4047-ab09-e18f9c190e1b", ResourceVersion:"284", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63756023612, loc:(*time.Location)(0x6f31360)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", 
"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.mk\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001bcae00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001bcae20)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001bcae40), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001bcae60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001bcae80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001bcaea0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001bcaec0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001bcaf00)}, 
v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000a88de0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0014acfc8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000570d20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00011a348)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0014ad020)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again E0507 22:33:47.248803 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set 
&v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"a58134da-a155-46d1-ae14-fcd45de47ea7", ResourceVersion:"270", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63756023612, loc:(*time.Location)(0x6f31360)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001bcace0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001bcad00)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001bcad20), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0017c6c80), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001bcad40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), 
DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001bcad60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001bcada0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000a88d80), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0014acd68), 
ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000570690), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00011a340)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0014acdc8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again I0507 22:33:47.332179 1 shared_informer.go:247] Caches are synced for endpoint I0507 22:33:47.332708 1 shared_informer.go:247] Caches are synced for resource quota I0507 22:33:47.341245 1 shared_informer.go:247] Caches are synced for expand I0507 22:33:47.348896 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring I0507 22:33:47.349256 1 shared_informer.go:247] Caches are synced for resource quota I0507 22:33:47.349702 1 shared_informer.go:247] Caches are synced for persistent volume I0507 22:33:47.355729 1 shared_informer.go:247] Caches are synced for attach detach I0507 22:33:47.399350 1 shared_informer.go:247] Caches are synced for PV protection I0507 22:33:47.503750 1 shared_informer.go:240] Waiting for caches to sync for garbage collector I0507 22:33:47.644068 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1" I0507 22:33:47.652354 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-vmzcw" I0507 22:33:47.799147 1 shared_informer.go:247] Caches are synced for garbage collector I0507 22:33:47.799172 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I0507 22:33:47.803946 1 shared_informer.go:247] Caches are synced for garbage collector I0507 22:34:02.092717 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode. 
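A note on the daemon_controller.go failures above: "Operation cannot be fulfilled ... the object has been modified" is an ordinary optimistic-concurrency conflict. kubeadm and the controller raced to write the same DaemonSet status, the loser's resourceVersion went stale, and the controller retries on its own, so these errors are benign during bootstrap. The standard client-go remedy for such conflicts in your own tooling is retry.RetryOnConflict; the sketch below is illustrative only (the kubeconfig path and the annotation it writes are assumptions, not anything this test does):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    func main() {
        // Hedged sketch: re-read and re-apply a change whenever the apiserver
        // rejects an update because our resourceVersion went stale.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
            // Fetch a fresh copy on every attempt so the resourceVersion is current.
            ds, err := cs.AppsV1().DaemonSets("kube-system").Get(context.TODO(), "kindnet", metav1.GetOptions{})
            if err != nil {
                return err
            }
            if ds.Annotations == nil {
                ds.Annotations = map[string]string{}
            }
            ds.Annotations["example/touched"] = "true" // illustrative mutation
            _, err = cs.AppsV1().DaemonSets("kube-system").Update(context.TODO(), ds, metav1.UpdateOptions{})
            return err
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("updated without stale-resourceVersion conflicts")
    }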
I0507 22:35:18.617820 1 event.go:291] "Event occurred" object="default/netcat" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set netcat-66fbc655d5 to 1" I0507 22:35:18.624778 1 event.go:291] "Event occurred" object="default/netcat-66fbc655d5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: netcat-66fbc655d5-pf5zj" I0507 22:35:18.636431 1 event.go:291] "Event occurred" object="netcat" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service default/netcat: endpoints \"netcat\" already exists" * * ==> kube-proxy [b0ab7db293a15434a96f8337f7c9972db4464f9b33d897ededdc80efd8d332ac] <== * I0507 22:33:47.947336 1 node.go:172] Successfully retrieved node IP: 192.168.58.2 I0507 22:33:47.947425 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.58.2), assume IPv4 operation W0507 22:33:47.960773 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy I0507 22:33:47.960852 1 server_others.go:185] Using iptables Proxier. I0507 22:33:47.961080 1 server.go:650] Version: v1.20.2 I0507 22:33:47.961496 1 conntrack.go:52] Setting nf_conntrack_max to 262144 I0507 22:33:47.961601 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 I0507 22:33:47.961659 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 I0507 22:33:47.961834 1 config.go:315] Starting service config controller I0507 22:33:47.961845 1 shared_informer.go:240] Waiting for caches to sync for service config I0507 22:33:47.961952 1 config.go:224] Starting endpoint slice config controller I0507 22:33:47.962006 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config I0507 22:33:48.062003 1 shared_informer.go:247] Caches are synced for service config I0507 22:33:48.062057 1 shared_informer.go:247] Caches are synced for endpoint slice config * * ==> kube-scheduler [3e47671ee59bd4504d282cf9862ecb1adaa396100b4334978b5061244a845022] <== * E0507 22:33:23.900734 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0507 22:33:23.901236 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0507 22:33:23.994986 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0507 22:33:24.055980 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0507 22:33:24.165769 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is 
forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0507 22:33:24.176492 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0507 22:33:24.218951 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0507 22:33:24.346354 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0507 22:33:25.269450 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0507 22:33:25.430277 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0507 22:33:25.705660 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0507 22:33:25.763787 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0507 22:33:25.772210 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0507 22:33:25.790261 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0507 22:33:26.583607 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0507 22:33:26.814825 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0507 22:33:26.966180 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" 
cannot list resource "pods" in API group "" at the cluster scope E0507 22:33:26.980653 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0507 22:33:27.058766 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0507 22:33:27.177627 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0507 22:33:29.449920 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0507 22:33:29.697768 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0507 22:33:29.983428 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0507 22:33:30.233451 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope I0507 22:33:30.861809 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * -- Logs begin at Fri 2021-05-07 22:32:52 UTC, end at Fri 2021-05-07 22:38:10 UTC. 
-- May 07 22:33:38 auto-20210507223250-391940 kubelet[1186]: I0507 22:33:38.061305 1186 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/57b8c22dbe6410e4bd36cf14b0f8bdc7-etc-ca-certificates") pod "kube-controller-manager-auto-20210507223250-391940" (UID: "57b8c22dbe6410e4bd36cf14b0f8bdc7") May 07 22:33:38 auto-20210507223250-391940 kubelet[1186]: I0507 22:33:38.061318 1186 reconciler.go:157] Reconciler: start to sync state May 07 22:33:42 auto-20210507223250-391940 kubelet[1186]: E0507 22:33:42.765945 1186 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized May 07 22:33:47 auto-20210507223250-391940 kubelet[1186]: I0507 22:33:47.158945 1186 topology_manager.go:187] [topologymanager] Topology Admit Handler May 07 22:33:47 auto-20210507223250-391940 kubelet[1186]: I0507 22:33:47.159158 1186 topology_manager.go:187] [topologymanager] Topology Admit Handler May 07 22:33:47 auto-20210507223250-391940 kubelet[1186]: I0507 22:33:47.232570 1186 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.244.0.0/24 May 07 22:33:47 auto-20210507223250-391940 kubelet[1186]: I0507 22:33:47.233180 1186 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/bc273d36-20c1-4ea1-908e-e09f6965a67b-xtables-lock") pod "kube-proxy-qppjx" (UID: "bc273d36-20c1-4ea1-908e-e09f6965a67b") May 07 22:33:47 auto-20210507223250-391940 kubelet[1186]: I0507 22:33:47.233228 1186 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/bc273d36-20c1-4ea1-908e-e09f6965a67b-lib-modules") pod "kube-proxy-qppjx" (UID: "bc273d36-20c1-4ea1-908e-e09f6965a67b") May 07 22:33:47 auto-20210507223250-391940 kubelet[1186]: I0507 22:33:47.233259 1186 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-blppp" (UniqueName: "kubernetes.io/secret/bc273d36-20c1-4ea1-908e-e09f6965a67b-kube-proxy-token-blppp") pod "kube-proxy-qppjx" (UID: "bc273d36-20c1-4ea1-908e-e09f6965a67b") May 07 22:33:47 auto-20210507223250-391940 kubelet[1186]: I0507 22:33:47.233283 1186 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/a5bcf20b-84e8-4fca-88dd-4640c11d4415-xtables-lock") pod "kindnet-vxx2n" (UID: "a5bcf20b-84e8-4fca-88dd-4640c11d4415") May 07 22:33:47 auto-20210507223250-391940 kubelet[1186]: I0507 22:33:47.233304 1186 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/bc273d36-20c1-4ea1-908e-e09f6965a67b-kube-proxy") pod "kube-proxy-qppjx" (UID: "bc273d36-20c1-4ea1-908e-e09f6965a67b") May 07 22:33:47 auto-20210507223250-391940 kubelet[1186]: I0507 22:33:47.233349 1186 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/a5bcf20b-84e8-4fca-88dd-4640c11d4415-cni-cfg") pod "kindnet-vxx2n" (UID: "a5bcf20b-84e8-4fca-88dd-4640c11d4415") May 07 22:33:47 auto-20210507223250-391940 kubelet[1186]: I0507 22:33:47.233392 1186 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: 
"kubernetes.io/host-path/a5bcf20b-84e8-4fca-88dd-4640c11d4415-lib-modules") pod "kindnet-vxx2n" (UID: "a5bcf20b-84e8-4fca-88dd-4640c11d4415") May 07 22:33:47 auto-20210507223250-391940 kubelet[1186]: I0507 22:33:47.233430 1186 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-l5wfh" (UniqueName: "kubernetes.io/secret/a5bcf20b-84e8-4fca-88dd-4640c11d4415-kindnet-token-l5wfh") pod "kindnet-vxx2n" (UID: "a5bcf20b-84e8-4fca-88dd-4640c11d4415") May 07 22:33:47 auto-20210507223250-391940 kubelet[1186]: I0507 22:33:47.233593 1186 kubelet_network.go:77] Setting Pod CIDR: -> 10.244.0.0/24 May 07 22:33:47 auto-20210507223250-391940 kubelet[1186]: E0507 22:33:47.234002 1186 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized May 07 22:33:47 auto-20210507223250-391940 kubelet[1186]: E0507 22:33:47.766508 1186 kubelet.go:2163] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized May 07 22:34:01 auto-20210507223250-391940 kubelet[1186]: I0507 22:34:01.844244 1186 topology_manager.go:187] [topologymanager] Topology Admit Handler May 07 22:34:01 auto-20210507223250-391940 kubelet[1186]: I0507 22:34:01.848425 1186 topology_manager.go:187] [topologymanager] Topology Admit Handler May 07 22:34:01 auto-20210507223250-391940 kubelet[1186]: I0507 22:34:01.961135 1186 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/358d70de-7a3a-4f75-ba19-e96e4a4109e6-tmp") pod "storage-provisioner" (UID: "358d70de-7a3a-4f75-ba19-e96e4a4109e6") May 07 22:34:01 auto-20210507223250-391940 kubelet[1186]: I0507 22:34:01.961181 1186 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-w5p7x" (UniqueName: "kubernetes.io/secret/63279575-299c-49e2-993b-60eb06751f8c-coredns-token-w5p7x") pod "coredns-74ff55c5b-bb2lr" (UID: "63279575-299c-49e2-993b-60eb06751f8c") May 07 22:34:01 auto-20210507223250-391940 kubelet[1186]: I0507 22:34:01.961286 1186 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/63279575-299c-49e2-993b-60eb06751f8c-config-volume") pod "coredns-74ff55c5b-bb2lr" (UID: "63279575-299c-49e2-993b-60eb06751f8c") May 07 22:34:01 auto-20210507223250-391940 kubelet[1186]: I0507 22:34:01.961329 1186 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-7mpt9" (UniqueName: "kubernetes.io/secret/358d70de-7a3a-4f75-ba19-e96e4a4109e6-storage-provisioner-token-7mpt9") pod "storage-provisioner" (UID: "358d70de-7a3a-4f75-ba19-e96e4a4109e6") May 07 22:35:18 auto-20210507223250-391940 kubelet[1186]: I0507 22:35:18.629131 1186 topology_manager.go:187] [topologymanager] Topology Admit Handler May 07 22:35:18 auto-20210507223250-391940 kubelet[1186]: I0507 22:35:18.807945 1186 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-nvpc4" (UniqueName: "kubernetes.io/secret/5f4d13ec-95dd-4fdf-bf14-6173d8bbb162-default-token-nvpc4") pod "netcat-66fbc655d5-pf5zj" (UID: "5f4d13ec-95dd-4fdf-bf14-6173d8bbb162") * * ==> storage-provisioner [775eafe9c39b9c83b8edea78d73bea0fae97d05ea87625bacf6c6147e4646ff8] <== * I0507 22:34:02.502951 1 storage_provisioner.go:116] 
Initializing the minikube storage provisioner... I0507 22:34:02.511810 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service! I0507 22:34:02.511871 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... I0507 22:34:02.528985 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath I0507 22:34:02.529078 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"732c1b46-f958-4da2-ae50-f8bdd6800147", APIVersion:"v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' auto-20210507223250-391940_19e1b62c-4a72-400f-a056-ba32331c5eaf became leader I0507 22:34:02.529143 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_auto-20210507223250-391940_19e1b62c-4a72-400f-a056-ba32331c5eaf! I0507 22:34:02.630089 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_auto-20210507223250-391940_19e1b62c-4a72-400f-a056-ba32331c5eaf! -- /stdout -- helpers_test.go:250: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p auto-20210507223250-391940 -n auto-20210507223250-391940 helpers_test.go:257: (dbg) Run: kubectl --context auto-20210507223250-391940 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running helpers_test.go:263: non-running pods: helpers_test.go:265: ======> post-mortem[TestNetworkPlugins/group/auto]: describe non-running pods <====== helpers_test.go:268: (dbg) Run: kubectl --context auto-20210507223250-391940 describe pod helpers_test.go:268: (dbg) Non-zero exit: kubectl --context auto-20210507223250-391940 describe pod : exit status 1 (50.218669ms) ** stderr ** error: resource name may not be empty ** /stderr ** helpers_test.go:270: kubectl --context auto-20210507223250-391940 describe pod : exit status 1 helpers_test.go:171: Cleaning up "auto-20210507223250-391940" profile ... 
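Note on the `kubectl describe pod : exit status 1 (resource name may not be empty)` failure above: the post-mortem helper found no non-running pods (the field selector matched nothing), then invoked `kubectl describe pod` with an empty name list, so the non-zero exit is spurious noise rather than a test problem. A hedged sketch of a guard that would avoid it (this helper and its parameters are hypothetical, not the actual helpers_test.go code):

```go
package helpers

import (
	"os/exec"
	"strings"
	"testing"
)

// describeNonRunning is a hypothetical variant of the post-mortem helper:
// podNames is assumed to hold the jsonpath output of the `kubectl get po
// --field-selector=status.phase!=Running` call shown above. It only runs
// `kubectl describe pod` when that query actually returned names.
func describeNonRunning(t *testing.T, profile, podNames string) {
	names := strings.Fields(podNames)
	if len(names) == 0 {
		t.Log("no non-running pods to describe")
		return
	}
	args := append([]string{"--context", profile, "describe", "pod"}, names...)
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	t.Logf("kubectl describe pod: err=%v\n%s", err, out)
}
```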
helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p auto-20210507223250-391940 helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p auto-20210507223250-391940: (3.184611071s) === CONT TestNetworkPlugins/group/enable-default-cni === RUN TestNetworkPlugins/group/enable-default-cni/Start net_test.go:83: (dbg) Run: out/minikube-linux-amd64 start -p enable-default-cni-20210507223814-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker --container-runtime=containerd E0507 22:38:17.776884 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory E0507 22:38:31.347016 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory E0507 22:39:53.267873 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory === CONT TestNetworkPlugins/group/calico/Start net_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p calico-20210507223733-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker --container-runtime=containerd: (2m25.495958291s) === RUN TestNetworkPlugins/group/calico/ControllerPod net_test.go:91: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ... helpers_test.go:335: "calico-node-974cx" [4a5d76fa-338c-48af-b63e-5aed723b340a] Running E0507 22:40:02.457788 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory net_test.go:91: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.013224565s === RUN TestNetworkPlugins/group/calico/KubeletFlags net_test.go:99: (dbg) Run: out/minikube-linux-amd64 ssh -p calico-20210507223733-391940 "pgrep -a kubelet" === RUN TestNetworkPlugins/group/calico/NetCatPod net_test.go:113: (dbg) Run: kubectl --context calico-20210507223733-391940 replace --force -f testdata/netcat-deployment.yaml net_test.go:127: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ... 
helpers_test.go:335: "netcat-66fbc655d5-7mkfm" [1f41ae41-a57c-47ce-9f53-9619f2c92a69] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils]) helpers_test.go:335: "netcat-66fbc655d5-7mkfm" [1f41ae41-a57c-47ce-9f53-9619f2c92a69] Running === CONT TestNetworkPlugins/group/custom-weave/Start net_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p custom-weave-20210507223739-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker --container-runtime=containerd: (2m32.840035122s) === RUN TestNetworkPlugins/group/custom-weave/KubeletFlags net_test.go:99: (dbg) Run: out/minikube-linux-amd64 ssh -p custom-weave-20210507223739-391940 "pgrep -a kubelet" === RUN TestNetworkPlugins/group/custom-weave/NetCatPod net_test.go:113: (dbg) Run: kubectl --context custom-weave-20210507223739-391940 replace --force -f testdata/netcat-deployment.yaml net_test.go:127: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ... helpers_test.go:335: "netcat-66fbc655d5-7pqvk" [631065b7-2f0e-407c-ab45-42ea8e1480e3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils]) === CONT TestNetworkPlugins/group/calico/NetCatPod net_test.go:127: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.006166247s === RUN TestNetworkPlugins/group/calico/DNS net_test.go:144: (dbg) Run: kubectl --context calico-20210507223733-391940 exec deployment/netcat -- nslookup kubernetes.default === RUN TestNetworkPlugins/group/calico/Localhost net_test.go:163: (dbg) Run: kubectl --context calico-20210507223733-391940 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080" === RUN TestNetworkPlugins/group/calico/HairPin net_test.go:176: (dbg) Run: kubectl --context calico-20210507223733-391940 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080" === CONT TestNetworkPlugins/group/calico net_test.go:192: "calico" test finished in 19m39.587437955s, failed=false helpers_test.go:171: Cleaning up "calico-20210507223733-391940" profile ... 
helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p calico-20210507223733-391940 === CONT TestNetworkPlugins/group/custom-weave/NetCatPod helpers_test.go:335: "netcat-66fbc655d5-7pqvk" [631065b7-2f0e-407c-ab45-42ea8e1480e3] Running === CONT TestNetworkPlugins/group/calico helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p calico-20210507223733-391940: (3.481644895s) === CONT TestNetworkPlugins/group/kindnet === RUN TestNetworkPlugins/group/kindnet/Start net_test.go:83: (dbg) Run: out/minikube-linux-amd64 start -p kindnet-20210507224017-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker --container-runtime=containerd E0507 22:40:18.629725 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory E0507 22:40:18.635217 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory E0507 22:40:18.645446 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory E0507 22:40:18.665610 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory E0507 22:40:18.705956 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory E0507 22:40:18.786084 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory E0507 22:40:18.946829 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory E0507 22:40:19.267563 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory E0507 22:40:19.907783 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory === CONT TestNetworkPlugins/group/custom-weave/NetCatPod net_test.go:127: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 8.005958535s === CONT TestNetworkPlugins/group/custom-weave net_test.go:135: skipping remaining tests for weave, as results can be unpredictable helpers_test.go:171: Cleaning up 
"custom-weave-20210507223739-391940" profile ... helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p custom-weave-20210507223739-391940 E0507 22:40:21.188702 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory E0507 22:40:23.748977 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p custom-weave-20210507223739-391940: (3.428843266s) === CONT TestNetworkPlugins/group/bridge === RUN TestNetworkPlugins/group/bridge/Start net_test.go:83: (dbg) Run: out/minikube-linux-amd64 start -p bridge-20210507224024-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker --container-runtime=containerd E0507 22:40:28.869646 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory === CONT TestNetworkPlugins/group/enable-default-cni/Start net_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20210507223814-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker --container-runtime=containerd: (2m16.158432203s) === RUN TestNetworkPlugins/group/enable-default-cni/KubeletFlags net_test.go:99: (dbg) Run: out/minikube-linux-amd64 ssh -p enable-default-cni-20210507223814-391940 "pgrep -a kubelet" === RUN TestNetworkPlugins/group/enable-default-cni/NetCatPod net_test.go:113: (dbg) Run: kubectl --context enable-default-cni-20210507223814-391940 replace --force -f testdata/netcat-deployment.yaml net_test.go:127: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ... 
helpers_test.go:335: "netcat-66fbc655d5-k5kh6" [16b980be-8de4-4276-bb02-9b3c4da10396] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils]) E0507 22:40:39.110528 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory helpers_test.go:335: "netcat-66fbc655d5-k5kh6" [16b980be-8de4-4276-bb02-9b3c4da10396] Running net_test.go:127: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 18.005828957s === RUN TestNetworkPlugins/group/enable-default-cni/DNS net_test.go:144: (dbg) Run: kubectl --context enable-default-cni-20210507223814-391940 exec deployment/netcat -- nslookup kubernetes.default === RUN TestNetworkPlugins/group/enable-default-cni/Localhost net_test.go:163: (dbg) Run: kubectl --context enable-default-cni-20210507223814-391940 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080" === RUN TestNetworkPlugins/group/enable-default-cni/HairPin net_test.go:176: (dbg) Run: kubectl --context enable-default-cni-20210507223814-391940 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080" === CONT TestNetworkPlugins/group/enable-default-cni net_test.go:192: "enable-default-cni" test finished in 20m15.662441778s, failed=false helpers_test.go:171: Cleaning up "enable-default-cni-20210507223814-391940" profile ... helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p enable-default-cni-20210507223814-391940 helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p enable-default-cni-20210507223814-391940: (3.102776844s) === CONT TestNetworkPlugins/group/kubenet === RUN TestNetworkPlugins/group/kubenet/Start net_test.go:83: (dbg) Run: out/minikube-linux-amd64 start -p kubenet-20210507224052-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker --container-runtime=containerd E0507 22:40:59.590886 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory E0507 22:41:40.551646 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory E0507 22:41:52.729649 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory E0507 22:41:59.411251 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory E0507 22:42:09.423314 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file 
or directory E0507 22:42:15.754789 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory E0507 22:42:15.760089 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory E0507 22:42:15.770281 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory E0507 22:42:15.790588 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory E0507 22:42:15.830849 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory E0507 22:42:15.911176 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory E0507 22:42:16.071610 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory E0507 22:42:16.392294 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory E0507 22:42:17.032706 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory E0507 22:42:18.313259 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory === CONT TestNetworkPlugins/group/kindnet/Start net_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20210507224017-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker --container-runtime=containerd: (2m2.602931853s) === RUN TestNetworkPlugins/group/kindnet/ControllerPod net_test.go:91: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ... 
helpers_test.go:335: "kindnet-q67jp" [fa4108a6-8fc0-4ba5-ba81-ea32d753a85a] Running E0507 22:42:20.874384 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory net_test.go:91: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.012414354s === RUN TestNetworkPlugins/group/kindnet/KubeletFlags net_test.go:99: (dbg) Run: out/minikube-linux-amd64 ssh -p kindnet-20210507224017-391940 "pgrep -a kubelet" === RUN TestNetworkPlugins/group/kindnet/NetCatPod net_test.go:113: (dbg) Run: kubectl --context kindnet-20210507224017-391940 replace --force -f testdata/netcat-deployment.yaml net_test.go:127: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ... helpers_test.go:335: "netcat-66fbc655d5-mzscd" [1142810f-fc34-41de-9571-e809f058e5dc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils]) E0507 22:42:25.994891 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory helpers_test.go:335: "netcat-66fbc655d5-mzscd" [1142810f-fc34-41de-9571-e809f058e5dc] Running net_test.go:127: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004899415s === RUN TestNetworkPlugins/group/kindnet/DNS net_test.go:144: (dbg) Run: kubectl --context kindnet-20210507224017-391940 exec deployment/netcat -- nslookup kubernetes.default === RUN TestNetworkPlugins/group/kindnet/Localhost net_test.go:163: (dbg) Run: kubectl --context kindnet-20210507224017-391940 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080" === RUN TestNetworkPlugins/group/kindnet/HairPin net_test.go:176: (dbg) Run: kubectl --context kindnet-20210507224017-391940 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080" === CONT TestNetworkPlugins/group/kindnet net_test.go:192: "kindnet" test finished in 22m0.674974638s, failed=false helpers_test.go:171: Cleaning up "kindnet-20210507224017-391940" profile ... 
helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p kindnet-20210507224017-391940
E0507 22:42:36.235423 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory
E0507 22:42:37.108954 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p kindnet-20210507224017-391940: (3.179614845s)
--- PASS: TestStartStop (1323.86s)
    --- PASS: TestStartStop/group (0.00s)
        --- PASS: TestStartStop/group/no-preload (191.37s)
            --- PASS: TestStartStop/group/no-preload/serial (190.89s)
                --- PASS: TestStartStop/group/no-preload/serial/FirstStart (74.58s)
                --- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.49s)
                --- PASS: TestStartStop/group/no-preload/serial/Stop (20.66s)
                --- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)
                --- PASS: TestStartStop/group/no-preload/serial/SecondStart (70.07s)
                --- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)
                --- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.01s)
                --- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)
                --- PASS: TestStartStop/group/no-preload/serial/Pause (2.48s)
        --- SKIP: TestStartStop/group/disable-driver-mounts (0.57s)
        --- PASS: TestStartStop/group/old-k8s-version (300.91s)
            --- PASS: TestStartStop/group/old-k8s-version/serial (300.34s)
                --- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (132.55s)
                --- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.59s)
                --- PASS: TestStartStop/group/old-k8s-version/serial/Stop (20.88s)
                --- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
                --- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (117.98s)
                --- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)
                --- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.03s)
                --- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (1.35s)
                --- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.60s)
        --- PASS: TestStartStop/group/newest-cni (141.39s)
            --- PASS: TestStartStop/group/newest-cni/serial (140.80s)
                --- PASS: TestStartStop/group/newest-cni/serial/FirstStart (65.87s)
                --- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
                --- PASS: TestStartStop/group/newest-cni/serial/Stop (1.36s)
                --- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)
                --- PASS: TestStartStop/group/newest-cni/serial/SecondStart (67.80s)
                --- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
                --- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
                --- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)
                --- PASS: TestStartStop/group/newest-cni/serial/Pause (2.25s)
        --- PASS: TestStartStop/group/embed-certs (292.43s)
            --- PASS: TestStartStop/group/embed-certs/serial (291.88s)
                --- PASS: TestStartStop/group/embed-certs/serial/FirstStart (133.81s)
                --- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.39s)
                --- PASS: TestStartStop/group/embed-certs/serial/Stop (20.94s)
                --- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)
                --- PASS: TestStartStop/group/embed-certs/serial/SecondStart (110.47s)
                --- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.72s)
                --- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.31s)
                --- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)
                --- PASS: TestStartStop/group/embed-certs/serial/Pause (2.55s)
        --- PASS: TestStartStop/group/default-k8s-different-port (312.96s)
            --- PASS: TestStartStop/group/default-k8s-different-port/serial (312.39s)
                --- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (146.99s)
                --- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (9.54s)
                --- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (25.25s)
                --- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.22s)
                --- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (114.27s)
                --- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.01s)
                --- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.01s)
                --- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.30s)
                --- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (2.56s)
E0507 22:42:40.847864 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
E0507 22:42:56.716042 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory
E0507 22:43:02.472583 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory
=== CONT TestNetworkPlugins/group/bridge/Start
net_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20210507224024-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker --container-runtime=containerd: (2m43.090043803s)
=== RUN TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:99: (dbg) Run: out/minikube-linux-amd64 ssh -p bridge-20210507224024-391940 "pgrep -a kubelet"
=== RUN TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:113: (dbg) Run: kubectl --context bridge-20210507224024-391940 replace --force -f testdata/netcat-deployment.yaml
net_test.go:127: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-99v59" [28914f99-60e6-4041-b4f6-790f314b7988] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils]) helpers_test.go:335: "netcat-66fbc655d5-99v59" [28914f99-60e6-4041-b4f6-790f314b7988] Running net_test.go:127: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.006212618s === RUN TestNetworkPlugins/group/bridge/DNS net_test.go:144: (dbg) Run: kubectl --context bridge-20210507224024-391940 exec deployment/netcat -- nslookup kubernetes.default === RUN TestNetworkPlugins/group/bridge/Localhost net_test.go:163: (dbg) Run: kubectl --context bridge-20210507224024-391940 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080" === RUN TestNetworkPlugins/group/bridge/HairPin net_test.go:176: (dbg) Run: kubectl --context bridge-20210507224024-391940 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080" === CONT TestNetworkPlugins/group/bridge net_test.go:192: "bridge" test finished in 22m42.061924559s, failed=false helpers_test.go:171: Cleaning up "bridge-20210507224024-391940" profile ... helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p bridge-20210507224024-391940 E0507 22:43:17.777301 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p bridge-20210507224024-391940: (2.930410727s) E0507 22:43:37.677140 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory === CONT TestNetworkPlugins/group/false/Start net_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-20210507223341-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker --container-runtime=containerd: exit status 80 (10m37.42338438s) -- stdout -- * [false-20210507223341-391940] minikube v1.20.0 on Debian 9.13 (kvm/amd64) - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig - MINIKUBE_BIN=out/minikube-linux-amd64 - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube - MINIKUBE_LOCATION=master * Using the docker driver based on user configuration - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities * Starting control plane node false-20210507223341-391940 in cluster false-20210507223341-391940 * Pulling base image ... * Creating docker container (CPUs=2, Memory=2048MB) ... * Preparing Kubernetes v1.20.2 on containerd 1.4.4 ... - Generating certificates and keys ... - Booting up control plane ... - Configuring RBAC rules ... * Verifying Kubernetes components... - Using image gcr.io/k8s-minikube/storage-provisioner:v5 * Enabled addons: storage-provisioner, default-storageclass -- /stdout -- ** stderr ** I0507 22:33:41.671845 634245 out.go:291] Setting OutFile to fd 1 ... 
I0507 22:33:41.672046 634245 out.go:338] TERM=,COLORTERM=, which probably does not support color I0507 22:33:41.672056 634245 out.go:304] Setting ErrFile to fd 2... I0507 22:33:41.672061 634245 out.go:338] TERM=,COLORTERM=, which probably does not support color I0507 22:33:41.672166 634245 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/bin I0507 22:33:41.672433 634245 out.go:298] Setting JSON to false I0507 22:33:41.711668 634245 start.go:108] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":11589,"bootTime":1620415232,"procs":319,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"} I0507 22:33:41.711780 634245 start.go:118] virtualization: kvm guest I0507 22:33:41.714469 634245 out.go:170] * [false-20210507223341-391940] minikube v1.20.0 on Debian 9.13 (kvm/amd64) I0507 22:33:41.716167 634245 out.go:170] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig I0507 22:33:41.717699 634245 out.go:170] - MINIKUBE_BIN=out/minikube-linux-amd64 I0507 22:33:41.719186 634245 out.go:170] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube I0507 22:33:41.720587 634245 out.go:170] - MINIKUBE_LOCATION=master I0507 22:33:41.721258 634245 driver.go:322] Setting default libvirt URI to qemu:///system I0507 22:33:41.768773 634245 docker.go:119] docker version: linux-19.03.15 I0507 22:33:41.768875 634245 cli_runner.go:115] Run: docker system info --format "{{json .}}" I0507 22:33:41.848661 634245 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:70 SystemTime:2021-05-07 22:33:41.804081662 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: 
InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}} I0507 22:33:41.848754 634245 docker.go:225] overlay module found I0507 22:33:41.850975 634245 out.go:170] * Using the docker driver based on user configuration I0507 22:33:41.851009 634245 start.go:276] selected driver: docker I0507 22:33:41.851014 634245 start.go:718] validating driver "docker" against I0507 22:33:41.851041 634245 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:} W0507 22:33:41.851085 634245 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. W0507 22:33:41.851095 634245 out.go:424] no arguments passed for "! Your cgroup does not allow setting memory.\n" - returning raw string W0507 22:33:41.851110 634245 out.go:235] ! Your cgroup does not allow setting memory. ! Your cgroup does not allow setting memory. W0507 22:33:41.851118 634245 out.go:424] no arguments passed for " - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities\n" - returning raw string I0507 22:33:41.852536 634245 out.go:170] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0507 22:33:41.853360 634245 cli_runner.go:115] Run: docker system info --format "{{json .}}" I0507 22:33:41.931700 634245 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:70 SystemTime:2021-05-07 22:33:41.888267241 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init 
ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}} I0507 22:33:41.931825 634245 start_flags.go:259] no existing cluster config was found, will generate one from the flags I0507 22:33:41.932032 634245 start_flags.go:733] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] I0507 22:33:41.932082 634245 cni.go:93] Creating CNI manager for "false" I0507 22:33:41.932094 634245 start_flags.go:273] config: {Name:false-20210507223341-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:false-20210507223341-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} I0507 22:33:41.934456 634245 out.go:170] * Starting control plane node false-20210507223341-391940 in cluster false-20210507223341-391940 I0507 22:33:41.934501 634245 cache.go:111] Beginning downloading kic base image for docker with containerd W0507 22:33:41.934512 634245 out.go:424] no arguments passed for "* Pulling base image ...\n" - returning raw string W0507 22:33:41.934540 634245 out.go:424] no arguments passed for "* Pulling base image ...\n" - returning raw string I0507 22:33:41.936106 634245 out.go:170] * Pulling base image ... 
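Note on this false/Start run: the exit status 80 after the full wait is consistent with `--cni=false` on a containerd cluster, where no network plugin gets installed, kubelet reports NetworkReady=false ("cni plugin not initialized", as seen in the kubelet log earlier for another profile), and `--wait=true` can never observe a healthy cluster. A hedged client-go sketch of the readiness condition that such a wait effectively polls (clientset construction omitted; this shows the API shape, not minikube's actual wait code):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodesReady reports whether every node's Ready condition is True; with no
// CNI installed it stays False with reason NetworkPluginNotReady.
func nodesReady(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
				fmt.Printf("%s not Ready: %s\n", n.Name, c.Reason)
				return false, nil
			}
		}
	}
	return true, nil
}
```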
I0507 22:33:41.936144 634245 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd I0507 22:33:41.936172 634245 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4 I0507 22:33:41.936183 634245 cache.go:54] Caching tarball of preloaded images I0507 22:33:41.936194 634245 preload.go:132] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download I0507 22:33:41.936201 634245 cache.go:57] Finished verifying existence of preloaded tar for v1.20.2 on containerd I0507 22:33:41.936254 634245 image.go:116] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory I0507 22:33:41.936279 634245 image.go:119] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory, skipping pull I0507 22:33:41.936286 634245 cache.go:131] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in cache, skipping pull I0507 22:33:41.936286 634245 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/config.json ... I0507 22:33:41.936312 634245 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/config.json: {Name:mk23ccd7c8d362b864a360f03438469e7fe31500 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:33:41.936323 634245 image.go:130] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon I0507 22:33:42.013749 634245 image.go:134] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon, skipping pull I0507 22:33:42.013781 634245 cache.go:155] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in daemon, skipping pull I0507 22:33:42.013802 634245 cache.go:194] Successfully downloaded all kic artifacts I0507 22:33:42.013839 634245 start.go:313] acquiring machines lock for false-20210507223341-391940: {Name:mk7a6d8cf53705fef8003241594dc2d2b6aceaa5 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0507 22:33:42.013993 634245 start.go:317] acquired machines lock for "false-20210507223341-391940" in 129.802µs I0507 22:33:42.014032 634245 start.go:89] Provisioning new machine with config: &{Name:false-20210507223341-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false 
HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:false-20210507223341-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true} I0507 22:33:42.014139 634245 start.go:126] createHost starting for "" (driver="docker") I0507 22:33:42.017083 634245 out.go:197] * Creating docker container (CPUs=2, Memory=2048MB) ... I0507 22:33:42.017373 634245 start.go:160] libmachine.API.Create for "false-20210507223341-391940" (driver="docker") I0507 22:33:42.017412 634245 client.go:168] LocalClient.Create starting I0507 22:33:42.017494 634245 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem I0507 22:33:42.017533 634245 main.go:128] libmachine: Decoding PEM data... I0507 22:33:42.017560 634245 main.go:128] libmachine: Parsing certificate... I0507 22:33:42.017747 634245 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem I0507 22:33:42.017775 634245 main.go:128] libmachine: Decoding PEM data... I0507 22:33:42.017795 634245 main.go:128] libmachine: Parsing certificate... I0507 22:33:42.018231 634245 cli_runner.go:115] Run: docker network inspect false-20210507223341-391940 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0507 22:33:42.060979 634245 cli_runner.go:162] docker network inspect false-20210507223341-391940 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0507 22:33:42.061043 634245 network_create.go:249] running [docker network inspect false-20210507223341-391940] to gather additional debugging logs... 
I0507 22:33:42.061062 634245 cli_runner.go:115] Run: docker network inspect false-20210507223341-391940
W0507 22:33:42.098833 634245 cli_runner.go:162] docker network inspect false-20210507223341-391940 returned with exit code 1
I0507 22:33:42.098863 634245 network_create.go:252] error running [docker network inspect false-20210507223341-391940]: docker network inspect false-20210507223341-391940: exit status 1
stdout:
[]

stderr:
Error: No such network: false-20210507223341-391940
I0507 22:33:42.098875 634245 network_create.go:254] output of [docker network inspect false-20210507223341-391940]:
-- stdout --
[]

-- /stdout --

** stderr **
Error: No such network: false-20210507223341-391940

** /stderr **
I0507 22:33:42.098926 634245 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0507 22:33:42.136375 634245 network.go:215] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-b7a55e9e83b1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:be:99:f6:89}}
I0507 22:33:42.137655 634245 network.go:215] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-d814ab98e4bf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:cf:75:be:bd}}
I0507 22:33:42.138675 634245 network.go:263] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc000010dc0] misses:0}
I0507 22:33:42.138709 634245 network.go:210] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0507 22:33:42.138734 634245 network_create.go:100] attempt to create docker network false-20210507223341-391940 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I0507 22:33:42.138776 634245 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20210507223341-391940 I0507 22:33:42.210504 634245 network_create.go:84] docker network false-20210507223341-391940 192.168.67.0/24 created I0507 22:33:42.210539 634245 kic.go:106] calculated static IP "192.168.67.2" for the "false-20210507223341-391940" container I0507 22:33:42.210589 634245 cli_runner.go:115] Run: docker ps -a --format {{.Names}} I0507 22:33:42.249117 634245 cli_runner.go:115] Run: docker volume create false-20210507223341-391940 --label name.minikube.sigs.k8s.io=false-20210507223341-391940 --label created_by.minikube.sigs.k8s.io=true I0507 22:33:42.287717 634245 oci.go:102] Successfully created a docker volume false-20210507223341-391940 I0507 22:33:42.287789 634245 cli_runner.go:115] Run: docker run --rm --name false-20210507223341-391940-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20210507223341-391940 --entrypoint /usr/bin/test -v false-20210507223341-391940:/var gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -d /var/lib I0507 22:33:43.054610 634245 oci.go:106] Successfully prepared a docker volume false-20210507223341-391940 W0507 22:33:43.054673 634245 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted. W0507 22:33:43.054683 634245 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. I0507 22:33:43.054754 634245 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'" I0507 22:33:43.054752 634245 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd I0507 22:33:43.054822 634245 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4 I0507 22:33:43.054834 634245 kic.go:179] Starting extracting preloaded images to volume ... 
I0507 22:33:43.054876 634245 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20210507223341-391940:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir I0507 22:33:43.139196 634245 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname false-20210507223341-391940 --name false-20210507223341-391940 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20210507223341-391940 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=false-20210507223341-391940 --network false-20210507223341-391940 --ip 192.168.67.2 --volume false-20210507223341-391940:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e I0507 22:33:43.680563 634245 cli_runner.go:115] Run: docker container inspect false-20210507223341-391940 --format={{.State.Running}} I0507 22:33:43.734390 634245 cli_runner.go:115] Run: docker container inspect false-20210507223341-391940 --format={{.State.Status}} I0507 22:33:43.788488 634245 cli_runner.go:115] Run: docker exec false-20210507223341-391940 stat /var/lib/dpkg/alternatives/iptables I0507 22:33:43.915111 634245 oci.go:278] the created container "false-20210507223341-391940" has a running status. I0507 22:33:43.915155 634245 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/false-20210507223341-391940/id_rsa... 
I0507 22:33:44.025939 634245 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/false-20210507223341-391940/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0507 22:33:44.438055 634245 cli_runner.go:115] Run: docker container inspect false-20210507223341-391940 --format={{.State.Status}} I0507 22:33:44.483762 634245 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0507 22:33:44.483785 634245 kic_runner.go:115] Args: [docker exec --privileged false-20210507223341-391940 chown docker:docker /home/docker/.ssh/authorized_keys] I0507 22:33:47.326874 634245 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20210507223341-391940:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir: (4.27195433s) I0507 22:33:47.326921 634245 kic.go:188] duration metric: took 4.272083 seconds to extract preloaded images to volume I0507 22:33:47.327008 634245 cli_runner.go:115] Run: docker container inspect false-20210507223341-391940 --format={{.State.Status}} I0507 22:33:47.368924 634245 machine.go:88] provisioning docker machine ... I0507 22:33:47.368961 634245 ubuntu.go:169] provisioning hostname "false-20210507223341-391940" I0507 22:33:47.369027 634245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210507223341-391940 I0507 22:33:47.406971 634245 main.go:128] libmachine: Using SSH client type: native I0507 22:33:47.407198 634245 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802720] 0x8026e0 [] 0s} 127.0.0.1 33291 } I0507 22:33:47.407218 634245 main.go:128] libmachine: About to run SSH command: sudo hostname false-20210507223341-391940 && echo "false-20210507223341-391940" | sudo tee /etc/hostname I0507 22:33:47.535710 634245 main.go:128] libmachine: SSH cmd err, output: : false-20210507223341-391940 I0507 22:33:47.535796 634245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210507223341-391940 I0507 22:33:47.586481 634245 main.go:128] libmachine: Using SSH client type: native I0507 22:33:47.586672 634245 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802720] 0x8026e0 [] 0s} 127.0.0.1 33291 } I0507 22:33:47.586696 634245 main.go:128] libmachine: About to run SSH command: if ! 
grep -xq '.*\sfalse-20210507223341-391940' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 false-20210507223341-391940/g' /etc/hosts;
	else
		echo '127.0.1.1 false-20210507223341-391940' | sudo tee -a /etc/hosts;
	fi
fi
I0507 22:33:47.703075 634245 main.go:128] libmachine: SSH cmd err, output: <nil>: 
I0507 22:33:47.703107 634245 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube}
I0507 22:33:47.703155 634245 ubuntu.go:177] setting up certificates
I0507 22:33:47.703165 634245 provision.go:83] configureAuth start
I0507 22:33:47.703224 634245 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20210507223341-391940
I0507 22:33:47.748068 634245 provision.go:137] copyHostCerts
I0507 22:33:47.748132 634245 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.pem, removing ...
I0507 22:33:47.748148 634245 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.pem
I0507 22:33:47.748207 634245 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.pem (1078 bytes)
I0507 22:33:47.748309 634245 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cert.pem, removing ...
I0507 22:33:47.748328 634245 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cert.pem I0507 22:33:47.748356 634245 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cert.pem (1123 bytes) I0507 22:33:47.748491 634245 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/key.pem, removing ... I0507 22:33:47.748505 634245 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/key.pem I0507 22:33:47.748531 634245 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/key.pem (1675 bytes) I0507 22:33:47.748587 634245 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca-key.pem org=jenkins.false-20210507223341-391940 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube false-20210507223341-391940] I0507 22:33:48.003964 634245 provision.go:165] copyRemoteCerts I0507 22:33:48.004046 634245 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0507 22:33:48.004111 634245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210507223341-391940 I0507 22:33:48.050112 634245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33291 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/false-20210507223341-391940/id_rsa Username:docker} I0507 22:33:48.135189 634245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes) I0507 22:33:48.154693 634245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes) I0507 22:33:48.174607 634245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes) I0507 22:33:48.194288 634245 provision.go:86] duration metric: configureAuth took 491.105993ms I0507 22:33:48.194317 634245 ubuntu.go:193] setting minikube options for container-runtime I0507 22:33:48.194495 634245 machine.go:91] provisioned docker 
machine in 825.54959ms I0507 22:33:48.194509 634245 client.go:171] LocalClient.Create took 6.177087062s I0507 22:33:48.194539 634245 start.go:168] duration metric: libmachine.API.Create for "false-20210507223341-391940" took 6.177154104s I0507 22:33:48.194555 634245 start.go:267] post-start starting for "false-20210507223341-391940" (driver="docker") I0507 22:33:48.194561 634245 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0507 22:33:48.194620 634245 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0507 22:33:48.194669 634245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210507223341-391940 I0507 22:33:48.242303 634245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33291 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/false-20210507223341-391940/id_rsa Username:docker} I0507 22:33:48.326894 634245 ssh_runner.go:149] Run: cat /etc/os-release I0507 22:33:48.329970 634245 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0507 22:33:48.329997 634245 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0507 22:33:48.330011 634245 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0507 22:33:48.330019 634245 info.go:137] Remote host: Ubuntu 20.04.2 LTS I0507 22:33:48.330034 634245 filesync.go:118] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/addons for local assets ... I0507 22:33:48.330082 634245 filesync.go:118] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/files for local assets ... I0507 22:33:48.330191 634245 start.go:270] post-start completed in 135.629287ms I0507 22:33:48.330503 634245 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20210507223341-391940 I0507 22:33:48.380856 634245 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/config.json ... 
I0507 22:33:48.381145 634245 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0507 22:33:48.381186 634245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210507223341-391940
I0507 22:33:48.426083 634245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33291 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/false-20210507223341-391940/id_rsa Username:docker}
I0507 22:33:48.507808 634245 start.go:129] duration metric: createHost completed in 6.493653731s
I0507 22:33:48.507837 634245 start.go:80] releasing machines lock for "false-20210507223341-391940", held for 6.493829378s
I0507 22:33:48.507931 634245 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20210507223341-391940
I0507 22:33:48.554550 634245 ssh_runner.go:149] Run: systemctl --version
I0507 22:33:48.554610 634245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210507223341-391940
I0507 22:33:48.554623 634245 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0507 22:33:48.554686 634245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210507223341-391940
I0507 22:33:48.601708 634245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33291 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/false-20210507223341-391940/id_rsa Username:docker}
I0507 22:33:48.603836 634245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33291 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/false-20210507223341-391940/id_rsa Username:docker}
I0507 22:33:48.753079 634245 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0507 22:33:48.762387 634245 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
I0507 22:33:48.770850 634245 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
I0507 22:33:48.786581 634245 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
I0507 22:33:48.795123 634245 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
I0507 22:33:48.872198 634245 ssh_runner.go:149] Run: sudo systemctl mask docker.service
I0507 22:33:48.939801 634245 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
I0507 22:33:48.950402 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0507 22:33:48.964840 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s
"cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKCltncnBjXQogIGFkZHJlc3MgPSAiL3J1bi9jb250YWluZXJkL2NvbnRhaW5lcmQuc29jayIKICB1aWQgPSAwCiAgZ2lkID0gMAogIG1heF9yZWN2X21lc3NhZ2Vfc2l6ZSA9IDE2Nzc3MjE2CiAgbWF4X3NlbmRfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKCltkZWJ1Z10KICBhZGRyZXNzID0gIiIKICB1aWQgPSAwCiAgZ2lkID0gMAogIGxldmVsID0gIiIKClttZXRyaWNzXQogIGFkZHJlc3MgPSAiIgogIGdycGNfaGlzdG9ncmFtID0gZmFsc2UKCltjZ3JvdXBdCiAgcGF0aCA9ICIiCgpbcGx1Z2luc10KICBbcGx1Z2lucy5jZ3JvdXBzXQogICAgbm9fcHJvbWV0aGV1cyA9IGZhbHNlCiAgW3BsdWdpbnMuY3JpXQogICAgc3RyZWFtX3NlcnZlcl9hZGRyZXNzID0gIiIKICAgIHN0cmVhbV9zZXJ2ZXJfcG9ydCA9ICIxMDAxMCIKICAgIGVuYWJsZV9zZWxpbnV4ID0gZmFsc2UKICAgIHNhbmRib3hfaW1hZ2UgPSAiazhzLmdjci5pby9wYXVzZTozLjIiCiAgICBzdGF0c19jb2xsZWN0X3BlcmlvZCA9IDEwCiAgICBzeXN0ZW1kX2Nncm91cCA9IGZhbHNlCiAgICBlbmFibGVfdGxzX3N0cmVhbWluZyA9IGZhbHNlCiAgICBtYXhfY29udGFpbmVyX2xvZ19saW5lX3NpemUgPSAxNjM4NAogICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmRdCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgbm9fcGl2b3QgPSB0cnVlCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW50aW1lLnYxLmxpbnV4IgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMubGludXhdCiAgICBzaGltID0gImNvbnRhaW5lcmQtc2hpbSIKICAgIHJ1bnRpbWUgPSAicnVuYyIKICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICBub19zaGltID0gZmFsc2UKICAgIHNoaW1fZGVidWcgPSBmYWxzZQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml" I0507 22:33:48.978583 634245 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables I0507 22:33:48.985597 634245 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. 
error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:

stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0507 22:33:48.985654 634245 ssh_runner.go:149] Run: sudo modprobe br_netfilter
I0507 22:33:48.993180 634245 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0507 22:33:48.999299 634245 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0507 22:33:49.061225 634245 ssh_runner.go:149] Run: sudo systemctl restart containerd
I0507 22:33:49.132733 634245 start.go:368] Will wait 60s for socket path /run/containerd/containerd.sock
I0507 22:33:49.132802 634245 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
I0507 22:33:49.138104 634245 start.go:393] Will wait 60s for crictl version
I0507 22:33:49.138162 634245 ssh_runner.go:149] Run: sudo crictl version
I0507 22:33:49.164855 634245 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
stdout:

stderr:
time="2021-05-07T22:33:49Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
I0507 22:34:00.214633 634245 ssh_runner.go:149] Run: sudo crictl version
I0507 22:34:00.240982 634245 start.go:402] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.4.4
RuntimeApiVersion: v1alpha2
I0507 22:34:00.241043 634245 ssh_runner.go:149] Run: containerd --version
I0507 22:34:00.264679 634245 out.go:170] * Preparing Kubernetes v1.20.2 on containerd 1.4.4 ...
I0507 22:34:00.264748 634245 cli_runner.go:115] Run: docker network inspect false-20210507223341-391940 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0507 22:34:00.301900 634245 ssh_runner.go:149] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
I0507 22:34:00.305103 634245 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0507 22:34:00.314145 634245 localpath.go:92] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/client.crt -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/client.crt
I0507 22:34:00.314263 634245 localpath.go:117] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/client.key -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/client.key
I0507 22:34:00.314376 634245 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
I0507 22:34:00.314402 634245 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4
I0507 22:34:00.314435 634245 ssh_runner.go:149] Run: sudo crictl images --output json
I0507 22:34:00.336029 634245 containerd.go:571] all images are preloaded for containerd runtime.
I0507 22:34:00.336049 634245 containerd.go:481] Images already preloaded, skipping extraction
I0507 22:34:00.336091 634245 ssh_runner.go:149] Run: sudo crictl images --output json
I0507 22:34:00.356941 634245 containerd.go:571] all images are preloaded for containerd runtime.
I0507 22:34:00.356961 634245 cache_images.go:74] Images are preloaded, skipping loading
I0507 22:34:00.356999 634245 ssh_runner.go:149] Run: sudo crictl info
I0507 22:34:00.378057 634245 cni.go:93] Creating CNI manager for "false"
I0507 22:34:00.378078 634245 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0507 22:34:00.378090 634245 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:false-20210507223341-391940 NodeName:false-20210507223341-391940 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0507 22:34:00.378222 634245 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.67.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "false-20210507223341-391940"
  kubeletExtraArgs:
    node-ip: 192.168.67.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
I0507 22:34:00.378299 634245 kubeadm.go:901] kubelet [Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=false-20210507223341-391940 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m

[Install]
 config:
{KubernetesVersion:v1.20.2 ClusterName:false-20210507223341-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:}
I0507 22:34:00.378342 634245 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
I0507 22:34:00.384607 634245 binaries.go:44] Found k8s binaries, skipping transfer
I0507 22:34:00.384661 634245 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0507 22:34:00.390683 634245 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (520 bytes)
I0507 22:34:00.402157 634245 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0507 22:34:00.413347 634245 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1866 bytes)
I0507 22:34:00.424404 634245 ssh_runner.go:149] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I0507 22:34:00.426984 634245 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0507 22:34:00.435125 634245 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940 for IP: 192.168.67.2
I0507 22:34:00.435175 634245 certs.go:171] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.key
I0507 22:34:00.435200 634245 certs.go:171] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/proxy-client-ca.key
I0507 22:34:00.435267 634245 certs.go:282] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/client.key
I0507 22:34:00.435291 634245 certs.go:286] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/apiserver.key.c7fa3a9e
I0507 22:34:00.435306 634245 crypto.go:69] Generating cert
/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1] I0507 22:34:00.683898 634245 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/apiserver.crt.c7fa3a9e ... I0507 22:34:00.683927 634245 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/apiserver.crt.c7fa3a9e: {Name:mk832f30aaa54d710c053960a14b9c9763ea8855 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:34:00.684135 634245 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/apiserver.key.c7fa3a9e ... I0507 22:34:00.684155 634245 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/apiserver.key.c7fa3a9e: {Name:mkc25e1e27c1f81f26ea4162f9315965b8db9e7c Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:34:00.684269 634245 certs.go:297] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/apiserver.crt I0507 22:34:00.684332 634245 certs.go:301] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/apiserver.key I0507 22:34:00.684379 634245 certs.go:286] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/proxy-client.key I0507 22:34:00.684388 634245 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/proxy-client.crt with IP's: [] I0507 22:34:00.879310 634245 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/proxy-client.crt ... 
I0507 22:34:00.879344 634245 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/proxy-client.crt: {Name:mk26c26b342b31b8cbe560fb625a13c27bc1f21c Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:34:00.879542 634245 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/proxy-client.key ... I0507 22:34:00.879556 634245 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/proxy-client.key: {Name:mk7c148e46579cba18beedf400ac99dc15dc98b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:34:00.879749 634245 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/391940.pem (1338 bytes) W0507 22:34:00.879790 634245 certs.go:357] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/391940_empty.pem, impossibly tiny 0 bytes I0507 22:34:00.879802 634245 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca-key.pem (1679 bytes) I0507 22:34:00.879830 634245 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem (1078 bytes) I0507 22:34:00.879856 634245 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem (1123 bytes) I0507 22:34:00.879878 634245 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/key.pem (1675 bytes) I0507 22:34:00.880826 634245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0507 22:34:00.898346 634245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/apiserver.key --> 
/var/lib/minikube/certs/apiserver.key (1675 bytes) I0507 22:34:00.946964 634245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0507 22:34:00.963196 634245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes) I0507 22:34:00.978632 634245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0507 22:34:00.994038 634245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0507 22:34:01.009068 634245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0507 22:34:01.024278 634245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0507 22:34:01.039382 634245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/391940.pem --> /usr/share/ca-certificates/391940.pem (1338 bytes) I0507 22:34:01.054804 634245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0507 22:34:01.070027 634245 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0507 22:34:01.081077 634245 ssh_runner.go:149] Run: openssl version I0507 22:34:01.085457 634245 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391940.pem && ln -fs /usr/share/ca-certificates/391940.pem /etc/ssl/certs/391940.pem" I0507 22:34:01.091954 634245 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/391940.pem I0507 22:34:01.094696 634245 certs.go:402] hashing: -rw-r--r-- 1 root root 1338 May 7 21:57 /usr/share/ca-certificates/391940.pem I0507 22:34:01.094741 634245 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391940.pem I0507 22:34:01.099139 634245 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391940.pem /etc/ssl/certs/51391683.0" I0507 22:34:01.105589 634245 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0507 22:34:01.112220 634245 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0507 22:34:01.114972 634245 certs.go:402] hashing: -rw-r--r-- 1 root root 1111 May 7 21:50 /usr/share/ca-certificates/minikubeCA.pem I0507 22:34:01.115009 634245 ssh_runner.go:149] Run: openssl x509 -hash -noout -in 
/usr/share/ca-certificates/minikubeCA.pem
I0507 22:34:01.119349 634245 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0507 22:34:01.125976 634245 kubeadm.go:381] StartCluster: {Name:false-20210507223341-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:false-20210507223341-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0507 22:34:01.126045 634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0507 22:34:01.126083 634245 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0507 22:34:01.148376 634245 cri.go:76] found id: ""
I0507 22:34:01.148428 634245 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0507 22:34:01.154747 634245 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0507 22:34:01.161067 634245 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0507 22:34:01.161115 634245 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0507 22:34:01.167336 634245 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0507 22:34:01.167378 634245 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config
/var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" W0507 22:34:25.011836 634245 out.go:424] no arguments passed for " - Generating certificates and keys ..." - returning raw string W0507 22:34:25.011876 634245 out.go:424] no arguments passed for " - Generating certificates and keys ..." - returning raw string I0507 22:34:25.013486 634245 out.go:197] - Generating certificates and keys ... W0507 22:34:25.015044 634245 out.go:424] no arguments passed for " - Booting up control plane ..." - returning raw string W0507 22:34:25.015075 634245 out.go:424] no arguments passed for " - Booting up control plane ..." - returning raw string I0507 22:34:25.016674 634245 out.go:197] - Booting up control plane ... W0507 22:34:25.017665 634245 out.go:424] no arguments passed for " - Configuring RBAC rules ..." - returning raw string W0507 22:34:25.017691 634245 out.go:424] no arguments passed for " - Configuring RBAC rules ..." - returning raw string I0507 22:34:25.019150 634245 out.go:197] - Configuring RBAC rules ... I0507 22:34:25.021525 634245 cni.go:93] Creating CNI manager for "false" I0507 22:34:25.021571 634245 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0507 22:34:25.021622 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:25.021648 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.20.0 minikube.k8s.io/commit=c31bd57f93d45726e4bd30607374f8c720e70e95 minikube.k8s.io/name=false-20210507223341-391940 minikube.k8s.io/updated_at=2021_05_07T22_34_25_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:25.219333 634245 ops.go:34] apiserver oom_adj: -16 I0507 22:34:25.219382 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:25.782442 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:26.282719 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:26.782854 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:27.282542 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:27.782784 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:28.281876 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:28.782701 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default 
--kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:29.282090 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:29.782892 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:30.282565 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:30.782810 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:31.282916 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:31.782697 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:32.281924 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:32.782535 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:33.282484 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:33.782576 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:34.281868 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:34.782692 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:35.282380 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:35.781966 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:36.282819 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:36.782131 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:37.282111 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:37.782626 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:38.282804 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:38.782257 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:39.282289 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:39.782288 634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:34:40.281934 634245 
I0507 22:34:40.562366 634245 kubeadm.go:977] duration metric: took 15.540789582s to wait for elevateKubeSystemPrivileges.
I0507 22:34:40.562392 634245 kubeadm.go:383] StartCluster complete in 39.436423341s
I0507 22:34:40.562415 634245 settings.go:142] acquiring lock: {Name:mkbc12d45ea1a96167acb2e3885011008220fc1e Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0507 22:34:40.562517 634245 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig
I0507 22:34:40.564316 634245 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig: {Name:mk53c460e0a047a0806c95f27e36717b9bf9f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0507 22:34:41.081731 634245 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "false-20210507223341-391940" rescaled to 1
I0507 22:34:41.081788 634245 start.go:201] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
W0507 22:34:41.081819 634245 out.go:424] no arguments passed for "* Verifying Kubernetes components...\n" - returning raw string
W0507 22:34:41.081835 634245 out.go:424] no arguments passed for "* Verifying Kubernetes components...\n" - returning raw string
I0507 22:34:41.083643 634245 out.go:170] * Verifying Kubernetes components...
I0507 22:34:41.081896 634245 addons.go:328] enableAddons start: toEnable=map[], additional=[]
I0507 22:34:41.083711 634245 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0507 22:34:41.083720 634245 addons.go:55] Setting storage-provisioner=true in profile "false-20210507223341-391940"
I0507 22:34:41.083742 634245 addons.go:131] Setting addon storage-provisioner=true in "false-20210507223341-391940"
W0507 22:34:41.083749 634245 addons.go:140] addon storage-provisioner should already be in state true
I0507 22:34:41.083775 634245 host.go:66] Checking if "false-20210507223341-391940" exists ...
I0507 22:34:41.082063 634245 cache.go:108] acquiring lock: {Name:mk66f3ed174a0fda2e3a4fd9a235ceef9553bc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0507 22:34:41.083883 634245 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 exists
I0507 22:34:41.083907 634245 cache.go:97] cache image "minikube-local-cache-test:functional-20210507215728-391940" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940" took 1.854335ms
I0507 22:34:41.083925 634245 cache.go:81] save to tar file minikube-local-cache-test:functional-20210507215728-391940 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 succeeded
I0507 22:34:41.083936 634245 cache.go:88] Successfully saved all images to host disk.
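The kapi.go entry above shows minikube scaling the coredns deployment down to a single replica through the API. An equivalent kubectl invocation, sketched with os/exec (the kubectl path and flags here are assumptions, not what minikube executes):

// Sketch only; minikube performs this scale via the Kubernetes API, not kubectl.
package main

import (
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "-n", "kube-system",
		"scale", "deployment", "coredns", "--replicas=1").CombinedOutput()
	if err != nil {
		log.Fatalf("scale failed: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}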
I0507 22:34:41.084153 634245 addons.go:55] Setting default-storageclass=true in profile "false-20210507223341-391940"
I0507 22:34:41.084176 634245 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "false-20210507223341-391940"
I0507 22:34:41.084357 634245 cli_runner.go:115] Run: docker container inspect false-20210507223341-391940 --format={{.State.Status}}
I0507 22:34:41.084378 634245 cli_runner.go:115] Run: docker container inspect false-20210507223341-391940 --format={{.State.Status}}
I0507 22:34:41.084465 634245 cli_runner.go:115] Run: docker container inspect false-20210507223341-391940 --format={{.State.Status}}
I0507 22:34:41.103138 634245 node_ready.go:35] waiting up to 5m0s for node "false-20210507223341-391940" to be "Ready" ...
I0507 22:34:41.107823 634245 node_ready.go:49] node "false-20210507223341-391940" has status "Ready":"True"
I0507 22:34:41.107847 634245 node_ready.go:38] duration metric: took 4.683133ms waiting for node "false-20210507223341-391940" to be "Ready" ...
I0507 22:34:41.107858 634245 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0507 22:34:41.119754 634245 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace to be "Ready" ...
I0507 22:34:41.141845 634245 out.go:170] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0507 22:34:41.141970 634245 addons.go:261] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0507 22:34:41.141982 634245 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0507 22:34:41.142040 634245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210507223341-391940
I0507 22:34:41.143864 634245 ssh_runner.go:149] Run: sudo crictl images --output json
I0507 22:34:41.143908 634245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210507223341-391940
I0507 22:34:41.151472 634245 addons.go:131] Setting addon default-storageclass=true in "false-20210507223341-391940"
W0507 22:34:41.151497 634245 addons.go:140] addon default-storageclass should already be in state true
I0507 22:34:41.151538 634245 host.go:66] Checking if "false-20210507223341-391940" exists ...
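The repeated "docker container inspect" runs above are how the docker driver checks that the node container is still up: it reads .State.Status through a Go template. A minimal sketch, assuming the same profile name and a local Docker daemon:

// Sketch only; not minikube's implementation.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "container", "inspect",
		"false-20210507223341-391940", "--format={{.State.Status}}").Output()
	if err != nil {
		log.Fatalf("inspect failed: %v", err)
	}
	fmt.Println("container state:", strings.TrimSpace(string(out))) // e.g. "running"
}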
I0507 22:34:41.152064 634245 cli_runner.go:115] Run: docker container inspect false-20210507223341-391940 --format={{.State.Status}}
I0507 22:34:41.192759 634245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33291 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/false-20210507223341-391940/id_rsa Username:docker}
I0507 22:34:41.193663 634245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33291 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/false-20210507223341-391940/id_rsa Username:docker}
I0507 22:34:41.201905 634245 addons.go:261] installing /etc/kubernetes/addons/storageclass.yaml
I0507 22:34:41.201933 634245 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0507 22:34:41.202002 634245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210507223341-391940
I0507 22:34:41.248456 634245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33291 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/false-20210507223341-391940/id_rsa Username:docker}
I0507 22:34:41.289244 634245 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0507 22:34:41.306206 634245 containerd.go:567] couldn't find preloaded image for "docker.io/minikube-local-cache-test:functional-20210507215728-391940". assuming images are not preloaded.
I0507 22:34:41.306232 634245 cache_images.go:78] LoadImages start: [minikube-local-cache-test:functional-20210507215728-391940]
I0507 22:34:41.306296 634245 image.go:320] retrieving image: minikube-local-cache-test:functional-20210507215728-391940
I0507 22:34:41.306340 634245 image.go:326] checking repository: index.docker.io/library/minikube-local-cache-test
I0507 22:34:41.343566 634245 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
W0507 22:34:41.545836 634245 image.go:333] remote: HEAD https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: unexpected status code 401 Unauthorized (HEAD responses have no body, use GET for details)
I0507 22:34:41.545883 634245 image.go:334] short name: minikube-local-cache-test:functional-20210507215728-391940
I0507 22:34:41.546940 634245 image.go:362] daemon lookup for minikube-local-cache-test:functional-20210507215728-391940: Error response from daemon: reference does not exist
W0507 22:34:41.700674 634245 image.go:372] authn lookup for minikube-local-cache-test:functional-20210507215728-391940 (trying anon): GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
I0507 22:34:41.770255 634245 out.go:170] * Enabled addons: storage-provisioner, default-storageclass
I0507 22:34:41.770288 634245 addons.go:330] enableAddons completed in 688.400381ms
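The addon flow above is: copy the manifest from memory into /etc/kubernetes/addons/ over SSH, then apply it with the node's kubectl. A local approximation, writing a temp file instead of scping (the manifest body here is a stand-in, not the real storage-provisioner YAML):

// Sketch only; minikube transfers the manifest over SSH and applies it in the node.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	manifest := []byte("# storage-provisioner manifest would go here\n") // stand-in
	f, err := os.CreateTemp("", "storage-provisioner-*.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.Write(manifest); err != nil {
		log.Fatal(err)
	}
	f.Close()
	out, err := exec.Command("kubectl", "apply", "-f", f.Name()).CombinedOutput()
	if err != nil {
		log.Fatalf("apply failed: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}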
I0507 22:34:41.847381 634245 image.go:376] remote lookup for minikube-local-cache-test:functional-20210507215728-391940: GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
I0507 22:34:41.847431 634245 image.go:98] error retrieve Image minikube-local-cache-test:functional-20210507215728-391940 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
I0507 22:34:41.847467 634245 cache_images.go:106] "minikube-local-cache-test:functional-20210507215728-391940" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210507215728-391940
I0507 22:34:41.847488 634245 cache_images.go:271] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940
I0507 22:34:41.847633 634245 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940
I0507 22:34:41.851020 634245 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940': No such file or directory
I0507 22:34:41.851054 634245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940 (5120 bytes)
I0507 22:34:41.868164 634245 containerd.go:267] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940
I0507 22:34:41.868205 634245 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940
I0507 22:34:41.985873 634245 cache_images.go:293] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 from cache
I0507 22:34:41.985907 634245 cache_images.go:113] Successfully loaded all cached images
I0507 22:34:41.985916 634245 cache_images.go:82] LoadImages completed in 679.673608ms
I0507 22:34:41.985928 634245 cache_images.go:252] succeeded pushing to: false-20210507223341-391940
I0507 22:34:41.985933 634245 cache_images.go:253] failed pushing to:
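The fallback chain above fails at every registry and daemon lookup, so the image is transferred from the host cache and imported into containerd's k8s.io namespace with ctr, exactly as the Run line shows. A sketch of that final step, omitting the SSH hop (the tar path is taken from the log; running it requires containerd and the file to exist):

// Sketch only; minikube runs this inside the node over SSH.
package main

import (
	"log"
	"os/exec"
)

func main() {
	tar := "/var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940"
	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tar).CombinedOutput()
	if err != nil {
		log.Fatalf("ctr import failed: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}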
"kube-system" namespace has status "Ready":"False" I0507 22:34:54.136548 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:34:56.635856 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:34:59.135635 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:35:01.135782 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:35:03.634948 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:35:06.135075 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:35:08.135623 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:35:10.135862 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:35:12.634936 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:35:14.635724 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:35:17.135930 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:35:19.136069 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:35:21.634975 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:35:23.635590 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:35:25.636221 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:35:27.636268 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:35:30.135517 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:35:35.748122 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:35:38.136063 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:35:40.635542 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:35:42.635670 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:35:45.163965 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:35:47.635873 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:35:49.636001 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:35:52.135134 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:35:54.139210 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:35:56.636258 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" 
namespace has status "Ready":"False" I0507 22:35:59.135955 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:36:01.635635 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:36:03.636998 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:36:06.136483 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:36:08.636177 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:36:11.135760 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:36:13.635617 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:36:16.135532 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:36:18.635533 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:36:21.135723 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:36:23.636260 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:36:26.135780 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:36:28.636008 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:36:31.135578 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:36:33.292473 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:36:35.635972 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:36:38.135299 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:36:40.136525 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:36:42.635366 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:36:45.135417 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:36:47.636272 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:36:50.135387 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:36:52.137297 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:36:54.636218 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:36:57.134756 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:36:59.135357 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:01.636149 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has 
status "Ready":"False" I0507 22:37:04.135471 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:06.636279 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:09.135287 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:11.136159 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:13.635076 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:15.635169 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:17.635332 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:19.635752 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:22.136027 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:24.635252 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:27.135233 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:29.135626 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:31.636500 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:34.135445 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:36.136008 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:38.636044 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:41.136299 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:43.137705 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:45.659738 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:48.135824 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:52.779300 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:55.596476 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:57.634881 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:37:59.635199 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:38:02.135474 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:38:04.136331 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:38:06.636167 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status 
"Ready":"False" I0507 22:38:09.136295 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:38:11.639035 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:38:14.136581 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:38:16.664836 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:38:19.636274 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:38:21.636787 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:38:23.636920 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:38:26.136591 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:38:28.137163 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:38:30.636213 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:38:32.636342 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:38:35.136211 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:38:37.636287 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:38:40.135484 634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False" I0507 22:38:41.139921 634245 pod_ready.go:81] duration metric: took 4m0.020130729s waiting for pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace to be "Ready" ... E0507 22:38:41.139947 634245 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition I0507 22:38:41.139956 634245 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-tzwx6" in "kube-system" namespace to be "Ready" ... I0507 22:38:41.141858 634245 pod_ready.go:97] error getting pod "coredns-74ff55c5b-tzwx6" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-tzwx6" not found I0507 22:38:41.141879 634245 pod_ready.go:81] duration metric: took 1.916082ms waiting for pod "coredns-74ff55c5b-tzwx6" in "kube-system" namespace to be "Ready" ... E0507 22:38:41.141891 634245 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-74ff55c5b-tzwx6" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-tzwx6" not found I0507 22:38:41.141897 634245 pod_ready.go:78] waiting up to 5m0s for pod "etcd-false-20210507223341-391940" in "kube-system" namespace to be "Ready" ... I0507 22:38:41.145989 634245 pod_ready.go:92] pod "etcd-false-20210507223341-391940" in "kube-system" namespace has status "Ready":"True" I0507 22:38:41.146009 634245 pod_ready.go:81] duration metric: took 4.104794ms waiting for pod "etcd-false-20210507223341-391940" in "kube-system" namespace to be "Ready" ... I0507 22:38:41.146022 634245 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-false-20210507223341-391940" in "kube-system" namespace to be "Ready" ... 
I0507 22:38:41.150423 634245 pod_ready.go:92] pod "kube-apiserver-false-20210507223341-391940" in "kube-system" namespace has status "Ready":"True"
I0507 22:38:41.150448 634245 pod_ready.go:81] duration metric: took 4.417403ms waiting for pod "kube-apiserver-false-20210507223341-391940" in "kube-system" namespace to be "Ready" ...
I0507 22:38:41.150460 634245 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-false-20210507223341-391940" in "kube-system" namespace to be "Ready" ...
I0507 22:38:41.333279 634245 pod_ready.go:92] pod "kube-controller-manager-false-20210507223341-391940" in "kube-system" namespace has status "Ready":"True"
I0507 22:38:41.333302 634245 pod_ready.go:81] duration metric: took 182.833286ms waiting for pod "kube-controller-manager-false-20210507223341-391940" in "kube-system" namespace to be "Ready" ...
I0507 22:38:41.333317 634245 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-bmhxt" in "kube-system" namespace to be "Ready" ...
I0507 22:38:41.733691 634245 pod_ready.go:92] pod "kube-proxy-bmhxt" in "kube-system" namespace has status "Ready":"True"
I0507 22:38:41.733713 634245 pod_ready.go:81] duration metric: took 400.386903ms waiting for pod "kube-proxy-bmhxt" in "kube-system" namespace to be "Ready" ...
I0507 22:38:41.733726 634245 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-false-20210507223341-391940" in "kube-system" namespace to be "Ready" ...
I0507 22:38:42.134225 634245 pod_ready.go:92] pod "kube-scheduler-false-20210507223341-391940" in "kube-system" namespace has status "Ready":"True"
I0507 22:38:42.134254 634245 pod_ready.go:81] duration metric: took 400.518446ms waiting for pod "kube-scheduler-false-20210507223341-391940" in "kube-system" namespace to be "Ready" ...
I0507 22:38:42.134266 634245 pod_ready.go:38] duration metric: took 4m1.026392157s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0507 22:38:42.134291 634245 api_server.go:50] waiting for apiserver process to appear ...
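The next phase waits for the apiserver process itself, which minikube does by running pgrep inside the node (the exact pattern appears later in this log). A sketch of that wait, omitting the SSH hop and using a one-second poll as an assumption:

// Sketch only; minikube runs pgrep over SSH inside the node.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for {
		// pgrep exits non-zero until a matching process exists.
		if out, err := exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Output(); err == nil {
			fmt.Printf("apiserver pid: %s", out)
			return
		}
		time.Sleep(time.Second)
	}
}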
I0507 22:38:42.134322 634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0507 22:38:42.134385 634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0507 22:38:42.162111 634245 cri.go:76] found id: "a6bffe1f7c2d33d79a1f6208aa698f88c8090c7cb95f32586e1c3e451131814a"
I0507 22:38:42.162141 634245 cri.go:76] found id: ""
I0507 22:38:42.162149 634245 logs.go:270] 1 containers: [a6bffe1f7c2d33d79a1f6208aa698f88c8090c7cb95f32586e1c3e451131814a]
I0507 22:38:42.162200 634245 ssh_runner.go:149] Run: which crictl
I0507 22:38:42.165473 634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0507 22:38:42.165534 634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
I0507 22:38:42.189732 634245 cri.go:76] found id: "65b0f048ab917acd8b7defae076f2030e6aa99137e9face4eb533dda95cc20cb"
I0507 22:38:42.189755 634245 cri.go:76] found id: ""
I0507 22:38:42.189763 634245 logs.go:270] 1 containers: [65b0f048ab917acd8b7defae076f2030e6aa99137e9face4eb533dda95cc20cb]
I0507 22:38:42.189812 634245 ssh_runner.go:149] Run: which crictl
I0507 22:38:42.192824 634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0507 22:38:42.192882 634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
I0507 22:38:42.213917 634245 cri.go:76] found id: ""
I0507 22:38:42.213936 634245 logs.go:270] 0 containers: []
W0507 22:38:42.213943 634245 logs.go:272] No container was found matching "coredns"
I0507 22:38:42.213949 634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0507 22:38:42.213990 634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0507 22:38:42.240297 634245 cri.go:76] found id: "469df8196853fb8b2b194b47e8ce03139b5ed29809e63646d656cef725705dce"
I0507 22:38:42.240323 634245 cri.go:76] found id: ""
I0507 22:38:42.240332 634245 logs.go:270] 1 containers: [469df8196853fb8b2b194b47e8ce03139b5ed29809e63646d656cef725705dce]
I0507 22:38:42.240385 634245 ssh_runner.go:149] Run: which crictl
I0507 22:38:42.243950 634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0507 22:38:42.244124 634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0507 22:38:42.270810 634245 cri.go:76] found id: "313c5cc700f902d29c6df674481483d8ad0a71de3269d40fd5a7b5a302c836d4"
I0507 22:38:42.270833 634245 cri.go:76] found id: ""
I0507 22:38:42.270840 634245 logs.go:270] 1 containers: [313c5cc700f902d29c6df674481483d8ad0a71de3269d40fd5a7b5a302c836d4]
I0507 22:38:42.270902 634245 ssh_runner.go:149] Run: which crictl
I0507 22:38:42.274293 634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0507 22:38:42.274357 634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0507 22:38:42.301269 634245 cri.go:76] found id: ""
I0507 22:38:42.301300 634245 logs.go:270] 0 containers: []
W0507 22:38:42.301310 634245 logs.go:272] No container was found matching "kubernetes-dashboard"
I0507 22:38:42.301319 634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0507 22:38:42.301388 634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0507 22:38:42.324104 634245 cri.go:76] found id: "d14ceb5681dd7e442815398cbe80a86fe32219b787da4ebf8c47f3a0244338e9"
I0507 22:38:42.324130 634245 cri.go:76] found id: ""
I0507 22:38:42.324139 634245 logs.go:270] 1 containers: [d14ceb5681dd7e442815398cbe80a86fe32219b787da4ebf8c47f3a0244338e9]
I0507 22:38:42.324188 634245 ssh_runner.go:149] Run: which crictl
I0507 22:38:42.326956 634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0507 22:38:42.327020 634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0507 22:38:42.353276 634245 cri.go:76] found id: "16a5d9bfbb01a5686500f5141b314de652e264211dcab6ff9f3fb68ba1c45984"
I0507 22:38:42.353302 634245 cri.go:76] found id: ""
I0507 22:38:42.353308 634245 logs.go:270] 1 containers: [16a5d9bfbb01a5686500f5141b314de652e264211dcab6ff9f3fb68ba1c45984]
I0507 22:38:42.353354 634245 ssh_runner.go:149] Run: which crictl
I0507 22:38:42.356413 634245 logs.go:123] Gathering logs for containerd ...
I0507 22:38:42.356435 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0507 22:38:42.393204 634245 logs.go:123] Gathering logs for kubelet ...
I0507 22:38:42.393229 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0507 22:38:42.469488 634245 logs.go:123] Gathering logs for dmesg ...
I0507 22:38:42.469522 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0507 22:38:42.492577 634245 logs.go:123] Gathering logs for storage-provisioner [d14ceb5681dd7e442815398cbe80a86fe32219b787da4ebf8c47f3a0244338e9] ...
I0507 22:38:42.492613 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d14ceb5681dd7e442815398cbe80a86fe32219b787da4ebf8c47f3a0244338e9"
I0507 22:38:42.524556 634245 logs.go:123] Gathering logs for kube-scheduler [469df8196853fb8b2b194b47e8ce03139b5ed29809e63646d656cef725705dce] ...
I0507 22:38:42.524590 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 469df8196853fb8b2b194b47e8ce03139b5ed29809e63646d656cef725705dce"
I0507 22:38:42.562198 634245 logs.go:123] Gathering logs for kube-proxy [313c5cc700f902d29c6df674481483d8ad0a71de3269d40fd5a7b5a302c836d4] ...
I0507 22:38:42.562230 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 313c5cc700f902d29c6df674481483d8ad0a71de3269d40fd5a7b5a302c836d4"
I0507 22:38:42.612036 634245 logs.go:123] Gathering logs for kube-controller-manager [16a5d9bfbb01a5686500f5141b314de652e264211dcab6ff9f3fb68ba1c45984] ...
I0507 22:38:42.612076 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16a5d9bfbb01a5686500f5141b314de652e264211dcab6ff9f3fb68ba1c45984"
I0507 22:38:42.649080 634245 logs.go:123] Gathering logs for container status ...
I0507 22:38:42.649116 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0507 22:38:42.677599 634245 logs.go:123] Gathering logs for describe nodes ...
I0507 22:38:42.677638 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0507 22:38:42.811684 634245 logs.go:123] Gathering logs for kube-apiserver [a6bffe1f7c2d33d79a1f6208aa698f88c8090c7cb95f32586e1c3e451131814a] ...
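The log-gathering loop above pairs journalctl for node-level units with crictl for per-container logs, each capped at the last 400 lines. A compact sketch of the same collection, using the container ID from this log and ignoring per-command failures as the collector does not abort on one bad source (an assumption for brevity):

// Sketch only; minikube runs these over SSH inside the node.
package main

import (
	"fmt"
	"os/exec"
)

func gather(name, cmd string) {
	// Errors are deliberately ignored here; a missing unit or container
	// should not stop the rest of the collection.
	out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("==> %s <==\n%s\n", name, out)
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("containerd", "sudo journalctl -u containerd -n 400")
	gather("kube-apiserver",
		"sudo /usr/bin/crictl logs --tail 400 a6bffe1f7c2d33d79a1f6208aa698f88c8090c7cb95f32586e1c3e451131814a")
}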
I0507 22:38:42.811716 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6bffe1f7c2d33d79a1f6208aa698f88c8090c7cb95f32586e1c3e451131814a"
I0507 22:38:42.866487 634245 logs.go:123] Gathering logs for etcd [65b0f048ab917acd8b7defae076f2030e6aa99137e9face4eb533dda95cc20cb] ...
I0507 22:38:42.866523 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65b0f048ab917acd8b7defae076f2030e6aa99137e9face4eb533dda95cc20cb"
I0507 22:38:45.406134 634245 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0507 22:38:45.426431 634245 api_server.go:70] duration metric: took 4m4.344609735s to wait for apiserver process to appear ...
I0507 22:38:45.426453 634245 api_server.go:86] waiting for apiserver healthz status ...
I0507 22:38:45.426478 634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0507 22:38:45.426533 634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0507 22:38:45.454367 634245 cri.go:76] found id: "a6bffe1f7c2d33d79a1f6208aa698f88c8090c7cb95f32586e1c3e451131814a"
I0507 22:38:45.454394 634245 cri.go:76] found id: ""
I0507 22:38:45.454403 634245 logs.go:270] 1 containers: [a6bffe1f7c2d33d79a1f6208aa698f88c8090c7cb95f32586e1c3e451131814a]
I0507 22:38:45.454454 634245 ssh_runner.go:149] Run: which crictl
I0507 22:38:45.457620 634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0507 22:38:45.457679 634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
I0507 22:38:45.479545 634245 cri.go:76] found id: "65b0f048ab917acd8b7defae076f2030e6aa99137e9face4eb533dda95cc20cb"
I0507 22:38:45.479568 634245 cri.go:76] found id: ""
I0507 22:38:45.479576 634245 logs.go:270] 1 containers: [65b0f048ab917acd8b7defae076f2030e6aa99137e9face4eb533dda95cc20cb]
I0507 22:38:45.479624 634245 ssh_runner.go:149] Run: which crictl
I0507 22:38:45.483012 634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0507 22:38:45.483073 634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
I0507 22:38:45.506317 634245 cri.go:76] found id: ""
I0507 22:38:45.506341 634245 logs.go:270] 0 containers: []
W0507 22:38:45.506349 634245 logs.go:272] No container was found matching "coredns"
I0507 22:38:45.506357 634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0507 22:38:45.506414 634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0507 22:38:45.527687 634245 cri.go:76] found id: "469df8196853fb8b2b194b47e8ce03139b5ed29809e63646d656cef725705dce"
I0507 22:38:45.527707 634245 cri.go:76] found id: ""
I0507 22:38:45.527714 634245 logs.go:270] 1 containers: [469df8196853fb8b2b194b47e8ce03139b5ed29809e63646d656cef725705dce]
I0507 22:38:45.527761 634245 ssh_runner.go:149] Run: which crictl
I0507 22:38:45.530519 634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0507 22:38:45.530571 634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0507 22:38:45.553316 634245 cri.go:76] found id: "313c5cc700f902d29c6df674481483d8ad0a71de3269d40fd5a7b5a302c836d4"
I0507 22:38:45.553340 634245 cri.go:76] found id: ""
I0507 22:38:45.553347 634245 logs.go:270] 1 containers: [313c5cc700f902d29c6df674481483d8ad0a71de3269d40fd5a7b5a302c836d4]
I0507 22:38:45.553387 634245 ssh_runner.go:149] Run: which crictl
I0507 22:38:45.556316 634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0507 22:38:45.556371 634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0507 22:38:45.583749 634245 cri.go:76] found id: ""
I0507 22:38:45.583773 634245 logs.go:270] 0 containers: []
W0507 22:38:45.583780 634245 logs.go:272] No container was found matching "kubernetes-dashboard"
I0507 22:38:45.583789 634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0507 22:38:45.583841 634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0507 22:38:45.605479 634245 cri.go:76] found id: "d14ceb5681dd7e442815398cbe80a86fe32219b787da4ebf8c47f3a0244338e9"
I0507 22:38:45.605498 634245 cri.go:76] found id: ""
I0507 22:38:45.605505 634245 logs.go:270] 1 containers: [d14ceb5681dd7e442815398cbe80a86fe32219b787da4ebf8c47f3a0244338e9]
I0507 22:38:45.605554 634245 ssh_runner.go:149] Run: which crictl
I0507 22:38:45.608302 634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0507 22:38:45.608359 634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0507 22:38:45.628727 634245 cri.go:76] found id: "16a5d9bfbb01a5686500f5141b314de652e264211dcab6ff9f3fb68ba1c45984"
I0507 22:38:45.628746 634245 cri.go:76] found id: ""
I0507 22:38:45.628752 634245 logs.go:270] 1 containers: [16a5d9bfbb01a5686500f5141b314de652e264211dcab6ff9f3fb68ba1c45984]
I0507 22:38:45.628787 634245 ssh_runner.go:149] Run: which crictl
I0507 22:38:45.631382 634245 logs.go:123] Gathering logs for kube-controller-manager [16a5d9bfbb01a5686500f5141b314de652e264211dcab6ff9f3fb68ba1c45984] ...
I0507 22:38:45.631406 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16a5d9bfbb01a5686500f5141b314de652e264211dcab6ff9f3fb68ba1c45984"
I0507 22:38:45.668533 634245 logs.go:123] Gathering logs for containerd ...
I0507 22:38:45.668560 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0507 22:38:45.711740 634245 logs.go:123] Gathering logs for container status ...
I0507 22:38:45.711773 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0507 22:38:45.738122 634245 logs.go:123] Gathering logs for dmesg ...
I0507 22:38:45.738153 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0507 22:38:45.764468 634245 logs.go:123] Gathering logs for describe nodes ...
I0507 22:38:45.764498 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0507 22:38:45.865790 634245 logs.go:123] Gathering logs for kube-apiserver [a6bffe1f7c2d33d79a1f6208aa698f88c8090c7cb95f32586e1c3e451131814a] ...
I0507 22:38:45.865820 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6bffe1f7c2d33d79a1f6208aa698f88c8090c7cb95f32586e1c3e451131814a"
I0507 22:38:45.925696 634245 logs.go:123] Gathering logs for kube-scheduler [469df8196853fb8b2b194b47e8ce03139b5ed29809e63646d656cef725705dce] ...
I0507 22:38:45.925733 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 469df8196853fb8b2b194b47e8ce03139b5ed29809e63646d656cef725705dce"
I0507 22:38:45.969456 634245 logs.go:123] Gathering logs for kubelet ...
I0507 22:38:45.969492 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0507 22:38:46.063943 634245 logs.go:123] Gathering logs for etcd [65b0f048ab917acd8b7defae076f2030e6aa99137e9face4eb533dda95cc20cb] ...
I0507 22:38:46.063988 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65b0f048ab917acd8b7defae076f2030e6aa99137e9face4eb533dda95cc20cb"
I0507 22:38:46.105268 634245 logs.go:123] Gathering logs for kube-proxy [313c5cc700f902d29c6df674481483d8ad0a71de3269d40fd5a7b5a302c836d4] ...
I0507 22:38:46.105295 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 313c5cc700f902d29c6df674481483d8ad0a71de3269d40fd5a7b5a302c836d4"
I0507 22:38:46.131406 634245 logs.go:123] Gathering logs for storage-provisioner [d14ceb5681dd7e442815398cbe80a86fe32219b787da4ebf8c47f3a0244338e9] ...
I0507 22:38:46.131437 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d14ceb5681dd7e442815398cbe80a86fe32219b787da4ebf8c47f3a0244338e9"
I0507 22:38:48.659814 634245 api_server.go:223] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0507 22:38:48.666075 634245 api_server.go:249] https://192.168.67.2:8443/healthz returned 200: ok
I0507 22:38:48.666934 634245 api_server.go:139] control plane version: v1.20.2
I0507 22:38:48.666956 634245 api_server.go:129] duration metric: took 3.240497832s to wait for apiserver health ...
I0507 22:38:48.666966 634245 system_pods.go:43] waiting for kube-system pods to appear ...
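The healthz probe above is a plain HTTPS GET to the apiserver expecting a 200 "ok". A sketch of that check against the address from the log; note the TLS verification is skipped here for brevity, whereas minikube verifies against the cluster CA:

// Sketch only; not minikube's implementation.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// Assumption for the sketch: skip CA verification.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}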
I0507 22:38:48.666994 634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0507 22:38:48.667057 634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0507 22:38:48.692526 634245 cri.go:76] found id: "a6bffe1f7c2d33d79a1f6208aa698f88c8090c7cb95f32586e1c3e451131814a"
I0507 22:38:48.692552 634245 cri.go:76] found id: ""
I0507 22:38:48.692561 634245 logs.go:270] 1 containers: [a6bffe1f7c2d33d79a1f6208aa698f88c8090c7cb95f32586e1c3e451131814a]
I0507 22:38:48.692607 634245 ssh_runner.go:149] Run: which crictl
I0507 22:38:48.695470 634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0507 22:38:48.695533 634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
I0507 22:38:48.718740 634245 cri.go:76] found id: "65b0f048ab917acd8b7defae076f2030e6aa99137e9face4eb533dda95cc20cb"
I0507 22:38:48.718764 634245 cri.go:76] found id: ""
I0507 22:38:48.718773 634245 logs.go:270] 1 containers: [65b0f048ab917acd8b7defae076f2030e6aa99137e9face4eb533dda95cc20cb]
I0507 22:38:48.718807 634245 ssh_runner.go:149] Run: which crictl
I0507 22:38:48.721549 634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0507 22:38:48.721606 634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
I0507 22:38:48.743298 634245 cri.go:76] found id: ""
I0507 22:38:48.743321 634245 logs.go:270] 0 containers: []
W0507 22:38:48.743328 634245 logs.go:272] No container was found matching "coredns"
I0507 22:38:48.743341 634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0507 22:38:48.743393 634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0507 22:38:48.768180 634245 cri.go:76] found id: "469df8196853fb8b2b194b47e8ce03139b5ed29809e63646d656cef725705dce"
I0507 22:38:48.768203 634245 cri.go:76] found id: ""
I0507 22:38:48.768210 634245 logs.go:270] 1 containers: [469df8196853fb8b2b194b47e8ce03139b5ed29809e63646d656cef725705dce]
I0507 22:38:48.768265 634245 ssh_runner.go:149] Run: which crictl
I0507 22:38:48.771177 634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0507 22:38:48.771234 634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0507 22:38:48.794995 634245 cri.go:76] found id: "313c5cc700f902d29c6df674481483d8ad0a71de3269d40fd5a7b5a302c836d4"
I0507 22:38:48.795016 634245 cri.go:76] found id: ""
I0507 22:38:48.795022 634245 logs.go:270] 1 containers: [313c5cc700f902d29c6df674481483d8ad0a71de3269d40fd5a7b5a302c836d4]
I0507 22:38:48.795057 634245 ssh_runner.go:149] Run: which crictl
I0507 22:38:48.797732 634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0507 22:38:48.797779 634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0507 22:38:48.818179 634245 cri.go:76] found id: ""
I0507 22:38:48.818195 634245 logs.go:270] 0 containers: []
W0507 22:38:48.818200 634245 logs.go:272] No container was found matching "kubernetes-dashboard"
I0507 22:38:48.818207 634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0507 22:38:48.818255 634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0507 22:38:48.839185 634245 cri.go:76] found id: "d14ceb5681dd7e442815398cbe80a86fe32219b787da4ebf8c47f3a0244338e9"
I0507 22:38:48.839210 634245 cri.go:76] found id: ""
I0507 22:38:48.839217 634245 logs.go:270] 1 containers: [d14ceb5681dd7e442815398cbe80a86fe32219b787da4ebf8c47f3a0244338e9]
I0507 22:38:48.839262 634245 ssh_runner.go:149] Run: which crictl
I0507 22:38:48.841966 634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0507 22:38:48.842025 634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0507 22:38:48.863178 634245 cri.go:76] found id: "16a5d9bfbb01a5686500f5141b314de652e264211dcab6ff9f3fb68ba1c45984"
I0507 22:38:48.863197 634245 cri.go:76] found id: ""
I0507 22:38:48.863203 634245 logs.go:270] 1 containers: [16a5d9bfbb01a5686500f5141b314de652e264211dcab6ff9f3fb68ba1c45984]
I0507 22:38:48.863249 634245 ssh_runner.go:149] Run: which crictl
I0507 22:38:48.865923 634245 logs.go:123] Gathering logs for dmesg ...
I0507 22:38:48.865942 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0507 22:38:48.885774 634245 logs.go:123] Gathering logs for etcd [65b0f048ab917acd8b7defae076f2030e6aa99137e9face4eb533dda95cc20cb] ...
I0507 22:38:48.885802 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65b0f048ab917acd8b7defae076f2030e6aa99137e9face4eb533dda95cc20cb"
I0507 22:38:48.918266 634245 logs.go:123] Gathering logs for kube-scheduler [469df8196853fb8b2b194b47e8ce03139b5ed29809e63646d656cef725705dce] ...
I0507 22:38:48.918292 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 469df8196853fb8b2b194b47e8ce03139b5ed29809e63646d656cef725705dce"
I0507 22:38:48.946449 634245 logs.go:123] Gathering logs for storage-provisioner [d14ceb5681dd7e442815398cbe80a86fe32219b787da4ebf8c47f3a0244338e9] ...
I0507 22:38:48.946478 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d14ceb5681dd7e442815398cbe80a86fe32219b787da4ebf8c47f3a0244338e9"
I0507 22:38:48.969380 634245 logs.go:123] Gathering logs for kube-controller-manager [16a5d9bfbb01a5686500f5141b314de652e264211dcab6ff9f3fb68ba1c45984] ...
I0507 22:38:48.969414 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16a5d9bfbb01a5686500f5141b314de652e264211dcab6ff9f3fb68ba1c45984"
I0507 22:38:49.002782 634245 logs.go:123] Gathering logs for containerd ...
I0507 22:38:49.002813 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0507 22:38:49.047215 634245 logs.go:123] Gathering logs for container status ...
I0507 22:38:49.047248 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0507 22:38:49.072767 634245 logs.go:123] Gathering logs for kubelet ...
I0507 22:38:49.072799 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0507 22:38:49.154764 634245 logs.go:123] Gathering logs for describe nodes ...
I0507 22:38:49.154794 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0507 22:38:49.245837 634245 logs.go:123] Gathering logs for kube-apiserver [a6bffe1f7c2d33d79a1f6208aa698f88c8090c7cb95f32586e1c3e451131814a] ...
I0507 22:38:49.245867 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6bffe1f7c2d33d79a1f6208aa698f88c8090c7cb95f32586e1c3e451131814a"
I0507 22:38:49.308021 634245 logs.go:123] Gathering logs for kube-proxy [313c5cc700f902d29c6df674481483d8ad0a71de3269d40fd5a7b5a302c836d4] ...
I0507 22:38:49.308064 634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 313c5cc700f902d29c6df674481483d8ad0a71de3269d40fd5a7b5a302c836d4"
I0507 22:38:51.837825 634245 system_pods.go:59] 7 kube-system pods found
I0507 22:38:51.837862 634245 system_pods.go:61] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:38:51.837871 634245 system_pods.go:61] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
I0507 22:38:51.837879 634245 system_pods.go:61] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
I0507 22:38:51.837887 634245 system_pods.go:61] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
I0507 22:38:51.837892 634245 system_pods.go:61] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
I0507 22:38:51.837898 634245 system_pods.go:61] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
I0507 22:38:51.837903 634245 system_pods.go:61] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
I0507 22:38:51.837909 634245 system_pods.go:74] duration metric: took 3.170936057s to wait for pod list to return data ...
I0507 22:38:51.837916 634245 default_sa.go:34] waiting for default service account to be created ...
I0507 22:38:51.840456 634245 default_sa.go:45] found service account: "default"
I0507 22:38:51.840477 634245 default_sa.go:55] duration metric: took 2.550401ms for default service account to be created ...
I0507 22:38:51.840486 634245 system_pods.go:116] waiting for k8s-apps to be running ...
I0507 22:38:51.844079 634245 system_pods.go:86] 7 kube-system pods found
I0507 22:38:51.844109 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:38:51.844118 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
I0507 22:38:51.844127 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
I0507 22:38:51.844137 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
I0507 22:38:51.844146 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
I0507 22:38:51.844150 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
I0507 22:38:51.844157 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
I0507 22:38:51.844169 634245 retry.go:31] will retry after 305.063636ms: missing components: kube-dns
I0507 22:38:52.153973 634245 system_pods.go:86] 7 kube-system pods found
I0507 22:38:52.154010 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:38:52.154021 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
I0507 22:38:52.154050 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
I0507 22:38:52.154057 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
I0507 22:38:52.154063 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
I0507 22:38:52.154070 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
I0507 22:38:52.154076 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
I0507 22:38:52.154089 634245 retry.go:31] will retry after 338.212508ms: missing components: kube-dns
I0507 22:38:52.497823 634245 system_pods.go:86] 7 kube-system pods found
I0507 22:38:52.497872 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:38:52.497882 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
I0507 22:38:52.497892 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
I0507 22:38:52.497899 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
I0507 22:38:52.497914 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
I0507 22:38:52.497922 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
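The retry.go entries above show a growing, slightly irregular backoff (305ms, 338ms, 378ms, 469ms, 667ms, ...). A sketch of that cadence as a jittered backoff; the ~10% growth factor and jitter range are assumptions chosen to mirror the observed delays, not minikube's actual policy:

// Sketch only; not minikube's retry implementation.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	delay := 300 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d: missing components: kube-dns; will retry after %v\n", attempt, delay)
		time.Sleep(delay)
		jitter := time.Duration(rand.Int63n(int64(delay) / 4))
		delay = delay + delay/10 + jitter // ~10% growth plus jitter (assumed)
	}
}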
"storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:38:52.497948 634245 retry.go:31] will retry after 378.459802ms: missing components: kube-dns I0507 22:38:52.882652 634245 system_pods.go:86] 7 kube-system pods found I0507 22:38:52.882692 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:38:52.882702 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:38:52.882714 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:38:52.882723 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:38:52.882729 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:38:52.882735 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:38:52.882748 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:38:52.882761 634245 retry.go:31] will retry after 469.882201ms: missing components: kube-dns I0507 22:38:56.380031 634245 system_pods.go:86] 7 kube-system pods found I0507 22:38:56.380076 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:38:56.380085 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:38:56.380093 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:38:56.380101 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:38:56.380107 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:38:56.380118 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:38:56.380123 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:38:56.380143 634245 retry.go:31] will retry after 667.365439ms: missing components: kube-dns I0507 22:38:59.331749 634245 system_pods.go:86] 7 kube-system pods found I0507 22:38:59.734392 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:38:59.734409 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:38:59.734421 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:38:59.734429 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:38:59.734437 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:38:59.734444 634245 
system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:38:59.734450 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:38:59.734469 634245 retry.go:31] will retry after 597.243124ms: missing components: kube-dns I0507 22:39:00.337811 634245 system_pods.go:86] 7 kube-system pods found I0507 22:39:00.337852 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:39:00.337862 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:39:00.337870 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:39:00.337879 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:39:00.337885 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:39:00.337892 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:39:00.337901 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:39:00.337918 634245 retry.go:31] will retry after 789.889932ms: missing components: kube-dns I0507 22:39:01.132625 634245 system_pods.go:86] 7 kube-system pods found I0507 22:39:01.132657 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:39:01.132663 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:39:01.132670 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:39:01.132674 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:39:01.132678 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:39:01.132682 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:39:01.132687 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:39:01.132696 634245 retry.go:31] will retry after 951.868007ms: missing components: kube-dns I0507 22:39:02.090519 634245 system_pods.go:86] 7 kube-system pods found I0507 22:39:02.090553 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:39:02.090559 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:39:02.090566 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:39:02.090572 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" 
[7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:39:02.090578 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:39:02.090584 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:39:02.090590 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:39:02.090602 634245 retry.go:31] will retry after 1.341783893s: missing components: kube-dns I0507 22:39:03.437904 634245 system_pods.go:86] 7 kube-system pods found I0507 22:39:03.437942 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:39:03.437951 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:39:03.437960 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:39:03.437967 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:39:03.437975 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:39:03.437983 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:39:03.437990 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:39:03.438006 634245 retry.go:31] will retry after 1.876813009s: missing components: kube-dns I0507 22:39:05.320060 634245 system_pods.go:86] 7 kube-system pods found I0507 22:39:05.320094 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:39:05.320103 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:39:05.320109 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:39:05.320113 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:39:05.320117 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:39:05.320129 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:39:05.320133 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:39:05.320144 634245 retry.go:31] will retry after 2.6934314s: missing components: kube-dns I0507 22:39:08.018241 634245 system_pods.go:86] 7 kube-system pods found I0507 22:39:08.018273 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:39:08.018279 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:39:08.018287 634245 system_pods.go:89] 
"kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:39:08.018292 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:39:08.018296 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:39:08.018300 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:39:08.018309 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:39:08.018319 634245 retry.go:31] will retry after 2.494582248s: missing components: kube-dns I0507 22:39:10.518023 634245 system_pods.go:86] 7 kube-system pods found I0507 22:39:10.518059 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:39:10.518066 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:39:10.518072 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:39:10.518076 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:39:10.518081 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:39:10.518086 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:39:10.518092 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:39:10.518107 634245 retry.go:31] will retry after 3.420895489s: missing components: kube-dns I0507 22:39:13.945189 634245 system_pods.go:86] 7 kube-system pods found I0507 22:39:13.945235 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:39:13.945244 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:39:13.945253 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:39:13.945265 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:39:13.945286 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:39:13.945299 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:39:13.945306 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:39:13.945326 634245 retry.go:31] will retry after 4.133785681s: missing components: kube-dns I0507 22:39:18.084855 634245 system_pods.go:86] 7 kube-system pods found I0507 22:39:18.084897 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 
22:39:18.084908 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:39:18.084917 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:39:18.084925 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:39:18.084933 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:39:18.084941 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:39:18.084947 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:39:18.084962 634245 retry.go:31] will retry after 5.595921491s: missing components: kube-dns I0507 22:39:23.686289 634245 system_pods.go:86] 7 kube-system pods found I0507 22:39:23.686321 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:39:23.686329 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:39:23.686335 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:39:23.686340 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:39:23.686344 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:39:23.686348 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:39:23.686352 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:39:23.686362 634245 retry.go:31] will retry after 6.3346098s: missing components: kube-dns I0507 22:39:30.025780 634245 system_pods.go:86] 7 kube-system pods found I0507 22:39:30.025815 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:39:30.025821 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:39:30.025830 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:39:30.025835 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:39:30.025841 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:39:30.025845 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:39:30.025850 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:39:30.025863 634245 retry.go:31] will retry after 7.962971847s: missing components: kube-dns I0507 22:39:37.993000 634245 system_pods.go:86] 7 kube-system pods found I0507 22:39:37.993042 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending 
/ Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:39:37.993056 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:39:37.993066 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:39:37.993072 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:39:37.993078 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:39:37.993083 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:39:37.993089 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:39:37.993103 634245 retry.go:31] will retry after 12.096349863s: missing components: kube-dns I0507 22:39:50.094308 634245 system_pods.go:86] 7 kube-system pods found I0507 22:39:50.094342 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:39:50.094349 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:39:50.094354 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:39:50.094359 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:39:50.094363 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:39:50.094367 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:39:50.094371 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:39:50.094383 634245 retry.go:31] will retry after 11.924857264s: missing components: kube-dns I0507 22:40:02.023877 634245 system_pods.go:86] 7 kube-system pods found I0507 22:40:02.023911 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:40:02.023917 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:40:02.023924 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:40:02.023928 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:40:02.023932 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:40:02.023936 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:40:02.023940 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:40:02.023952 634245 retry.go:31] will retry after 14.772791249s: missing components: kube-dns I0507 22:40:16.802625 634245 
system_pods.go:86] 7 kube-system pods found I0507 22:40:16.802657 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:40:16.802663 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:40:16.802669 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:40:16.802675 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:40:16.802679 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:40:16.802683 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:40:16.802687 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:40:16.802699 634245 retry.go:31] will retry after 20.175608267s: missing components: kube-dns I0507 22:40:37.796522 634245 system_pods.go:86] 7 kube-system pods found I0507 22:40:37.796552 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:40:37.796558 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:40:37.796565 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:40:37.796573 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:40:37.796579 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:40:37.796585 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:40:37.796598 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:40:37.796615 634245 retry.go:31] will retry after 28.062855718s: missing components: kube-dns I0507 22:41:05.865244 634245 system_pods.go:86] 7 kube-system pods found I0507 22:41:05.865283 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:41:05.865291 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:41:05.865296 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:41:05.865301 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:41:05.865306 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:41:05.865310 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:41:05.865314 634245 system_pods.go:89] "storage-provisioner" 
[edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:41:05.865327 634245 retry.go:31] will retry after 40.022161579s: missing components: kube-dns I0507 22:41:45.895251 634245 system_pods.go:86] 7 kube-system pods found I0507 22:41:45.895286 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:41:45.895294 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:41:45.895300 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:41:45.895306 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:41:45.895313 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:41:45.895319 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:41:45.895324 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:41:45.895351 634245 retry.go:31] will retry after 37.970670965s: missing components: kube-dns I0507 22:42:23.871426 634245 system_pods.go:86] 7 kube-system pods found I0507 22:42:23.871460 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:42:23.871466 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:42:23.871472 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:42:23.871476 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:42:23.871481 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:42:23.871485 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:42:23.871489 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:42:23.871513 634245 retry.go:31] will retry after 47.568379235s: missing components: kube-dns I0507 22:43:11.445319 634245 system_pods.go:86] 7 kube-system pods found I0507 22:43:11.445357 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:43:11.445365 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:43:11.445371 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:43:11.445376 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:43:11.445380 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:43:11.445384 634245 system_pods.go:89] 
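The retry.go:31 lines above show the poll being re-armed with a roughly exponential, jittered delay (305ms, 338ms, 378ms, ... up past a minute) until the 5m wait budget is exhausted. A stdlib-only sketch of that shape of loop (the helper, delays, and jitter here are illustrative, not minikube's actual backoff implementation):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo re-runs f with a jittered, roughly doubling delay until it
// succeeds or the overall timeout elapses, logging each retry the way
// the retry.go:31 lines above do.
func retryExpo(f func() error, initial, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := initial
	for {
		err := f()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		delay *= 2 // the logged intervals grow more gently, but the idea is the same
	}
}

func main() {
	attempts := 0
	_ = retryExpo(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	}, 300*time.Millisecond, 5*time.Minute)
}
```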
"kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:43:11.445388 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:43:11.445411 634245 retry.go:31] will retry after 1m7.577191067s: missing components: kube-dns I0507 22:44:19.027342 634245 system_pods.go:86] 7 kube-system pods found I0507 22:44:19.027380 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:44:19.027389 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:44:19.027395 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:44:19.027400 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:44:19.027404 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:44:19.027408 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:44:19.027412 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:44:19.030342 634245 out.go:170] W0507 22:44:19.030464 634245 out.go:235] X Exiting due to GUEST_START: wait 5m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns X Exiting due to GUEST_START: wait 5m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns W0507 22:44:19.030480 634245 out.go:424] no arguments passed for "* \n" - returning raw string W0507 22:44:19.030488 634245 out.go:235] * * W0507 22:44:19.030504 634245 out.go:424] no arguments passed for "* If the above advice does not help, please let us know:\n" - returning raw string W0507 22:44:19.030511 634245 out.go:424] no arguments passed for " https://github.com/kubernetes/minikube/issues/new/choose\n\n" - returning raw string W0507 22:44:19.030516 634245 out.go:424] no arguments passed for "* Please attach the following file to the GitHub issue:\n" - returning raw string W0507 22:44:19.030577 634245 out.go:424] no arguments passed for "* If the above advice does not help, please let us know:\n https://github.com/kubernetes/minikube/issues/new/choose\n\n* Please attach the following file to the GitHub issue:\n* - /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/logs/lastStart.txt\n\n" - returning raw string W0507 22:44:19.032358 634245 out.go:235] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ W0507 22:44:19.032373 634245 out.go:235] │ │ │ │ W0507 22:44:19.032378 634245 out.go:235] │ * If the above advice does not help, please let us know: │ │ * If the above advice does not help, please let us know: │ W0507 22:44:19.032383 634245 out.go:235] │ https://github.com/kubernetes/minikube/issues/new/choose │ │ https://github.com/kubernetes/minikube/issues/new/choose │ W0507 22:44:19.032389 
634245 out.go:235] │ │ │ │ W0507 22:44:19.032394 634245 out.go:235] │ * Please attach the following file to the GitHub issue: │ │ * Please attach the following file to the GitHub issue: │ W0507 22:44:19.032399 634245 out.go:235] │ * - /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/logs/lastStart.txt │ │ * - /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/logs/lastStart.txt │ W0507 22:44:19.032408 634245 out.go:235] │ │ │ │ W0507 22:44:19.032412 634245 out.go:235] ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ W0507 22:44:19.032420 634245 out.go:235] I0507 22:44:19.034301 634245 out.go:170] ** /stderr ** net_test.go:85: failed start: exit status 80 === CONT TestNetworkPlugins/group/false net_test.go:192: "false" test finished in 23m45.013776162s, failed=true net_test.go:193: *** TestNetworkPlugins/group/false FAILED at 2021-05-07 22:44:19.071617386 +0000 UTC m=+3292.945849258 helpers_test.go:218: -----------------------post-mortem-------------------------------- helpers_test.go:226: ======> post-mortem[TestNetworkPlugins/group/false]: docker inspect <====== helpers_test.go:227: (dbg) Run: docker inspect false-20210507223341-391940 helpers_test.go:231: (dbg) docker inspect false-20210507223341-391940: -- stdout -- [ { "Id": "2e6a5d95c7793ea545cea301fe14c39d497010e77738f0ff2e032599ccd6890a", "Created": "2021-05-07T22:33:43.178719984Z", "Path": "/usr/local/bin/entrypoint", "Args": [ "/sbin/init" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 634921, "ExitCode": 0, "Error": "", "StartedAt": "2021-05-07T22:33:43.671409107Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": "sha256:bcd131522525c9c3b8695a8d144be8d177bcd5614ec5397f188115d3be0bbc24", "ResolvConfPath": "/var/lib/docker/containers/2e6a5d95c7793ea545cea301fe14c39d497010e77738f0ff2e032599ccd6890a/resolv.conf", "HostnamePath": "/var/lib/docker/containers/2e6a5d95c7793ea545cea301fe14c39d497010e77738f0ff2e032599ccd6890a/hostname", "HostsPath": "/var/lib/docker/containers/2e6a5d95c7793ea545cea301fe14c39d497010e77738f0ff2e032599ccd6890a/hosts", "LogPath": "/var/lib/docker/containers/2e6a5d95c7793ea545cea301fe14c39d497010e77738f0ff2e032599ccd6890a/2e6a5d95c7793ea545cea301fe14c39d497010e77738f0ff2e032599ccd6890a-json.log", "Name": "/false-20210507223341-391940", "RestartCount": 0, "Driver": "overlay2", "Platform": "linux", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "", "ExecIDs": null, "HostConfig": { "Binds": [ "/lib/modules:/lib/modules:ro", "false-20210507223341-391940:/var" ], "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "false-20210507223341-391940", "PortBindings": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ] }, "RestartPolicy": { "Name": "no", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", 
"VolumesFrom": null, "CapAdd": null, "CapDrop": null, "Capabilities": null, "Dns": [], "DnsOptions": [], "DnsSearch": [], "ExtraHosts": null, "GroupAdd": null, "IpcMode": "private", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": true, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": [ "seccomp=unconfined", "apparmor=unconfined", "label=disable" ], "Tmpfs": { "/run": "", "/tmp": "" }, "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 0, "Memory": 0, "NanoCpus": 2000000000, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": [], "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [], "DeviceCgroupRules": null, "DeviceRequests": null, "KernelMemory": 0, "KernelMemoryTCP": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": null, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": null, "ReadonlyPaths": null }, "GraphDriver": { "Data": { "LowerDir": "/var/lib/docker/overlay2/d11e32d98d21edd0acb8054867e629e51f0c678fdc248b439ba562a6e5f49837-init/diff:/var/lib/docker/overlay2/1e5fa0ed3c3f4bec9b97cabd8aaa709f5915b54c42d527ba46e8ffa9ebcb7f9a/diff:/var/lib/docker/overlay2/00098e5ff94787f022c282488f937bf3694bcc2f80e6f324f2cb94189fadc609/diff:/var/lib/docker/overlay2/0751219afdacf9c8a75fced952b1ad013a8d5b6fbee07adc96e9f305877d0131/diff:/var/lib/docker/overlay2/4fed3d3ec94e4b275966ac815cabeee3572325ca655dcb69e8d31d2051468a10/diff:/var/lib/docker/overlay2/a78b251d86ddd3460876cbc21fef7421c2e76ba3f3198b79f3af7fe8092297f6/diff:/var/lib/docker/overlay2/f3609509e8e931753320e2da77988a3cdd78a58c167b428b96a3aa29971edb5e/diff:/var/lib/docker/overlay2/ebeb53c34330c6713e55bb0d98076f6618884e3bdcd6b888ad1965c69f65b14d/diff:/var/lib/docker/overlay2/1efdecf3c4a2226dd59cc51906581e2326beec3a6b7090c09e437b80c90794b0/diff:/var/lib/docker/overlay2/4c7309d0146fa644c2eb195cb344f6b10894237fb65248ee8391d1790ac7f765/diff:/var/lib/docker/overlay2/424a19d5d18bedf5b29c5b9ffd2c72e8c9e112f2fd414acd046bfa963d0526c7/diff:/var/lib/docker/overlay2/1846dd5e13995c56277d370ac401df36ad796851e8f2315dfab9ff02f487b8fc/diff:/var/lib/docker/overlay2/9393786bec1ad7d470bbbb5c7a94ec2131900fa0c6d2ad39b1039fc6795a2683/diff:/var/lib/docker/overlay2/708ff6a0ffe352ea29dabc0c453ebb09ccede3e24ae9f3fb51e06680ed43e597/diff:/var/lib/docker/overlay2/5a536ba767666ddc007ad059bfa077204239088ff6093831b1b5a0aff36a88ea/diff:/var/lib/docker/overlay2/1d4b0ac5e44186da0f4ee859bb5c23df30087789d88e253dfd57e0ffb21bb88c/diff:/var/lib/docker/overlay2/2b67d6a3428317a2f483420befe919fd660743c5f1494d075867507afe929344/diff:/var/lib/docker/overlay2/abef0f23a7f068f22910d10fcf3ed65c4804f84a4a9aa126a6ac79666f87ab63/diff:/var/lib/docker/overlay2/ec0c450f32e0e573b78fc8537f87456c96a10f353e8bb6e28b4cde51d4b78237/diff:/var/lib/docker/overlay2/ba3b904a6ce3d016a1ef237a88f0e5d4d3b08a8c68e6e4c808b54ffb59e19ee3/diff:/var/lib/docker/overlay2/160d3a3a918b002bb27e1f108db150483cfb4c1383ab9bea5f7d5b983af0f57f/diff:/var/lib/docker/overlay2/ed771b935b96f93ce682cdd9d22155225a918436de84fb5d56eb6214e36d7e27/diff:/var/lib/docker/overlay2/a298f74d3f51b9716985e7c6a84a4fe16a9badceeb4fbcc5847e9313a496c203/diff:/var/lib/docker/overlay2/7f4ddade1e222fcfd5747b07b270a54575ecfdbdf23dc7
2c6aa8984cb14b4f6b/diff:/var/lib/docker/overlay2/8522467e2a2b9517f0e9fe828bf20d40830fb4364323ea1b17c1ae43e68f1633/diff:/var/lib/docker/overlay2/7b8ac1e2dcffd2cd29a0fe315f23ba717abac176d21484016b19e33e1ceb3f15/diff:/var/lib/docker/overlay2/219fbaff646669aefdda08db39e5c449632d42e036ba372e6fbfd2e74d05895c/diff:/var/lib/docker/overlay2/169017ab906e8cd6c768272fbbd27db4564b7ea84520773194f7b8d1c5725ce4/diff:/var/lib/docker/overlay2/3f2355256f7a67382c67f2079a79f9a3568cd4aac75dcb8e549d040ea3e3801c/diff:/var/lib/docker/overlay2/049eedb4ea37711e06782dfa1648c66d0e215e8b8eb540da6bd9b7729e88b4c6/diff:/var/lib/docker/overlay2/685ece42c012e8b988affc555e627ea46a42003f7fb6511dc68fb9da6c515fd8/diff:/var/lib/docker/overlay2/224f8f237d1ebeb57711074d5b9338b377abc164e67d85cd8b48264062798e8a/diff:/var/lib/docker/overlay2/280191c44865a7db266046c55f36cee27c985b893bca0a97310569a5df684c8a/diff:/var/lib/docker/overlay2/2a04e90c25bcb0264edd485b59f54c8e6c28a2d0c63f696590f1876b164e0ad8/diff:/var/lib/docker/overlay2/9c5536844b05a6fcc7c6de17ba2cd59669716e44474ac06421119d86c04f197e/diff:/var/lib/docker/overlay2/0db732ad07139625742260350f06f46f9978ae313af26f4afdab09884382542c/diff:/var/lib/docker/overlay2/d7e4510c4ab4dcfcd652b63a086da8e4f53866cf61cc72dfacd6e24a7ba895ac/diff", "MergedDir": "/var/lib/docker/overlay2/d11e32d98d21edd0acb8054867e629e51f0c678fdc248b439ba562a6e5f49837/merged", "UpperDir": "/var/lib/docker/overlay2/d11e32d98d21edd0acb8054867e629e51f0c678fdc248b439ba562a6e5f49837/diff", "WorkDir": "/var/lib/docker/overlay2/d11e32d98d21edd0acb8054867e629e51f0c678fdc248b439ba562a6e5f49837/work" }, "Name": "overlay2" }, "Mounts": [ { "Type": "bind", "Source": "/lib/modules", "Destination": "/lib/modules", "Mode": "ro", "RW": false, "Propagation": "rprivate" }, { "Type": "volume", "Name": "false-20210507223341-391940", "Source": "/var/lib/docker/volumes/false-20210507223341-391940/_data", "Destination": "/var", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "" } ], "Config": { "Hostname": "false-20210507223341-391940", "Domainname": "", "User": "root", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "ExposedPorts": { "22/tcp": {}, "2376/tcp": {}, "32443/tcp": {}, "5000/tcp": {}, "8443/tcp": {} }, "Tty": true, "OpenStdin": false, "StdinOnce": false, "Env": [ "container=docker", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ], "Cmd": null, "Image": "gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e", "Volumes": null, "WorkingDir": "", "Entrypoint": [ "/usr/local/bin/entrypoint", "/sbin/init" ], "OnBuild": null, "Labels": { "created_by.minikube.sigs.k8s.io": "true", "mode.minikube.sigs.k8s.io": "false-20210507223341-391940", "name.minikube.sigs.k8s.io": "false-20210507223341-391940", "role.minikube.sigs.k8s.io": "" }, "StopSignal": "SIGRTMIN+3" }, "NetworkSettings": { "Bridge": "", "SandboxID": "19ec5b53546486f30a7bb10c4549ba797d0925faa09714e8d174ca7cc72231d1", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "33291" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "33290" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "33287" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "33289" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "33288" } ] }, "SandboxKey": "/var/run/docker/netns/19ec5b535464", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", 
"GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "false-20210507223341-391940": { "IPAMConfig": { "IPv4Address": "192.168.67.2" }, "Links": null, "Aliases": [ "2e6a5d95c779" ], "NetworkID": "dd4724a55dc02b50e40b077fd1ec01d20253a8e1582046671220604e8785bc68", "EndpointID": "9c1ff82346db5de438f0b7abf33bdf7d99f380f0d6bf7668484bb75687917427", "Gateway": "192.168.67.1", "IPAddress": "192.168.67.2", "IPPrefixLen": 24, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:c0:a8:43:02", "DriverOpts": null } } } } ] -- /stdout -- helpers_test.go:235: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p false-20210507223341-391940 -n false-20210507223341-391940 helpers_test.go:240: <<< TestNetworkPlugins/group/false FAILED: start of post-mortem logs <<< helpers_test.go:241: ======> post-mortem[TestNetworkPlugins/group/false]: minikube logs <====== helpers_test.go:243: (dbg) Run: out/minikube-linux-amd64 -p false-20210507223341-391940 logs -n 25 helpers_test.go:248: TestNetworkPlugins/group/false logs: -- stdout -- * * ==> Audit <== * |---------|--------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------| | Command | Args | Profile | User | Version | Start Time | End Time | |---------|--------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------| | unpause | -p | default-k8s-different-port-20210507222942-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:34:49 UTC | Fri, 07 May 2021 22:34:50 UTC | | | default-k8s-different-port-20210507222942-391940 | | | | | | | | --alsologtostderr -v=1 | | | | | | | delete | -p | default-k8s-different-port-20210507222942-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:34:51 UTC | Fri, 07 May 2021 22:34:54 UTC | | | default-k8s-different-port-20210507222942-391940 | | | | | | | delete | -p | default-k8s-different-port-20210507222942-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:34:54 UTC | Fri, 07 May 2021 22:34:55 UTC | | | default-k8s-different-port-20210507222942-391940 | | | | | | | start | -p auto-20210507223250-391940 | auto-20210507223250-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:32:50 UTC | Fri, 07 May 2021 22:35:18 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --driver=docker | | | | | | | | --container-runtime=containerd | | | | | | | ssh | -p auto-20210507223250-391940 | auto-20210507223250-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:35:18 UTC | Fri, 07 May 2021 22:35:18 UTC | | | pgrep -a kubelet | | | | | | | start | -p | cilium-20210507223455-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:34:55 UTC | Fri, 07 May 2021 22:37:15 UTC | | | cilium-20210507223455-391940 | | | | | | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=cilium --driver=docker | | | | | | | | --container-runtime=containerd | | | | | | | ssh | -p | cilium-20210507223455-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:37:20 UTC | Fri, 07 May 2021 22:37:21 UTC | | | cilium-20210507223455-391940 | | | | | | | | pgrep -a kubelet | | | | | | | delete | -p | cilium-20210507223455-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:37:29 UTC | Fri, 07 May 
2021 22:37:33 UTC | | | cilium-20210507223455-391940 | | | | | | | -p | auto-20210507223250-391940 | auto-20210507223250-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:38:09 UTC | Fri, 07 May 2021 22:38:10 UTC | | | logs -n 25 | | | | | | | delete | -p auto-20210507223250-391940 | auto-20210507223250-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:38:11 UTC | Fri, 07 May 2021 22:38:14 UTC | | start | -p | calico-20210507223733-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:37:33 UTC | Fri, 07 May 2021 22:39:58 UTC | | | calico-20210507223733-391940 | | | | | | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=calico --driver=docker | | | | | | | | --container-runtime=containerd | | | | | | | ssh | -p | calico-20210507223733-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:40:03 UTC | Fri, 07 May 2021 22:40:03 UTC | | | calico-20210507223733-391940 | | | | | | | | pgrep -a kubelet | | | | | | | start | -p | custom-weave-20210507223739-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:37:39 UTC | Fri, 07 May 2021 22:40:11 UTC | | | custom-weave-20210507223739-391940 | | | | | | | | --memory=2048 --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=testdata/weavenet.yaml | | | | | | | | --driver=docker | | | | | | | | --container-runtime=containerd | | | | | | | ssh | -p | custom-weave-20210507223739-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:40:11 UTC | Fri, 07 May 2021 22:40:12 UTC | | | custom-weave-20210507223739-391940 | | | | | | | | pgrep -a kubelet | | | | | | | delete | -p | calico-20210507223733-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:40:13 UTC | Fri, 07 May 2021 22:40:17 UTC | | | calico-20210507223733-391940 | | | | | | | delete | -p | custom-weave-20210507223739-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:40:20 UTC | Fri, 07 May 2021 22:40:24 UTC | | | custom-weave-20210507223739-391940 | | | | | | | start | -p | enable-default-cni-20210507223814-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:38:14 UTC | Fri, 07 May 2021 22:40:30 UTC | | | enable-default-cni-20210507223814-391940 | | | | | | | | --memory=2048 --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --enable-default-cni=true | | | | | | | | --driver=docker | | | | | | | | --container-runtime=containerd | | | | | | | ssh | -p | enable-default-cni-20210507223814-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:40:30 UTC | Fri, 07 May 2021 22:40:30 UTC | | | enable-default-cni-20210507223814-391940 | | | | | | | | pgrep -a kubelet | | | | | | | delete | -p | enable-default-cni-20210507223814-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:40:49 UTC | Fri, 07 May 2021 22:40:52 UTC | | | enable-default-cni-20210507223814-391940 | | | | | | | start | -p | kindnet-20210507224017-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:40:17 UTC | Fri, 07 May 2021 22:42:19 UTC | | | kindnet-20210507224017-391940 | | | | | | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=kindnet --driver=docker | | | | | | | | --container-runtime=containerd | | | | | | | ssh | -p | kindnet-20210507224017-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:42:24 UTC | Fri, 07 May 2021 22:42:25 UTC | | | kindnet-20210507224017-391940 | | | | | | | | pgrep -a kubelet | | | | | | | delete | -p | kindnet-20210507224017-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:42:34 UTC | Fri, 07 May 2021 22:42:37 UTC | | | 
kindnet-20210507224017-391940 | | | | | | | start | -p | bridge-20210507224024-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:40:24 UTC | Fri, 07 May 2021 22:43:07 UTC | | | bridge-20210507224024-391940 | | | | | | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=bridge --driver=docker | | | | | | | | --container-runtime=containerd | | | | | | | ssh | -p | bridge-20210507224024-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:43:07 UTC | Fri, 07 May 2021 22:43:07 UTC | | | bridge-20210507224024-391940 | | | | | | | | pgrep -a kubelet | | | | | | | delete | -p | bridge-20210507224024-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:43:16 UTC | Fri, 07 May 2021 22:43:19 UTC | | | bridge-20210507224024-391940 | | | | | | |---------|--------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------| * * ==> Last Start <== * Log file created at: 2021/05/07 22:40:52 Running on machine: debian-jenkins-agent-11 Binary: Built with gc go1.16.1 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0507 22:40:52.878518 672811 out.go:291] Setting OutFile to fd 1 ... I0507 22:40:52.878673 672811 out.go:338] TERM=,COLORTERM=, which probably does not support color I0507 22:40:52.878682 672811 out.go:304] Setting ErrFile to fd 2... I0507 22:40:52.878685 672811 out.go:338] TERM=,COLORTERM=, which probably does not support color I0507 22:40:52.878775 672811 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/bin I0507 22:40:52.879029 672811 out.go:298] Setting JSON to false I0507 22:40:52.914708 672811 start.go:108] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":12020,"bootTime":1620415232,"procs":350,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"} I0507 22:40:52.914791 672811 start.go:118] virtualization: kvm guest I0507 22:40:52.917552 672811 out.go:170] * [kubenet-20210507224052-391940] minikube v1.20.0 on Debian 9.13 (kvm/amd64) I0507 22:40:52.919004 672811 out.go:170] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig I0507 22:40:52.920381 672811 out.go:170] - MINIKUBE_BIN=out/minikube-linux-amd64 I0507 22:40:52.921826 672811 out.go:170] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube I0507 22:40:52.923176 672811 out.go:170] - MINIKUBE_LOCATION=master I0507 22:40:52.923813 672811 driver.go:322] Setting default libvirt URI to qemu:///system I0507 22:40:52.971346 672811 docker.go:119] docker version: linux-19.03.15 I0507 22:40:52.971454 672811 cli_runner.go:115] Run: docker system info --format "{{json .}}" I0507 22:40:53.057850 672811 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null 
I0507 22:40:53.057850 672811 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:true NGoroutines:79 SystemTime:2021-05-07 22:40:53.008217117 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}}
I0507 22:40:53.057941 672811 docker.go:225] overlay module found
I0507 22:40:53.060186 672811 out.go:170] * Using the docker driver based on user configuration
I0507 22:40:53.060214 672811 start.go:276] selected driver: docker
I0507 22:40:53.060222 672811 start.go:718] validating driver "docker" against <nil>
I0507 22:40:53.060244 672811 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
W0507 22:40:53.060288 672811 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0507 22:40:53.060303 672811 out.go:424] no arguments passed for "! Your cgroup does not allow setting memory.\n" - returning raw string
W0507 22:40:53.060323 672811 out.go:235] ! Your cgroup does not allow setting memory.
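Both minikube processes in this log interrogate the daemon with `docker system info --format "{{json .}}"` and derive warnings such as the cgroup message above from its fields. A sketch of decoding just the fields quoted in the dump (the struct and warning condition here are illustrative; minikube's real check in oci.go is more involved):

```go
package example

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo keeps only a few of the fields visible in the info.go:261
// dump above; the JSON keys match Docker's output, the struct is ours.
type dockerInfo struct {
	MemoryLimit  bool
	SwapLimit    bool
	CgroupDriver string
	NCPU         int
	MemTotal     int64
}

func queryDockerInfo() (*dockerInfo, error) {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		return nil, err
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		return nil, err
	}
	// Assumption: warn when either limit capability is missing; minikube's
	// actual memory-cgroup detection differs.
	if !info.MemoryLimit || !info.SwapLimit {
		fmt.Println("! Your cgroup does not allow setting memory.")
	}
	return &info, nil
}
```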
W0507 22:40:53.060334 672811 out.go:424] no arguments passed for " - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities\n" - returning raw string I0507 22:40:53.061888 672811 out.go:170] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0507 22:40:53.062981 672811 cli_runner.go:115] Run: docker system info --format "{{json .}}" I0507 22:40:53.161855 672811 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:true NGoroutines:79 SystemTime:2021-05-07 22:40:53.100417898 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}} I0507 22:40:53.162014 672811 start_flags.go:259] no existing cluster config was found, will generate one from the flags I0507 22:40:53.162237 672811 start_flags.go:733] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] I0507 22:40:53.162265 672811 cni.go:89] network plugin configured as "kubenet", returning disabled I0507 22:40:53.162274 672811 start_flags.go:273] config: {Name:kubenet-20210507224052-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false 
HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:kubenet-20210507224052-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} I0507 22:40:53.165187 672811 out.go:170] * Starting control plane node kubenet-20210507224052-391940 in cluster kubenet-20210507224052-391940 I0507 22:40:53.165236 672811 cache.go:111] Beginning downloading kic base image for docker with containerd W0507 22:40:53.165246 672811 out.go:424] no arguments passed for "* Pulling base image ...\n" - returning raw string W0507 22:40:53.165261 672811 out.go:424] no arguments passed for "* Pulling base image ...\n" - returning raw string I0507 22:40:53.166925 672811 out.go:170] * Pulling base image ... I0507 22:40:53.166966 672811 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd I0507 22:40:53.167001 672811 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4 I0507 22:40:53.167016 672811 cache.go:54] Caching tarball of preloaded images I0507 22:40:53.167026 672811 image.go:116] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory I0507 22:40:53.167043 672811 preload.go:132] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download I0507 22:40:53.167054 672811 cache.go:57] Finished verifying existence of preloaded tar for v1.20.2 on containerd I0507 22:40:53.167059 672811 image.go:119] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory, skipping pull I0507 22:40:53.167071 672811 cache.go:131] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in cache, skipping pull I0507 22:40:53.167104 672811 image.go:130] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon I0507 22:40:53.167176 672811 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/config.json ... 
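The preload.go lines above are why the "Pulling base image ..." step returns almost instantly: before downloading anything, minikube stats the preloaded-images tarball in its cache directory and skips the fetch when it is present. A sketch of that existence check, assuming the cache layout seen in the log (the helper function itself is made up):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // preloadPath mirrors the cache layout visible in the log lines above.
    func preloadPath(miniHome, k8sVersion, runtime string) string {
        name := fmt.Sprintf("preloaded-images-k8s-v10-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
        return filepath.Join(miniHome, "cache", "preloaded-tarball", name)
    }

    func main() {
        p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.20.2", "containerd")
        if _, err := os.Stat(p); err == nil {
            fmt.Println("Found local preload, skipping download:", p)
        } else {
            fmt.Println("No cached preload, would download:", p)
        }
    }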
I0507 22:40:53.167206 672811 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/config.json: {Name:mk6f7d3b17ed614f6ce609cdf1a5d1f675228263 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:40:53.247777 672811 image.go:134] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon, skipping pull I0507 22:40:53.247803 672811 cache.go:155] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in daemon, skipping pull I0507 22:40:53.247832 672811 cache.go:194] Successfully downloaded all kic artifacts I0507 22:40:53.247867 672811 start.go:313] acquiring machines lock for kubenet-20210507224052-391940: {Name:mk343db27c7581f71b72b6b890cfa139aa788b8d Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0507 22:40:53.247996 672811 start.go:317] acquired machines lock for "kubenet-20210507224052-391940" in 107.964µs I0507 22:40:53.248026 672811 start.go:89] Provisioning new machine with config: &{Name:kubenet-20210507224052-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:kubenet-20210507224052-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true} I0507 22:40:53.248124 672811 start.go:126] createHost starting for "" (driver="docker") I0507 22:40:52.510485 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:53.011326 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:53.510551 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:54.010720 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default 
--kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:54.510431 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:55.011045 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:55.510393 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:56.010646 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:56.511146 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:57.010497 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:53.250851 672811 out.go:197] * Creating docker container (CPUs=2, Memory=2048MB) ... I0507 22:40:53.251111 672811 start.go:160] libmachine.API.Create for "kubenet-20210507224052-391940" (driver="docker") I0507 22:40:53.251145 672811 client.go:168] LocalClient.Create starting I0507 22:40:53.251244 672811 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem I0507 22:40:53.251275 672811 main.go:128] libmachine: Decoding PEM data... I0507 22:40:53.251311 672811 main.go:128] libmachine: Parsing certificate... I0507 22:40:53.251453 672811 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem I0507 22:40:53.251479 672811 main.go:128] libmachine: Decoding PEM data... I0507 22:40:53.251496 672811 main.go:128] libmachine: Parsing certificate... I0507 22:40:53.251894 672811 cli_runner.go:115] Run: docker network inspect kubenet-20210507224052-391940 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0507 22:40:53.291644 672811 cli_runner.go:162] docker network inspect kubenet-20210507224052-391940 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0507 22:40:53.291722 672811 network_create.go:249] running [docker network inspect kubenet-20210507224052-391940] to gather additional debugging logs... 
I0507 22:40:53.291743 672811 cli_runner.go:115] Run: docker network inspect kubenet-20210507224052-391940
W0507 22:40:53.341497 672811 cli_runner.go:162] docker network inspect kubenet-20210507224052-391940 returned with exit code 1
I0507 22:40:53.341550 672811 network_create.go:252] error running [docker network inspect kubenet-20210507224052-391940]: docker network inspect kubenet-20210507224052-391940: exit status 1
stdout:
[]

stderr:
Error: No such network: kubenet-20210507224052-391940
I0507 22:40:53.341581 672811 network_create.go:254] output of [docker network inspect kubenet-20210507224052-391940]: -- stdout --
[]

-- /stdout --
** stderr **
Error: No such network: kubenet-20210507224052-391940

** /stderr **
I0507 22:40:53.342256 672811 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0507 22:40:53.385054 672811 network.go:215] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-b7a55e9e83b1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:be:99:f6:89}}
I0507 22:40:53.386400 672811 network.go:263] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc000374028] misses:0}
I0507 22:40:53.386443 672811 network.go:210] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0507 22:40:53.386463 672811 network_create.go:100] attempt to create docker network kubenet-20210507224052-391940 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
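The subnet hunt above (192.168.49.0/24 taken, 192.168.58.0/24 chosen) walks candidate private /24 blocks and rejects any that collide with an address already bound on a host interface. A rough stand-in, assuming the step between candidates matches the 49-to-58 jump in the log; the real picker in network.go also handles the time-boxed reservation shown above:

    package main

    import (
        "fmt"
        "net"
    )

    // taken reports whether any host interface already sits inside subnet.
    func taken(subnet *net.IPNet) bool {
        addrs, _ := net.InterfaceAddrs()
        for _, a := range addrs {
            if ipn, ok := a.(*net.IPNet); ok && subnet.Contains(ipn.IP) {
                return true
            }
        }
        return false
    }

    func main() {
        for third := 49; third <= 247; third += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", third)
            _, subnet, _ := net.ParseCIDR(cidr)
            if taken(subnet) {
                fmt.Println("skipping subnet", cidr, "that is taken")
                continue
            }
            fmt.Println("using free private subnet", cidr)
            return
        }
    }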
I0507 22:40:53.386518 672811 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20210507224052-391940 I0507 22:40:53.469239 672811 network_create.go:84] docker network kubenet-20210507224052-391940 192.168.58.0/24 created I0507 22:40:53.469289 672811 kic.go:106] calculated static IP "192.168.58.2" for the "kubenet-20210507224052-391940" container I0507 22:40:53.469371 672811 cli_runner.go:115] Run: docker ps -a --format {{.Names}} I0507 22:40:53.510838 672811 cli_runner.go:115] Run: docker volume create kubenet-20210507224052-391940 --label name.minikube.sigs.k8s.io=kubenet-20210507224052-391940 --label created_by.minikube.sigs.k8s.io=true I0507 22:40:53.559162 672811 oci.go:102] Successfully created a docker volume kubenet-20210507224052-391940 I0507 22:40:53.559286 672811 cli_runner.go:115] Run: docker run --rm --name kubenet-20210507224052-391940-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20210507224052-391940 --entrypoint /usr/bin/test -v kubenet-20210507224052-391940:/var gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -d /var/lib I0507 22:40:54.328995 672811 oci.go:106] Successfully prepared a docker volume kubenet-20210507224052-391940 W0507 22:40:54.329069 672811 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted. W0507 22:40:54.329079 672811 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. I0507 22:40:54.329130 672811 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'" I0507 22:40:54.329143 672811 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd I0507 22:40:54.329178 672811 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4 I0507 22:40:54.329192 672811 kic.go:179] Starting extracting preloaded images to volume ... 
I0507 22:40:54.329240 672811 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20210507224052-391940:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir I0507 22:40:54.427070 672811 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-20210507224052-391940 --name kubenet-20210507224052-391940 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20210507224052-391940 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-20210507224052-391940 --network kubenet-20210507224052-391940 --ip 192.168.58.2 --volume kubenet-20210507224052-391940:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e I0507 22:40:55.043077 672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Running}} I0507 22:40:55.107025 672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Status}} I0507 22:40:55.165720 672811 cli_runner.go:115] Run: docker exec kubenet-20210507224052-391940 stat /var/lib/dpkg/alternatives/iptables I0507 22:40:55.317730 672811 oci.go:278] the created container "kubenet-20210507224052-391940" has a running status. I0507 22:40:55.317785 672811 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa... I0507 22:40:55.465459 672811 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0507 22:40:55.874845 672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Status}} I0507 22:40:55.926608 672811 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0507 22:40:55.926628 672811 kic_runner.go:115] Args: [docker exec --privileged kubenet-20210507224052-391940 chown docker:docker /home/docker/.ssh/authorized_keys] W0507 22:41:01.540230 668555 out.go:424] no arguments passed for " - Generating certificates and keys ..." - returning raw string W0507 22:41:01.540268 668555 out.go:424] no arguments passed for " - Generating certificates and keys ..." - returning raw string I0507 22:41:01.541677 668555 out.go:197] - Generating certificates and keys ... W0507 22:41:01.543231 668555 out.go:424] no arguments passed for " - Booting up control plane ..." - returning raw string W0507 22:41:01.543251 668555 out.go:424] no arguments passed for " - Booting up control plane ..." - returning raw string I0507 22:41:01.544669 668555 out.go:197] - Booting up control plane ... 
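The throwaway container above is how the preload lands in the cluster's named volume: tar runs inside the kicbase image with the lz4 tarball mounted read-only and the volume mounted at /extractDir. The equivalent invocation from Go, with the same flags and mounts as logged (the wrapper function is hypothetical):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload reproduces the logged "docker run --rm --entrypoint
    // /usr/bin/tar" trick: unpack the tarball directly into a named volume.
    func extractPreload(tarball, volume, baseImage string) error {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            baseImage,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("extract: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := extractPreload("preloaded-images.tar.lz4", "kubenet-20210507224052-391940", "gcr.io/k8s-minikube/kicbase:v0.0.22"); err != nil {
            fmt.Println(err)
        }
    }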
W0507 22:41:01.545900 668555 out.go:424] no arguments passed for " - Configuring RBAC rules ..." - returning raw string W0507 22:41:01.545925 668555 out.go:424] no arguments passed for " - Configuring RBAC rules ..." - returning raw string I0507 22:41:01.547547 668555 out.go:197] - Configuring RBAC rules ... I0507 22:41:01.550103 668555 cni.go:93] Creating CNI manager for "bridge" I0507 22:40:57.511204 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:58.010592 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:58.510979 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:59.010657 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:59.511181 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:00.010935 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:00.510644 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:01.010844 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:01.510822 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:02.010401 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:59.038214 672811 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20210507224052-391940:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir: (4.708855042s) I0507 22:40:59.038245 672811 kic.go:188] duration metric: took 4.709051 seconds to extract preloaded images to volume I0507 22:40:59.038321 672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Status}} I0507 22:40:59.081058 672811 machine.go:88] provisioning docker machine ... 
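The long runs of "kubectl get sa default" (pid 666230) are a readiness poll: after kubeadm finishes, minikube repeatedly checks for the "default" ServiceAccount, at roughly 500ms intervals, before treating RBAC setup such as the minikube-rbac binding as durable. A hedged sketch of that loop; the real code runs the command over SSH on the node, and the timeout here is a guess:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // budget is an assumption
        for time.Now().Before(deadline) {
            err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.20.2/kubectl",
                "get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
            if err == nil {
                fmt.Println("default service account exists")
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
        }
        fmt.Println("timed out waiting for the default service account")
    }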
I0507 22:40:59.081096 672811 ubuntu.go:169] provisioning hostname "kubenet-20210507224052-391940"
I0507 22:40:59.081153 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940
I0507 22:40:59.119701 672811 main.go:128] libmachine: Using SSH client type: native
I0507 22:40:59.119896 672811 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802720] 0x8026e0 [] 0s} 127.0.0.1 33326 }
I0507 22:40:59.119916 672811 main.go:128] libmachine: About to run SSH command:
sudo hostname kubenet-20210507224052-391940 && echo "kubenet-20210507224052-391940" | sudo tee /etc/hostname
I0507 22:40:59.251144 672811 main.go:128] libmachine: SSH cmd err, output: : kubenet-20210507224052-391940
I0507 22:40:59.251212 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940
I0507 22:40:59.290133 672811 main.go:128] libmachine: Using SSH client type: native
I0507 22:40:59.290316 672811 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802720] 0x8026e0 [] 0s} 127.0.0.1 33326 }
I0507 22:40:59.290356 672811 main.go:128] libmachine: About to run SSH command:

  if ! grep -xq '.*\skubenet-20210507224052-391940' /etc/hosts; then
    if grep -xq '127.0.1.1\s.*' /etc/hosts; then
      sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-20210507224052-391940/g' /etc/hosts;
    else
      echo '127.0.1.1 kubenet-20210507224052-391940' | sudo tee -a /etc/hosts;
    fi
  fi

I0507 22:40:59.403817 672811 main.go:128] libmachine: SSH cmd err, output: :
I0507 22:40:59.403851 672811 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube}
I0507 22:40:59.403874 672811 ubuntu.go:177] setting up certificates
I0507 22:40:59.403887 672811 provision.go:83] configureAuth start
I0507 22:40:59.403966 672811 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20210507224052-391940
I0507 22:40:59.447361 672811 provision.go:137] copyHostCerts
I0507 22:40:59.447423 672811 exec_runner.go:145] found
/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.pem, removing ... I0507 22:40:59.447435 672811 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.pem I0507 22:40:59.447489 672811 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.pem (1078 bytes) I0507 22:40:59.447657 672811 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cert.pem, removing ... I0507 22:40:59.447677 672811 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cert.pem I0507 22:40:59.447707 672811 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cert.pem (1123 bytes) I0507 22:40:59.447795 672811 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/key.pem, removing ... I0507 22:40:59.447805 672811 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/key.pem I0507 22:40:59.447843 672811 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/key.pem (1675 bytes) I0507 22:40:59.447895 672811 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca-key.pem org=jenkins.kubenet-20210507224052-391940 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube kubenet-20210507224052-391940] I0507 22:40:59.852941 672811 provision.go:165] copyRemoteCerts I0507 22:40:59.853012 672811 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0507 22:40:59.853074 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940 I0507 22:40:59.896021 672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker} I0507 22:40:59.978856 672811 ssh_runner.go:316] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes) I0507 22:40:59.995226 672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes) I0507 22:41:00.011913 672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I0507 22:41:00.027622 672811 provision.go:86] duration metric: configureAuth took 623.719966ms I0507 22:41:00.027644 672811 ubuntu.go:193] setting minikube options for container-runtime I0507 22:41:00.027808 672811 machine.go:91] provisioned docker machine in 946.729843ms I0507 22:41:00.027821 672811 client.go:171] LocalClient.Create took 6.776670216s I0507 22:41:00.027841 672811 start.go:168] duration metric: libmachine.API.Create for "kubenet-20210507224052-391940" took 6.776727752s I0507 22:41:00.027849 672811 start.go:267] post-start starting for "kubenet-20210507224052-391940" (driver="docker") I0507 22:41:00.027855 672811 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0507 22:41:00.027897 672811 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0507 22:41:00.027946 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940 I0507 22:41:00.075235 672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker} I0507 22:41:00.166719 672811 ssh_runner.go:149] Run: cat /etc/os-release I0507 22:41:00.169362 672811 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0507 22:41:00.169391 672811 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0507 22:41:00.169407 672811 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0507 22:41:00.169419 672811 info.go:137] Remote host: Ubuntu 20.04.2 LTS I0507 22:41:00.169433 672811 filesync.go:118] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/addons for local assets ... I0507 22:41:00.169503 672811 filesync.go:118] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/files for local assets ... 
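The configureAuth step above generates a server certificate whose SANs cover the container IP, localhost, and the machine name, signed by the CA under .minikube/certs. The sketch below shows only the x509 mechanics, self-signing for brevity; key size, validity, and subject fields are assumptions, not minikube's exact parameters:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // 2048-bit RSA and one-year validity are placeholder choices.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.kubenet-20210507224052-391940"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs mirror the san=[...] list logged by provision.go:111.
            IPAddresses: []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
            DNSNames:    []string{"localhost", "minikube", "kubenet-20210507224052-391940"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }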
I0507 22:41:00.169623 672811 start.go:270] post-start completed in 141.767397ms I0507 22:41:00.169915 672811 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20210507224052-391940 I0507 22:41:00.210576 672811 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/config.json ... I0507 22:41:00.210783 672811 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0507 22:41:00.210835 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940 I0507 22:41:00.247709 672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker} I0507 22:41:00.327660 672811 start.go:129] duration metric: createHost completed in 7.07952255s I0507 22:41:00.327686 672811 start.go:80] releasing machines lock for "kubenet-20210507224052-391940", held for 7.079675771s I0507 22:41:00.327754 672811 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20210507224052-391940 I0507 22:41:00.367166 672811 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/ I0507 22:41:00.367177 672811 ssh_runner.go:149] Run: systemctl --version I0507 22:41:00.367228 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940 I0507 22:41:00.367248 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940 I0507 22:41:00.408143 672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker} I0507 22:41:00.408527 672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker} I0507 22:41:00.487269 672811 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio I0507 22:41:00.537919 672811 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker I0507 22:41:00.547201 672811 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket I0507 22:41:00.564793 672811 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service I0507 22:41:00.574599 672811 ssh_runner.go:149] Run: sudo systemctl disable docker.socket I0507 22:41:00.638969 672811 ssh_runner.go:149] Run: sudo systemctl mask docker.service I0507 22:41:00.698972 672811 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker I0507 22:41:00.709630 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock image-endpoint: unix:///run/containerd/containerd.sock " | sudo tee /etc/crictl.yaml" I0507 22:41:00.723315 672811 ssh_runner.go:149] 
Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKCltncnBjXQogIGFkZHJlc3MgPSAiL3J1bi9jb250YWluZXJkL2NvbnRhaW5lcmQuc29jayIKICB1aWQgPSAwCiAgZ2lkID0gMAogIG1heF9yZWN2X21lc3NhZ2Vfc2l6ZSA9IDE2Nzc3MjE2CiAgbWF4X3NlbmRfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKCltkZWJ1Z10KICBhZGRyZXNzID0gIiIKICB1aWQgPSAwCiAgZ2lkID0gMAogIGxldmVsID0gIiIKClttZXRyaWNzXQogIGFkZHJlc3MgPSAiIgogIGdycGNfaGlzdG9ncmFtID0gZmFsc2UKCltjZ3JvdXBdCiAgcGF0aCA9ICIiCgpbcGx1Z2luc10KICBbcGx1Z2lucy5jZ3JvdXBzXQogICAgbm9fcHJvbWV0aGV1cyA9IGZhbHNlCiAgW3BsdWdpbnMuY3JpXQogICAgc3RyZWFtX3NlcnZlcl9hZGRyZXNzID0gIiIKICAgIHN0cmVhbV9zZXJ2ZXJfcG9ydCA9ICIxMDAxMCIKICAgIGVuYWJsZV9zZWxpbnV4ID0gZmFsc2UKICAgIHNhbmRib3hfaW1hZ2UgPSAiazhzLmdjci5pby9wYXVzZTozLjIiCiAgICBzdGF0c19jb2xsZWN0X3BlcmlvZCA9IDEwCiAgICBzeXN0ZW1kX2Nncm91cCA9IGZhbHNlCiAgICBlbmFibGVfdGxzX3N0cmVhbWluZyA9IGZhbHNlCiAgICBtYXhfY29udGFpbmVyX2xvZ19saW5lX3NpemUgPSAxNjM4NAogICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmRdCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgbm9fcGl2b3QgPSB0cnVlCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW50aW1lLnYxLmxpbnV4IgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMubGludXhdCiAgICBzaGltID0gImNvbnRhaW5lcmQtc2hpbSIKICAgIHJ1bnRpbWUgPSAicnVuYyIKICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICBub19zaGltID0gZmFsc2UKICAgIHNoaW1fZGVidWcgPSBmYWxzZQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml" I0507 22:41:00.737455 672811 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables I0507 22:41:00.744876 672811 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. 
error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:

stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0507 22:41:00.744933 672811 ssh_runner.go:149] Run: sudo modprobe br_netfilter
I0507 22:41:00.753834 672811 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0507 22:41:00.761420 672811 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0507 22:41:00.827226 672811 ssh_runner.go:149] Run: sudo systemctl restart containerd
I0507 22:41:00.892592 672811 start.go:368] Will wait 60s for socket path /run/containerd/containerd.sock
I0507 22:41:00.892666 672811 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
I0507 22:41:00.896809 672811 start.go:393] Will wait 60s for crictl version
I0507 22:41:00.896869 672811 ssh_runner.go:149] Run: sudo crictl version
I0507 22:41:00.922312 672811 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
stdout:

stderr:
time="2021-05-07T22:41:00Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
I0507 22:41:01.551679 668555 out.go:170] * Configuring bridge CNI (Container Networking Interface) ...
I0507 22:41:01.551743 668555 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
I0507 22:41:01.559637 668555 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
I0507 22:41:01.574185 668555 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0507 22:41:01.574279 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:01.574292 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.20.0 minikube.k8s.io/commit=c31bd57f93d45726e4bd30607374f8c720e70e95 minikube.k8s.io/name=bridge-20210507224024-391940 minikube.k8s.io/updated_at=2021_05_07T22_41_01_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:01.648795 668555 ops.go:34] apiserver oom_adj: -16
I0507 22:41:01.648816 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:02.464408 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:02.964413 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:03.464807 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:03.964629 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:02.510806 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:03.010976 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:04.084323 666230 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.073303817s)
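The long base64 blob piped through "base64 -d | sudo tee /etc/containerd/config.toml" a few lines up is minikube's containerd configuration; it decodes to ordinary TOML beginning root = "/var/lib/containerd". Encoding the file client-side sidesteps shell quoting across the SSH hop. The same idiom in Go (the helper name is invented):

    package main

    import (
        "encoding/base64"
        "fmt"
        "path/filepath"
    )

    // writeFileCmd builds a "printf | base64 -d | sudo tee" pipeline like the
    // one in the log, so arbitrary file contents survive the remote shell.
    func writeFileCmd(path, contents string) string {
        b64 := base64.StdEncoding.EncodeToString([]byte(contents))
        return fmt.Sprintf("sudo mkdir -p %s && printf %%s %q | base64 -d | sudo tee %s",
            filepath.Dir(path), b64, path)
    }

    func main() {
        fmt.Println(writeFileCmd("/etc/containerd/config.toml", "root = \"/var/lib/containerd\"\n"))
    }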
I0507 22:41:04.510723 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:05.011061 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:05.510498 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:05.650019 666230 kubeadm.go:977] duration metric: took 15.318226491s to wait for elevateKubeSystemPrivileges. I0507 22:41:05.650056 666230 kubeadm.go:383] StartCluster complete in 39.827577068s I0507 22:41:05.650087 666230 settings.go:142] acquiring lock: {Name:mkbc12d45ea1a96167acb2e3885011008220fc1e Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:41:05.650199 666230 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig I0507 22:41:05.652403 666230 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig: {Name:mk53c460e0a047a0806c95f27e36717b9bf9f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:41:06.169003 666230 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kindnet-20210507224017-391940" rescaled to 1 I0507 22:41:06.169047 666230 start.go:201] Will wait 5m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true} W0507 22:41:06.169072 666230 out.go:424] no arguments passed for "* Verifying Kubernetes components...\n" - returning raw string W0507 22:41:06.169089 666230 out.go:424] no arguments passed for "* Verifying Kubernetes components...\n" - returning raw string I0507 22:41:06.171052 666230 out.go:170] * Verifying Kubernetes components... I0507 22:41:06.169091 666230 addons.go:328] enableAddons start: toEnable=map[], additional=[] I0507 22:41:06.171170 666230 addons.go:55] Setting storage-provisioner=true in profile "kindnet-20210507224017-391940" I0507 22:41:06.171203 666230 addons.go:131] Setting addon storage-provisioner=true in "kindnet-20210507224017-391940" I0507 22:41:06.169369 666230 cache.go:108] acquiring lock: {Name:mk66f3ed174a0fda2e3a4fd9a235ceef9553bc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:} W0507 22:41:06.171217 666230 addons.go:140] addon storage-provisioner should already be in state true I0507 22:41:06.171236 666230 host.go:66] Checking if "kindnet-20210507224017-391940" exists ... 
I0507 22:41:06.171243 666230 addons.go:55] Setting default-storageclass=true in profile "kindnet-20210507224017-391940" I0507 22:41:06.171257 666230 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-20210507224017-391940" I0507 22:41:06.171107 666230 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet I0507 22:41:06.171307 666230 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 exists I0507 22:41:06.171446 666230 cache.go:97] cache image "minikube-local-cache-test:functional-20210507215728-391940" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940" took 2.079162ms I0507 22:41:06.171484 666230 cache.go:81] save to tar file minikube-local-cache-test:functional-20210507215728-391940 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 succeeded I0507 22:41:06.171529 666230 cache.go:88] Successfully saved all images to host disk. I0507 22:41:06.171641 666230 cli_runner.go:115] Run: docker container inspect kindnet-20210507224017-391940 --format={{.State.Status}} I0507 22:41:06.171838 666230 cli_runner.go:115] Run: docker container inspect kindnet-20210507224017-391940 --format={{.State.Status}} I0507 22:41:06.171967 666230 cli_runner.go:115] Run: docker container inspect kindnet-20210507224017-391940 --format={{.State.Status}} I0507 22:41:06.186678 666230 node_ready.go:35] waiting up to 5m0s for node "kindnet-20210507224017-391940" to be "Ready" ... 
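The node_ready.go line above starts a poll on the freshly created node until its Ready condition turns True, with a five-minute budget. An illustrative version against client-go; this is not minikube's code, and it reads the default kubeconfig rather than going over SSH:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        deadline := time.Now().Add(5 * time.Minute)
        for time.Now().Before(deadline) {
            n, err := cs.CoreV1().Nodes().Get(context.TODO(), "kindnet-20210507224017-391940", metav1.GetOptions{})
            if err == nil {
                for _, c := range n.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            fmt.Println(`node has status "Ready":"False"`)
            time.Sleep(3 * time.Second)
        }
        fmt.Println("timed out waiting for Ready")
    }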
I0507 22:41:05.865244 634245 system_pods.go:86] 7 kube-system pods found I0507 22:41:05.865283 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:41:05.865291 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:41:05.865296 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:41:05.865301 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:41:05.865306 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:41:05.865310 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:41:05.865314 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:41:05.865327 634245 retry.go:31] will retry after 40.022161579s: missing components: kube-dns I0507 22:41:06.223381 666230 out.go:170] - Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0507 22:41:06.223643 666230 addons.go:261] installing /etc/kubernetes/addons/storage-provisioner.yaml I0507 22:41:06.223663 666230 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0507 22:41:06.223745 666230 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210507224017-391940 I0507 22:41:06.229977 666230 ssh_runner.go:149] Run: sudo crictl images --output json I0507 22:41:06.230023 666230 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210507224017-391940 I0507 22:41:06.235459 666230 addons.go:131] Setting addon default-storageclass=true in "kindnet-20210507224017-391940" W0507 22:41:06.235494 666230 addons.go:140] addon default-storageclass should already be in state true I0507 22:41:06.235598 666230 host.go:66] Checking if "kindnet-20210507224017-391940" exists ... 
I0507 22:41:06.236177 666230 cli_runner.go:115] Run: docker container inspect kindnet-20210507224017-391940 --format={{.State.Status}} I0507 22:41:06.277133 666230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33316 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kindnet-20210507224017-391940/id_rsa Username:docker} I0507 22:41:06.278233 666230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33316 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kindnet-20210507224017-391940/id_rsa Username:docker} I0507 22:41:06.285800 666230 addons.go:261] installing /etc/kubernetes/addons/storageclass.yaml I0507 22:41:06.285827 666230 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0507 22:41:06.285885 666230 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210507224017-391940 I0507 22:41:06.325091 666230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33316 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kindnet-20210507224017-391940/id_rsa Username:docker} I0507 22:41:06.368756 666230 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0507 22:41:06.380026 666230 containerd.go:567] couldn't find preloaded image for "docker.io/minikube-local-cache-test:functional-20210507215728-391940". assuming images are not preloaded. 
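Addon enablement above is deliberately plain: each manifest is scp'd from memory to /etc/kubernetes/addons and applied with the node's own kubectl binary against the local kubeconfig. A hypothetical one-call equivalent of the logged apply:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyAddon mirrors the logged "sudo KUBECONFIG=... kubectl apply -f"
    // invocation; sudo accepts the VAR=value pair before the command.
    func applyAddon(manifest string) error {
        return exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.20.2/kubectl", "apply", "-f", manifest).Run()
    }

    func main() {
        if err := applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
            fmt.Println(err)
        }
    }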
I0507 22:41:06.380052 666230 cache_images.go:78] LoadImages start: [minikube-local-cache-test:functional-20210507215728-391940] I0507 22:41:06.380113 666230 image.go:320] retrieving image: minikube-local-cache-test:functional-20210507215728-391940 I0507 22:41:06.380159 666230 image.go:326] checking repository: index.docker.io/library/minikube-local-cache-test I0507 22:41:06.436838 666230 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml W0507 22:41:06.616626 666230 image.go:333] remote: HEAD https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: unexpected status code 401 Unauthorized (HEAD responses have no body, use GET for details) I0507 22:41:06.616662 666230 image.go:334] short name: minikube-local-cache-test:functional-20210507215728-391940 I0507 22:41:06.617454 666230 image.go:362] daemon lookup for minikube-local-cache-test:functional-20210507215728-391940: Error response from daemon: reference does not exist I0507 22:41:06.680519 666230 out.go:170] * Enabled addons: storage-provisioner, default-storageclass I0507 22:41:06.680546 666230 addons.go:330] enableAddons completed in 511.473083ms W0507 22:41:06.761518 666230 image.go:372] authn lookup for minikube-local-cache-test:functional-20210507215728-391940 (trying anon): GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] I0507 22:41:06.906593 666230 image.go:376] remote lookup for minikube-local-cache-test:functional-20210507215728-391940: GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] I0507 22:41:06.906638 666230 image.go:98] error retrieve Image minikube-local-cache-test:functional-20210507215728-391940 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] I0507 22:41:06.906666 666230 cache_images.go:106] "minikube-local-cache-test:functional-20210507215728-391940" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210507215728-391940 I0507 22:41:06.906687 666230 cache_images.go:271] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 I0507 22:41:06.906785 666230 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940 I0507 22:41:06.909940 666230 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940': No such file or directory I0507 22:41:06.909963 666230 ssh_runner.go:316] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940 (5120 bytes) I0507 22:41:06.927085 666230 containerd.go:267] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940 I0507 22:41:06.927131 666230 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940 I0507 22:41:07.049626 666230 cache_images.go:293] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 from cache I0507 22:41:07.049657 666230 cache_images.go:113] Successfully loaded all cached images I0507 22:41:07.049665 666230 cache_images.go:82] LoadImages completed in 669.60307ms I0507 22:41:07.049676 666230 cache_images.go:252] succeeded pushing to: kindnet-20210507224017-391940 I0507 22:41:07.049680 666230 cache_images.go:253] failed pushing to: I0507 22:41:04.464364 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:04.964435 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:05.464578 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:05.964851 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:06.464494 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:06.964340 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:07.464110 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:07.964700 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:08.464459 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:08.964649 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:11.971610 672811 ssh_runner.go:149] Run: sudo crictl version I0507 22:41:12.042782 672811 start.go:402] Version: 0.1.0 RuntimeName: containerd RuntimeVersion: 1.4.4 RuntimeApiVersion: v1alpha2 I0507 22:41:12.042850 672811 ssh_runner.go:149] Run: containerd --version I0507 22:41:08.193527 666230 node_ready.go:58] node "kindnet-20210507224017-391940" has status "Ready":"False" I0507 22:41:10.195106 666230 node_ready.go:58] node "kindnet-20210507224017-391940" has status "Ready":"False" I0507 22:41:12.066863 672811 out.go:170] * Preparing Kubernetes v1.20.2 on containerd 1.4.4 ... 
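When an image is missing from the preload (here the minikube-local-cache-test lookup fails against both Docker Hub and the local daemon), cache_images.go falls back to shipping the cached tarball to the node and importing it into containerd's k8s.io namespace, which is what the "ctr -n=k8s.io images import" run above does. A minimal sketch of that step (paths from the log, wrapper invented):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func loadImage(tar string) error {
        // containerd keeps Kubernetes images in the "k8s.io" namespace,
        // which is why a plain `ctr images ls` would not show them.
        out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tar).CombinedOutput()
        if err != nil {
            return fmt.Errorf("ctr import: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := loadImage("/var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940"); err != nil {
            fmt.Println(err)
        }
    }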
I0507 22:41:12.066969 672811 cli_runner.go:115] Run: docker network inspect kubenet-20210507224052-391940 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0507 22:41:12.105280 672811 ssh_runner.go:149] Run: grep 192.168.58.1 host.minikube.internal$ /etc/hosts I0507 22:41:12.108647 672811 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0507 22:41:12.117548 672811 localpath.go:92] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/client.crt -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/client.crt I0507 22:41:12.117660 672811 localpath.go:117] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/client.key -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/client.key I0507 22:41:12.117779 672811 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd I0507 22:41:12.117805 672811 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4 I0507 22:41:12.117839 672811 ssh_runner.go:149] Run: sudo crictl images --output json I0507 22:41:12.139675 672811 containerd.go:571] all images are preloaded for containerd runtime. I0507 22:41:12.139694 672811 containerd.go:481] Images already preloaded, skipping extraction I0507 22:41:12.139737 672811 ssh_runner.go:149] Run: sudo crictl images --output json I0507 22:41:12.160780 672811 containerd.go:571] all images are preloaded for containerd runtime. 
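The /etc/hosts edit above uses a filter-and-replace idiom: strip any stale host.minikube.internal line, append the fresh mapping, stage the result under /tmp, then sudo cp it into place so the file is never left half-written. A sketch of building that pipeline (the helper name is made up):

    package main

    import "fmt"

    func hostsUpdateCmd(ip, host string) string {
        return fmt.Sprintf(`{ grep -v $'\t%s$' "/etc/hosts"; echo "%s %s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`,
            host, ip, host)
    }

    func main() {
        fmt.Println(hostsUpdateCmd("192.168.58.1", "host.minikube.internal"))
    }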
I0507 22:41:12.160799 672811 cache_images.go:74] Images are preloaded, skipping loading
I0507 22:41:12.160836 672811 ssh_runner.go:149] Run: sudo crictl info
I0507 22:41:12.181806 672811 cni.go:89] network plugin configured as "kubenet", returning disabled
I0507 22:41:12.181827 672811 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0507 22:41:12.181838 672811 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-20210507224052-391940 NodeName:kubenet-20210507224052-391940 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0507 22:41:12.181948 672811 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.58.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "kubenet-20210507224052-391940"
  kubeletExtraArgs:
    node-ip: 192.168.58.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
I0507 22:41:12.182024 672811 kubeadm.go:901] kubelet
[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubenet-20210507224052-391940 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=kubenet --node-ip=192.168.58.2 --pod-cidr=10.244.0.0/16 --runtime-request-timeout=15m

[Install]
 config: {KubernetesVersion:v1.20.2 ClusterName:kubenet-20210507224052-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0507 22:41:12.182065 672811 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
I0507 22:41:12.190005 672811 binaries.go:44] Found k8s binaries, skipping transfer
I0507 22:41:12.190053 672811 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0507 22:41:12.196524 672811 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (572 bytes)
I0507 22:41:12.208112 672811 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0507 22:41:12.219787 672811 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1868 bytes)
I0507 22:41:12.234762 672811 ssh_runner.go:149] Run: grep 192.168.58.2 control-plane.minikube.internal$ /etc/hosts
I0507 22:41:12.238162 672811 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0507 22:41:12.247659 672811 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940 for IP: 192.168.58.2
I0507 22:41:12.247732 672811 certs.go:171] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.key
I0507 22:41:12.247761 672811 certs.go:171] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/proxy-client-ca.key
I0507 22:41:12.247864 672811 certs.go:282] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/client.key
I0507 22:41:12.247917 672811 certs.go:286] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.key.cee25041
I0507 22:41:12.247934 672811 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0507 22:41:12.324253 672811 crypto.go:157] Writing cert to
/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.crt.cee25041 ...
I0507 22:41:12.324281 672811 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.crt.cee25041: {Name:mk17a9fadc289bdd993cd89cf73f7e42a11db951 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0507 22:41:12.324441 672811 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.key.cee25041 ...
I0507 22:41:12.324457 672811 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.key.cee25041: {Name:mk4f1b00ef492dfe1e4e53295535dd818e4b8776 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0507 22:41:12.324556 672811 certs.go:297] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.crt
I0507 22:41:12.324624 672811 certs.go:301] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.key
I0507 22:41:12.324690 672811 certs.go:286] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.key
I0507 22:41:12.324704 672811 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.crt with IP's: []
I0507 22:41:12.462717 672811 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.crt ...
I0507 22:41:12.462741 672811 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.crt: {Name:mk3b377543768468ecb5ae6c2ac7692fea50fd9a Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0507 22:41:12.462892 672811 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.key ...
I0507 22:41:12.462906 672811 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.key: {Name:mkfe92c524b556c20012d8a91c085ac4bc69ff7a Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0507 22:41:12.463104 672811 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/391940.pem (1338 bytes)
W0507 22:41:12.463147 672811 certs.go:357] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/391940_empty.pem, impossibly tiny 0 bytes
I0507 22:41:12.463164 672811 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca-key.pem (1679 bytes)
I0507 22:41:12.463201 672811 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem (1078 bytes)
I0507 22:41:12.463240 672811 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem (1123 bytes)
I0507 22:41:12.463276 672811 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/key.pem (1675 bytes)
I0507 22:41:12.464251 672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0507 22:41:12.481245 672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0507 22:41:12.549535 672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0507 22:41:12.567323 672811 ssh_runner.go:316] scp
/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0507 22:41:12.586572 672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0507 22:41:12.605164 672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0507 22:41:12.622859 672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0507 22:41:12.639720 672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0507 22:41:12.659044 672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/391940.pem --> /usr/share/ca-certificates/391940.pem (1338 bytes)
I0507 22:41:12.677161 672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0507 22:41:12.693007 672811 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0507 22:41:12.704857 672811 ssh_runner.go:149] Run: openssl version
I0507 22:41:12.709921 672811 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0507 22:41:12.717584 672811 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0507 22:41:12.720534 672811 certs.go:402] hashing: -rw-r--r-- 1 root root 1111 May 7 21:50 /usr/share/ca-certificates/minikubeCA.pem
I0507 22:41:12.720581 672811 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0507 22:41:12.725167 672811 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0507 22:41:12.731804 672811 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391940.pem && ln -fs /usr/share/ca-certificates/391940.pem /etc/ssl/certs/391940.pem"
I0507 22:41:12.738661 672811 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/391940.pem
I0507 22:41:12.741622 672811 certs.go:402] hashing: -rw-r--r-- 1 root root 1338 May 7 21:57 /usr/share/ca-certificates/391940.pem
I0507 22:41:12.741658 672811 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391940.pem
I0507 22:41:12.746205 672811 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391940.pem /etc/ssl/certs/51391683.0"
I0507 22:41:12.752891 672811 kubeadm.go:381] StartCluster: {Name:kubenet-20210507224052-391940 KeepContext:false EmbedCerts:false MinikubeISO:
KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:kubenet-20210507224052-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0507 22:41:12.752980 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0507 22:41:12.753082 672811 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0507 22:41:12.775624 672811 cri.go:76] found id: ""
I0507 22:41:12.775678 672811 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0507 22:41:12.781880 672811 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0507 22:41:12.788117 672811 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0507 22:41:12.788153 672811 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0507 22:41:12.794718 672811 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0507 22:41:12.794764 672811 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml
--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0507 22:41:09.464030 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:09.963853 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:10.463804 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:10.964279 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:11.463864 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:11.964595 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:12.464551 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:12.964405 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:13.464692 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:13.964375 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:12.695319 666230 node_ready.go:58] node "kindnet-20210507224017-391940" has status "Ready":"False"
I0507 22:41:15.194523 666230 node_ready.go:58] node "kindnet-20210507224017-391940" has status "Ready":"False"
I0507 22:41:15.694690 666230 node_ready.go:49] node "kindnet-20210507224017-391940" has status "Ready":"True"
I0507 22:41:15.694718 666230 node_ready.go:38] duration metric: took 9.507990643s waiting for node "kindnet-20210507224017-391940" to be "Ready" ...
I0507 22:41:15.694731 666230 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0507 22:41:15.704478 666230 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-z2xcz" in "kube-system" namespace to be "Ready" ...
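The kubeadm init invocation above disables the preflight checks that cannot pass inside a Docker container (swap is on, ports and directories are pre-populated by the kicbase image). A sketch of assembling that command from Go, with an abbreviated ignore list taken from the flags above (illustrative, not minikube's bootstrapper code):

package main

import (
	"os/exec"
	"strings"
)

// kubeadmInit runs kubeadm with the rendered config, skipping the host
// checks that a containerized "node" is guaranteed to fail.
func kubeadmInit() error {
	ignores := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"DirAvailable--var-lib-minikube",
		"Port-10250",
		"Swap",
		"Mem",
		"SystemVerification",
	}
	cmd := "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init" +
		" --config /var/tmp/minikube/kubeadm.yaml" +
		" --ignore-preflight-errors=" + strings.Join(ignores, ",")
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

func main() { _ = kubeadmInit() }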
I0507 22:41:14.464494 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:14.964316 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:15.464752 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:15.964324 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:16.464135 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:16.964715 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:17.057780 668555 kubeadm.go:977] duration metric: took 15.483566448s to wait for elevateKubeSystemPrivileges.
I0507 22:41:17.057810 668555 kubeadm.go:383] StartCluster complete in 32.505316012s
I0507 22:41:17.057831 668555 settings.go:142] acquiring lock: {Name:mkbc12d45ea1a96167acb2e3885011008220fc1e Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0507 22:41:17.057916 668555 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig
I0507 22:41:17.059650 668555 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig: {Name:mk53c460e0a047a0806c95f27e36717b9bf9f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0507 22:41:17.576617 668555 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "bridge-20210507224024-391940" rescaled to 1
I0507 22:41:17.576672 668555 start.go:201] Will wait 5m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
W0507 22:41:17.576706 668555 out.go:424] no arguments passed for "* Verifying Kubernetes components...\n" - returning raw string
W0507 22:41:17.576725 668555 out.go:424] no arguments passed for "* Verifying Kubernetes components...\n" - returning raw string
I0507 22:41:17.579110 668555 out.go:170] * Verifying Kubernetes components...
I0507 22:41:17.576756 668555 addons.go:328] enableAddons start: toEnable=map[], additional=[]
I0507 22:41:17.579179 668555 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0507 22:41:17.579196 668555 addons.go:55] Setting storage-provisioner=true in profile "bridge-20210507224024-391940"
I0507 22:41:17.579213 668555 addons.go:131] Setting addon storage-provisioner=true in "bridge-20210507224024-391940"
I0507 22:41:17.576939 668555 cache.go:108] acquiring lock: {Name:mk66f3ed174a0fda2e3a4fd9a235ceef9553bc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0507 22:41:17.579238 668555 addons.go:55] Setting default-storageclass=true in profile "bridge-20210507224024-391940"
W0507 22:41:17.579269 668555 addons.go:140] addon storage-provisioner should already be in state true
I0507 22:41:17.579270 668555 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-20210507224024-391940"
I0507 22:41:17.579289 668555 host.go:66] Checking if "bridge-20210507224024-391940" exists ...
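The half-second cadence of the get-sa entries is a wait loop: kubeadm finishes before the token controller has created the "default" ServiceAccount, so elevateKubeSystemPrivileges polls until the get succeeds. A minimal sketch of that loop under the same assumption (paths from the log; not minikube's exact code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls "kubectl get sa default" every 500ms until it
// exits zero, i.e. until the ServiceAccount exists and RBAC work can start.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig).Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default ServiceAccount not ready after %s", timeout)
}

func main() {
	_ = waitForDefaultSA("/var/lib/minikube/binaries/v1.20.2/kubectl", "/var/lib/minikube/kubeconfig", time.Minute)
}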
I0507 22:41:17.579309 668555 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 exists
I0507 22:41:17.579331 668555 cache.go:97] cache image "minikube-local-cache-test:functional-20210507215728-391940" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940" took 2.400306ms
I0507 22:41:17.579346 668555 cache.go:81] save to tar file minikube-local-cache-test:functional-20210507215728-391940 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 succeeded
I0507 22:41:17.579357 668555 cache.go:88] Successfully saved all images to host disk.
I0507 22:41:17.579696 668555 cli_runner.go:115] Run: docker container inspect bridge-20210507224024-391940 --format={{.State.Status}}
I0507 22:41:17.579814 668555 cli_runner.go:115] Run: docker container inspect bridge-20210507224024-391940 --format={{.State.Status}}
I0507 22:41:17.579898 668555 cli_runner.go:115] Run: docker container inspect bridge-20210507224024-391940 --format={{.State.Status}}
I0507 22:41:17.600318 668555 node_ready.go:35] waiting up to 5m0s for node "bridge-20210507224024-391940" to be "Ready" ...
I0507 22:41:17.608388 668555 node_ready.go:49] node "bridge-20210507224024-391940" has status "Ready":"True"
I0507 22:41:17.608416 668555 node_ready.go:38] duration metric: took 8.056898ms waiting for node "bridge-20210507224024-391940" to be "Ready" ...
I0507 22:41:17.608430 668555 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0507 22:41:17.638420 668555 ssh_runner.go:149] Run: sudo crictl images --output json
I0507 22:41:17.638467 668555 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210507224024-391940
I0507 22:41:17.638881 668555 addons.go:131] Setting addon default-storageclass=true in "bridge-20210507224024-391940"
W0507 22:41:17.638901 668555 addons.go:140] addon default-storageclass should already be in state true
I0507 22:41:17.638916 668555 host.go:66] Checking if "bridge-20210507224024-391940" exists ...
I0507 22:41:17.639424 668555 cli_runner.go:115] Run: docker container inspect bridge-20210507224024-391940 --format={{.State.Status}}
I0507 22:41:17.640971 668555 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace to be "Ready" ...
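The node_ready entries read the node's Ready condition from its status. The same condition can be read with plain kubectl and a jsonpath filter, which is a convenient way to reproduce the check by hand (a sketch; minikube itself uses a Kubernetes client, not kubectl, for this):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// nodeReady reports whether the node's Ready condition is "True",
// matching what node_ready.go logs above.
func nodeReady(name string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "node", name, "-o",
		`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return bytes.Equal(bytes.TrimSpace(out), []byte("True")), nil
}

func main() {
	ok, err := nodeReady("bridge-20210507224024-391940")
	fmt.Println(ok, err)
}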
I0507 22:41:17.662334 668555 out.go:170] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0507 22:41:17.662472 668555 addons.go:261] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0507 22:41:17.662489 668555 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0507 22:41:17.662544 668555 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210507224024-391940
I0507 22:41:17.692641 668555 addons.go:261] installing /etc/kubernetes/addons/storageclass.yaml
I0507 22:41:17.692676 668555 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0507 22:41:17.692747 668555 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210507224024-391940
I0507 22:41:17.699422 668555 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33321 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/bridge-20210507224024-391940/id_rsa Username:docker}
I0507 22:41:17.713302 668555 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33321 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/bridge-20210507224024-391940/id_rsa Username:docker}
I0507 22:41:17.747151 668555 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33321 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/bridge-20210507224024-391940/id_rsa Username:docker}
I0507 22:41:17.803615 668555 containerd.go:567] couldn't find preloaded image for "docker.io/minikube-local-cache-test:functional-20210507215728-391940". assuming images are not preloaded.
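Addon enablement here is just "write manifest, kubectl apply": each YAML is scp'd into /etc/kubernetes/addons over the container's forwarded SSH port, then applied with the cluster's own kubectl binary, as the Run entries below show. A sketch of the apply step (paths from the log; the helper name is hypothetical):

package main

import "os/exec"

// applyAddon applies an already-transferred addon manifest using the
// in-cluster kubectl and kubeconfig, via sudo's VAR=value env syntax.
func applyAddon(manifest string) error {
	return exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.20.2/kubectl", "apply", "-f", manifest).Run()
}

func main() {
	_ = applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml")
	_ = applyAddon("/etc/kubernetes/addons/storageclass.yaml")
}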
I0507 22:41:17.803642 668555 cache_images.go:78] LoadImages start: [minikube-local-cache-test:functional-20210507215728-391940]
I0507 22:41:17.803698 668555 image.go:320] retrieving image: minikube-local-cache-test:functional-20210507215728-391940
I0507 22:41:17.803719 668555 image.go:326] checking repository: index.docker.io/library/minikube-local-cache-test
I0507 22:41:17.805339 668555 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0507 22:41:17.835804 668555 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
W0507 22:41:18.046229 668555 image.go:333] remote: HEAD https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: unexpected status code 401 Unauthorized (HEAD responses have no body, use GET for details)
I0507 22:41:18.046283 668555 image.go:334] short name: minikube-local-cache-test:functional-20210507215728-391940
I0507 22:41:18.047249 668555 image.go:362] daemon lookup for minikube-local-cache-test:functional-20210507215728-391940: Error response from daemon: reference does not exist
I0507 22:41:18.181662 668555 out.go:170] * Enabled addons: storage-provisioner, default-storageclass
I0507 22:41:18.181686 668555 addons.go:330] enableAddons completed in 604.943ms
W0507 22:41:18.200715 668555 image.go:372] authn lookup for minikube-local-cache-test:functional-20210507215728-391940 (trying anon): GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
I0507 22:41:18.346656 668555 image.go:376] remote lookup for minikube-local-cache-test:functional-20210507215728-391940: GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
I0507 22:41:18.346695 668555 image.go:98] error retrieve Image minikube-local-cache-test:functional-20210507215728-391940 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
I0507 22:41:18.346721 668555 cache_images.go:106] "minikube-local-cache-test:functional-20210507215728-391940" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210507215728-391940
I0507 22:41:18.346753 668555 cache_images.go:271] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940
I0507 22:41:18.346827 668555 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940
I0507 22:41:18.350194 668555 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940: Process exited with status 1
stdout:

stderr:
stat: cannot stat
'/var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940': No such file or directory
I0507 22:41:18.350225 668555 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940 (5120 bytes)
I0507 22:41:18.367355 668555 containerd.go:267] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940
I0507 22:41:18.367400 668555 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940
I0507 22:41:18.477367 668555 cache_images.go:293] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 from cache
I0507 22:41:18.477395 668555 cache_images.go:113] Successfully loaded all cached images
I0507 22:41:18.477403 668555 cache_images.go:82] LoadImages completed in 673.752742ms
I0507 22:41:18.477413 668555 cache_images.go:252] succeeded pushing to: bridge-20210507224024-391940
I0507 22:41:18.477417 668555 cache_images.go:253] failed pushing to: 
I0507 22:41:17.719681 666230 pod_ready.go:102] pod "coredns-74ff55c5b-z2xcz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-07 22:41:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime: InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
I0507 22:41:19.721425 666230 pod_ready.go:102] pod "coredns-74ff55c5b-z2xcz" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:19.658657 668555 pod_ready.go:102] pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:21.659073 668555 pod_ready.go:102] pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:23.659297 668555 pod_ready.go:102] pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:22.220967 666230 pod_ready.go:102] pod "coredns-74ff55c5b-z2xcz" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:24.221731 666230 pod_ready.go:102] pod "coredns-74ff55c5b-z2xcz" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:26.720571 666230 pod_ready.go:102] pod "coredns-74ff55c5b-z2xcz" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:25.659482 668555 pod_ready.go:102] pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:27.659817 668555 pod_ready.go:102] pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace has status "Ready":"False"
W0507 22:41:29.502582 672811 out.go:424] no arguments passed for " - Generating certificates and keys ..." - returning raw string
W0507 22:41:29.502611 672811 out.go:424] no arguments passed for " - Generating certificates and keys ..." - returning raw string
I0507 22:41:29.504051 672811 out.go:197] - Generating certificates and keys ...
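"Generating certificates and keys", like the earlier crypto.go entries, produces an apiserver certificate whose IP SANs cover the node IP, the service VIP, and loopback ([192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1] in the log). A compact sketch of signing such a certificate with Go's crypto/x509 (self-signed here for brevity; minikube signs with the minikubeCA key instead):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs matching the crypto.go:69 entry: node IP, service VIP, loopback.
		IPAddresses: []net.IP{
			net.ParseIP("192.168.58.2"),
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	_, _ = der, err
}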
W0507 22:41:29.505275 672811 out.go:424] no arguments passed for " - Booting up control plane ..." - returning raw string
W0507 22:41:29.505298 672811 out.go:424] no arguments passed for " - Booting up control plane ..." - returning raw string
I0507 22:41:29.506842 672811 out.go:197] - Booting up control plane ...
W0507 22:41:29.507828 672811 out.go:424] no arguments passed for " - Configuring RBAC rules ..." - returning raw string
W0507 22:41:29.507851 672811 out.go:424] no arguments passed for " - Configuring RBAC rules ..." - returning raw string
I0507 22:41:28.721492 666230 pod_ready.go:102] pod "coredns-74ff55c5b-z2xcz" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:29.720830 666230 pod_ready.go:92] pod "coredns-74ff55c5b-z2xcz" in "kube-system" namespace has status "Ready":"True"
I0507 22:41:29.720855 666230 pod_ready.go:81] duration metric: took 14.016352375s waiting for pod "coredns-74ff55c5b-z2xcz" in "kube-system" namespace to be "Ready" ...
I0507 22:41:29.720864 666230 pod_ready.go:78] waiting up to 5m0s for pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace to be "Ready" ...
I0507 22:41:29.509381 672811 out.go:197] - Configuring RBAC rules ...
I0507 22:41:29.511102 672811 cni.go:89] network plugin configured as "kubenet", returning disabled
I0507 22:41:29.511144 672811 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0507 22:41:29.511202 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:29.511202 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.20.0 minikube.k8s.io/commit=c31bd57f93d45726e4bd30607374f8c720e70e95 minikube.k8s.io/name=kubenet-20210507224052-391940 minikube.k8s.io/updated_at=2021_05_07T22_41_29_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:33.297935 666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:36.412887 666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:36.452032 672811 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (6.940764477s)
I0507 22:41:36.452084 672811 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.20.0 minikube.k8s.io/commit=c31bd57f93d45726e4bd30607374f8c720e70e95 minikube.k8s.io/name=kubenet-20210507224052-391940 minikube.k8s.io/updated_at=2021_05_07T22_41_29_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (6.940777019s)
I0507 22:41:36.452120 672811 ssh_runner.go:189] Completed: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": (6.9409632s)
I0507 22:41:36.452130 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:36.452135 672811 ops.go:34] apiserver oom_adj: -16
I0507 22:41:37.133448 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:37.634119 672811 ssh_runner.go:149] Run: sudo
/var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:36.415857 668555 pod_ready.go:102] pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:38.659853 668555 pod_ready.go:102] pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:38.730877 666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:41.230841 666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:38.134120 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:38.633786 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:39.133311 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:39.633524 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:40.134249 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:40.633580 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:41.133642 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:41.633685 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:42.133984 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:42.633334 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:41.158513 668555 pod_ready.go:102] pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:43.158956 668555 pod_ready.go:102] pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:45.895251 634245 system_pods.go:86] 7 kube-system pods found
I0507 22:41:45.895286 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:41:45.895294 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
I0507 22:41:45.895300 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
I0507 22:41:45.895306 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
I0507 22:41:45.895313 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
I0507 22:41:45.895319 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
I0507 22:41:45.895324 634245 system_pods.go:89]
"storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:41:45.895351 634245 retry.go:31] will retry after 37.970670965s: missing components: kube-dns I0507 22:41:43.730266 666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False" I0507 22:41:45.730693 666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False" I0507 22:41:43.133263 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:43.634078 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:44.133696 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:44.633466 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:45.133959 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:45.633643 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:46.133797 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:46.634042 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:47.133888 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:47.634155 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:45.159699 668555 pod_ready.go:102] pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace has status "Ready":"False" I0507 22:41:47.658653 668555 pod_ready.go:102] pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace has status "Ready":"False" I0507 22:41:48.133838 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:48.633584 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:49.134019 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:49.633305 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:50.133859 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:50.634269 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:51.133941 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:51.198474 672811 kubeadm.go:977] duration metric: took 21.687320394s to wait for elevateKubeSystemPrivileges. 
I0507 22:41:51.198504 672811 kubeadm.go:383] StartCluster complete in 38.445622759s
I0507 22:41:51.198526 672811 settings.go:142] acquiring lock: {Name:mkbc12d45ea1a96167acb2e3885011008220fc1e Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0507 22:41:51.198634 672811 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig
I0507 22:41:51.201538 672811 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig: {Name:mk53c460e0a047a0806c95f27e36717b9bf9f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0507 22:41:51.718321 672811 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubenet-20210507224052-391940" rescaled to 1
I0507 22:41:51.718369 672811 start.go:201] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
W0507 22:41:51.718401 672811 out.go:424] no arguments passed for "* Verifying Kubernetes components...\n" - returning raw string
W0507 22:41:51.718425 672811 out.go:424] no arguments passed for "* Verifying Kubernetes components...\n" - returning raw string
I0507 22:41:51.720457 672811 out.go:170] * Verifying Kubernetes components...
I0507 22:41:51.720524 672811 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0507 22:41:51.718471 672811 addons.go:328] enableAddons start: toEnable=map[], additional=[]
I0507 22:41:51.720595 672811 addons.go:55] Setting storage-provisioner=true in profile "kubenet-20210507224052-391940"
I0507 22:41:51.718753 672811 cache.go:108] acquiring lock: {Name:mk66f3ed174a0fda2e3a4fd9a235ceef9553bc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0507 22:41:51.720621 672811 addons.go:55] Setting default-storageclass=true in profile "kubenet-20210507224052-391940"
I0507 22:41:51.720638 672811 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubenet-20210507224052-391940"
I0507 22:41:51.720676 672811 addons.go:131] Setting addon storage-provisioner=true in "kubenet-20210507224052-391940"
W0507 22:41:51.720694 672811 addons.go:140] addon storage-provisioner should already be in state true
I0507 22:41:51.720700 672811 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 exists
I0507 22:41:51.720716 672811 host.go:66] Checking if "kubenet-20210507224052-391940" exists ...
I0507 22:41:51.720721 672811 cache.go:97] cache image "minikube-local-cache-test:functional-20210507215728-391940" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940" took 1.979419ms
I0507 22:41:51.720737 672811 cache.go:81] save to tar file minikube-local-cache-test:functional-20210507215728-391940 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 succeeded
I0507 22:41:51.720751 672811 cache.go:88] Successfully saved all images to host disk.
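The recurring {Name:... Clock:{} Delay:500ms Timeout:1m0s Cancel:} structs in the lock.go and cache.go entries describe a polled lock: try to acquire, sleep Delay between attempts, give up after Timeout. A sketch of that contract using an O_EXCL lockfile (illustrative only; minikube uses a dedicated lock library, not this code):

package main

import (
	"fmt"
	"os"
	"time"
)

// acquire polls for an exclusive lockfile next to path, sleeping delay
// between attempts and failing once timeout has elapsed.
func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	lock := path + ".lock"
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(lock) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", lock)
		}
		time.Sleep(delay)
	}
}

func main() {
	if release, err := acquire("/tmp/kubeconfig", 500*time.Millisecond, time.Minute); err == nil {
		defer release()
	}
}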
I0507 22:41:51.721038 672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Status}}
I0507 22:41:51.721675 672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Status}}
I0507 22:41:51.721703 672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Status}}
I0507 22:41:51.740835 672811 node_ready.go:35] waiting up to 5m0s for node "kubenet-20210507224052-391940" to be "Ready" ...
I0507 22:41:51.745077 672811 node_ready.go:49] node "kubenet-20210507224052-391940" has status "Ready":"True"
I0507 22:41:51.745099 672811 node_ready.go:38] duration metric: took 4.233416ms waiting for node "kubenet-20210507224052-391940" to be "Ready" ...
I0507 22:41:51.745110 672811 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0507 22:41:51.756875 672811 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace to be "Ready" ...
I0507 22:41:51.783594 672811 addons.go:131] Setting addon default-storageclass=true in "kubenet-20210507224052-391940"
W0507 22:41:51.783619 672811 addons.go:140] addon default-storageclass should already be in state true
I0507 22:41:51.783637 672811 host.go:66] Checking if "kubenet-20210507224052-391940" exists ...
I0507 22:41:51.784146 672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Status}}
I0507 22:41:51.788959 672811 ssh_runner.go:149] Run: sudo crictl images --output json
I0507 22:41:51.789007 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940
I0507 22:41:48.229607 666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:50.230260 666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:51.792078 672811 out.go:170] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0507 22:41:51.792203 672811 addons.go:261] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0507 22:41:51.792220 672811 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0507 22:41:51.792278 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940
I0507 22:41:51.832922 672811 addons.go:261] installing /etc/kubernetes/addons/storageclass.yaml
I0507 22:41:51.832950 672811 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0507 22:41:51.833006 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940
I0507 22:41:51.843625 672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker}
I0507 22:41:51.848427 672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326
SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker}
I0507 22:41:51.881629 672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker}
I0507 22:41:51.945739 672811 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0507 22:41:51.956581 672811 containerd.go:567] couldn't find preloaded image for "docker.io/minikube-local-cache-test:functional-20210507215728-391940". assuming images are not preloaded.
I0507 22:41:51.956604 672811 cache_images.go:78] LoadImages start: [minikube-local-cache-test:functional-20210507215728-391940]
I0507 22:41:51.956650 672811 image.go:320] retrieving image: minikube-local-cache-test:functional-20210507215728-391940
I0507 22:41:51.956698 672811 image.go:326] checking repository: index.docker.io/library/minikube-local-cache-test
I0507 22:41:51.972691 672811 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
W0507 22:41:52.183545 672811 image.go:333] remote: HEAD https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: unexpected status code 401 Unauthorized (HEAD responses have no body, use GET for details)
I0507 22:41:52.183604 672811 image.go:334] short name: minikube-local-cache-test:functional-20210507215728-391940
I0507 22:41:52.184655 672811 image.go:362] daemon lookup for minikube-local-cache-test:functional-20210507215728-391940: Error response from daemon: reference does not exist
W0507 22:41:52.330654 672811 image.go:372] authn lookup for minikube-local-cache-test:functional-20210507215728-391940 (trying anon): GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
I0507 22:41:52.347904 672811 out.go:170] * Enabled addons: storage-provisioner, default-storageclass
I0507 22:41:52.347937 672811 addons.go:330] enableAddons completed in 629.490714ms
I0507 22:41:52.481399 672811 image.go:376] remote lookup for minikube-local-cache-test:functional-20210507215728-391940: GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
I0507 22:41:52.481439 672811 image.go:98] error retrieve Image minikube-local-cache-test:functional-20210507215728-391940 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
I0507 22:41:52.481470 672811 cache_images.go:106] "minikube-local-cache-test:functional-20210507215728-391940" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210507215728-391940
I0507 22:41:52.481491 672811 cache_images.go:271] Loading image from:
/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 I0507 22:41:52.481574 672811 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940 I0507 22:41:52.485013 672811 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940': No such file or directory I0507 22:41:52.485041 672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940 (5120 bytes) I0507 22:41:52.502314 672811 containerd.go:267] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940 I0507 22:41:52.502378 672811 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940 I0507 22:41:52.612982 672811 cache_images.go:293] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 from cache I0507 22:41:52.613030 672811 cache_images.go:113] Successfully loaded all cached images I0507 22:41:52.613038 672811 cache_images.go:82] LoadImages completed in 656.425091ms I0507 22:41:52.613050 672811 cache_images.go:252] succeeded pushing to: kubenet-20210507224052-391940 I0507 22:41:52.613059 672811 cache_images.go:253] failed pushing to: I0507 22:41:49.658707 668555 pod_ready.go:102] pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace has status "Ready":"False" I0507 22:41:51.659079 668555 pod_ready.go:102] pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace has status "Ready":"False" I0507 22:41:53.659071 668555 pod_ready.go:92] pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace has status "Ready":"True" I0507 22:41:53.659096 668555 pod_ready.go:81] duration metric: took 36.018090678s waiting for pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace to be "Ready" ... I0507 22:41:53.659109 668555 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-wdngz" in "kube-system" namespace to be "Ready" ... 
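The LoadImages sequence above can be reproduced by hand on a containerd-backed node; a minimal sketch, assuming the tarball path taken from this run (any other profile would use its own path):

  # import the cached image tarball into containerd's k8s.io namespace,
  # which is where the kubelet/CRI looks for images
  sudo ctr -n=k8s.io images import /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940
  # confirm the CRI can now see the image
  sudo crictl images | grep minikube-local-cache-test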
I0507 22:41:52.231130  666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:54.730057  666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:56.730962  666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:53.769362  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:55.770429  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:55.669145  668555 pod_ready.go:102] pod "coredns-74ff55c5b-wdngz" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:57.669440  668555 pod_ready.go:102] pod "coredns-74ff55c5b-wdngz" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:59.230483  666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:01.732105  666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:58.269524  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:00.269630  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:02.269711  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:00.168541  668555 pod_ready.go:102] pod "coredns-74ff55c5b-wdngz" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:02.169036  668555 pod_ready.go:102] pod "coredns-74ff55c5b-wdngz" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:04.230071  666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:06.731034  666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:04.774753  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:07.270739  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:04.169120  668555 pod_ready.go:102] pod "coredns-74ff55c5b-wdngz" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:06.668156  668555 pod_ready.go:102] pod "coredns-74ff55c5b-wdngz" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:08.668465  668555 pod_ready.go:102] pod "coredns-74ff55c5b-wdngz" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:08.229893  666230 pod_ready.go:92] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"True"
I0507 22:42:08.229920  666230 pod_ready.go:81] duration metric: took 38.509048055s waiting for pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace to be "Ready" ...
I0507 22:42:08.229938  666230 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-kindnet-20210507224017-391940" in "kube-system" namespace to be "Ready" ...
I0507 22:42:08.234246  666230 pod_ready.go:92] pod "kube-apiserver-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"True"
I0507 22:42:08.234268  666230 pod_ready.go:81] duration metric: took 4.3205ms waiting for pod "kube-apiserver-kindnet-20210507224017-391940" in "kube-system" namespace to be "Ready" ...
I0507 22:42:08.234279  666230 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-kindnet-20210507224017-391940" in "kube-system" namespace to be "Ready" ...
I0507 22:42:08.237969  666230 pod_ready.go:92] pod "kube-controller-manager-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"True"
I0507 22:42:08.237985  666230 pod_ready.go:81] duration metric: took 3.697005ms waiting for pod "kube-controller-manager-kindnet-20210507224017-391940" in "kube-system" namespace to be "Ready" ...
I0507 22:42:08.237994  666230 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-gdfcx" in "kube-system" namespace to be "Ready" ...
I0507 22:42:08.241518  666230 pod_ready.go:92] pod "kube-proxy-gdfcx" in "kube-system" namespace has status "Ready":"True"
I0507 22:42:08.241532  666230 pod_ready.go:81] duration metric: took 3.532307ms waiting for pod "kube-proxy-gdfcx" in "kube-system" namespace to be "Ready" ...
I0507 22:42:08.241539  666230 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-kindnet-20210507224017-391940" in "kube-system" namespace to be "Ready" ...
I0507 22:42:10.249511  666230 pod_ready.go:92] pod "kube-scheduler-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"True"
I0507 22:42:10.249539  666230 pod_ready.go:81] duration metric: took 2.007992228s waiting for pod "kube-scheduler-kindnet-20210507224017-391940" in "kube-system" namespace to be "Ready" ...
I0507 22:42:10.249553  666230 pod_ready.go:38] duration metric: took 54.554803875s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0507 22:42:10.249590  666230 api_server.go:50] waiting for apiserver process to appear ...
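The pod_ready loop above is minikube's own poll of each system pod's Ready condition; a rough manual equivalent with kubectl (assuming the kubectl context matches the profile name, which minikube configures by default) would be:

  kubectl --context kindnet-20210507224017-391940 -n kube-system get pods
  kubectl --context kindnet-20210507224017-391940 -n kube-system \
    wait --for=condition=Ready pod -l component=kube-scheduler --timeout=5m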
I0507 22:42:10.249619  666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0507 22:42:10.249671  666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0507 22:42:10.274262  666230 cri.go:76] found id: "b45772b9a6fadede8c01a34c8caf8013fd6c303c5be1a66647b115bf220cff8f"
I0507 22:42:10.274292  666230 cri.go:76] found id: ""
I0507 22:42:10.274299  666230 logs.go:270] 1 containers: [b45772b9a6fadede8c01a34c8caf8013fd6c303c5be1a66647b115bf220cff8f]
I0507 22:42:10.274342  666230 ssh_runner.go:149] Run: which crictl
I0507 22:42:10.277437  666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0507 22:42:10.277503  666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
I0507 22:42:10.298859  666230 cri.go:76] found id: "28685b137bd0ac8b838f8bb968a2916ea470d86585b6758f1957a1bf1a4a6460"
I0507 22:42:10.298880  666230 cri.go:76] found id: ""
I0507 22:42:10.298888  666230 logs.go:270] 1 containers: [28685b137bd0ac8b838f8bb968a2916ea470d86585b6758f1957a1bf1a4a6460]
I0507 22:42:10.298941  666230 ssh_runner.go:149] Run: which crictl
I0507 22:42:10.301705  666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0507 22:42:10.301780  666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
I0507 22:42:10.322564  666230 cri.go:76] found id: "e6ca7efce382fb7e29ea2ca646e99d72bcf5f597be9c8548c36cacf707017618"
I0507 22:42:10.322584  666230 cri.go:76] found id: ""
I0507 22:42:10.322592  666230 logs.go:270] 1 containers: [e6ca7efce382fb7e29ea2ca646e99d72bcf5f597be9c8548c36cacf707017618]
I0507 22:42:10.322631  666230 ssh_runner.go:149] Run: which crictl
I0507 22:42:10.325329  666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0507 22:42:10.325371  666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0507 22:42:10.345651  666230 cri.go:76] found id: "2afafd7cebc98f4052923c5de24cbb19b630d6720460c8528aa2b62dcbf90d39"
I0507 22:42:10.345673  666230 cri.go:76] found id: ""
I0507 22:42:10.345680  666230 logs.go:270] 1 containers: [2afafd7cebc98f4052923c5de24cbb19b630d6720460c8528aa2b62dcbf90d39]
I0507 22:42:10.345712  666230 ssh_runner.go:149] Run: which crictl
I0507 22:42:10.348402  666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0507 22:42:10.348458  666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0507 22:42:10.368647  666230 cri.go:76] found id: "aafd3d52e8a99a65b993f44a0f2e0bee169d568a5b1e64c7788d6f3cca79a9f3"
I0507 22:42:10.368666  666230 cri.go:76] found id: ""
I0507 22:42:10.368671  666230 logs.go:270] 1 containers: [aafd3d52e8a99a65b993f44a0f2e0bee169d568a5b1e64c7788d6f3cca79a9f3]
I0507 22:42:10.368702  666230 ssh_runner.go:149] Run: which crictl
I0507 22:42:10.371259  666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0507 22:42:10.371312  666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0507 22:42:10.391161  666230 cri.go:76] found id: ""
I0507 22:42:10.391182  666230 logs.go:270] 0 containers: []
W0507 22:42:10.391192  666230 logs.go:272] No container was found matching "kubernetes-dashboard"
I0507 22:42:10.391199  666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0507 22:42:10.391241  666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0507 22:42:10.412101  666230 cri.go:76] found id: "840a1fdad075b9db0cd8c37a67300746e67108f3c525713ad6b60997af3577e9"
I0507 22:42:10.412122  666230 cri.go:76] found id: ""
I0507 22:42:10.412128  666230 logs.go:270] 1 containers: [840a1fdad075b9db0cd8c37a67300746e67108f3c525713ad6b60997af3577e9]
I0507 22:42:10.412163  666230 ssh_runner.go:149] Run: which crictl
I0507 22:42:10.414725  666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0507 22:42:10.414791  666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0507 22:42:10.434644  666230 cri.go:76] found id: "953367e4f9711500cf9c55a2e3a88297fe1ea571b90eb9f6878ea3dcf8e47be5"
I0507 22:42:10.434663  666230 cri.go:76] found id: ""
I0507 22:42:10.434668  666230 logs.go:270] 1 containers: [953367e4f9711500cf9c55a2e3a88297fe1ea571b90eb9f6878ea3dcf8e47be5]
I0507 22:42:10.434700  666230 ssh_runner.go:149] Run: which crictl
I0507 22:42:10.437410  666230 logs.go:123] Gathering logs for coredns [e6ca7efce382fb7e29ea2ca646e99d72bcf5f597be9c8548c36cacf707017618] ...
I0507 22:42:10.437431  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6ca7efce382fb7e29ea2ca646e99d72bcf5f597be9c8548c36cacf707017618"
I0507 22:42:10.458525  666230 logs.go:123] Gathering logs for kube-scheduler [2afafd7cebc98f4052923c5de24cbb19b630d6720460c8528aa2b62dcbf90d39] ...
I0507 22:42:10.458559  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2afafd7cebc98f4052923c5de24cbb19b630d6720460c8528aa2b62dcbf90d39"
I0507 22:42:10.481718  666230 logs.go:123] Gathering logs for storage-provisioner [840a1fdad075b9db0cd8c37a67300746e67108f3c525713ad6b60997af3577e9] ...
I0507 22:42:10.481741  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 840a1fdad075b9db0cd8c37a67300746e67108f3c525713ad6b60997af3577e9"
I0507 22:42:10.502806  666230 logs.go:123] Gathering logs for kube-controller-manager [953367e4f9711500cf9c55a2e3a88297fe1ea571b90eb9f6878ea3dcf8e47be5] ...
I0507 22:42:10.502827  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 953367e4f9711500cf9c55a2e3a88297fe1ea571b90eb9f6878ea3dcf8e47be5"
I0507 22:42:10.544428  666230 logs.go:123] Gathering logs for kubelet ...
I0507 22:42:10.544453  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0507 22:42:10.597686  666230 logs.go:123] Gathering logs for dmesg ...
I0507 22:42:10.597719  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0507 22:42:10.621630  666230 logs.go:123] Gathering logs for describe nodes ...
I0507 22:42:10.621655  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0507 22:42:10.715373  666230 logs.go:123] Gathering logs for etcd [28685b137bd0ac8b838f8bb968a2916ea470d86585b6758f1957a1bf1a4a6460] ...
I0507 22:42:10.715412  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28685b137bd0ac8b838f8bb968a2916ea470d86585b6758f1957a1bf1a4a6460"
I0507 22:42:10.744332  666230 logs.go:123] Gathering logs for containerd ...
I0507 22:42:10.744360  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0507 22:42:10.782615  666230 logs.go:123] Gathering logs for container status ...
I0507 22:42:10.782646  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0507 22:42:10.808422  666230 logs.go:123] Gathering logs for kube-apiserver [b45772b9a6fadede8c01a34c8caf8013fd6c303c5be1a66647b115bf220cff8f] ...
I0507 22:42:10.808450  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b45772b9a6fadede8c01a34c8caf8013fd6c303c5be1a66647b115bf220cff8f"
I0507 22:42:10.842939  666230 logs.go:123] Gathering logs for kube-proxy [aafd3d52e8a99a65b993f44a0f2e0bee169d568a5b1e64c7788d6f3cca79a9f3] ...
I0507 22:42:10.842968  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aafd3d52e8a99a65b993f44a0f2e0bee169d568a5b1e64c7788d6f3cca79a9f3"
I0507 22:42:09.770268  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:12.270101  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:10.668771  668555 pod_ready.go:102] pod "coredns-74ff55c5b-wdngz" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:13.169561  668555 pod_ready.go:102] pod "coredns-74ff55c5b-wdngz" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:13.366885  666230 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0507 22:42:13.388981  666230 api_server.go:70] duration metric: took 1m7.219905852s to wait for apiserver process to appear ...
I0507 22:42:13.389006  666230 api_server.go:86] waiting for apiserver healthz status ...
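Each "Gathering logs" step above shells into the node and tails the component's container log through crictl; roughly, with a placeholder container ID standing in for one of the real IDs printed above:

  sudo crictl ps -a --quiet --name=kube-apiserver   # resolve the container ID
  sudo crictl logs --tail 400 <container-id>        # the same call the harness runs
  sudo journalctl -u kubelet -n 400                 # kubelet logs come from systemd instead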
I0507 22:42:13.389035  666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0507 22:42:13.389087  666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0507 22:42:13.411483  666230 cri.go:76] found id: "b45772b9a6fadede8c01a34c8caf8013fd6c303c5be1a66647b115bf220cff8f"
I0507 22:42:13.411514  666230 cri.go:76] found id: ""
I0507 22:42:13.411526  666230 logs.go:270] 1 containers: [b45772b9a6fadede8c01a34c8caf8013fd6c303c5be1a66647b115bf220cff8f]
I0507 22:42:13.411571  666230 ssh_runner.go:149] Run: which crictl
I0507 22:42:13.414370  666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0507 22:42:13.414418  666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
I0507 22:42:13.435282  666230 cri.go:76] found id: "28685b137bd0ac8b838f8bb968a2916ea470d86585b6758f1957a1bf1a4a6460"
I0507 22:42:13.435303  666230 cri.go:76] found id: ""
I0507 22:42:13.435310  666230 logs.go:270] 1 containers: [28685b137bd0ac8b838f8bb968a2916ea470d86585b6758f1957a1bf1a4a6460]
I0507 22:42:13.435357  666230 ssh_runner.go:149] Run: which crictl
I0507 22:42:13.438094  666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0507 22:42:13.438144  666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
I0507 22:42:13.459295  666230 cri.go:76] found id: "e6ca7efce382fb7e29ea2ca646e99d72bcf5f597be9c8548c36cacf707017618"
I0507 22:42:13.459312  666230 cri.go:76] found id: ""
I0507 22:42:13.459318  666230 logs.go:270] 1 containers: [e6ca7efce382fb7e29ea2ca646e99d72bcf5f597be9c8548c36cacf707017618]
I0507 22:42:13.459351  666230 ssh_runner.go:149] Run: which crictl
I0507 22:42:13.462157  666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0507 22:42:13.462204  666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0507 22:42:13.482519  666230 cri.go:76] found id: "2afafd7cebc98f4052923c5de24cbb19b630d6720460c8528aa2b62dcbf90d39"
I0507 22:42:13.482541  666230 cri.go:76] found id: ""
I0507 22:42:13.482548  666230 logs.go:270] 1 containers: [2afafd7cebc98f4052923c5de24cbb19b630d6720460c8528aa2b62dcbf90d39]
I0507 22:42:13.482588  666230 ssh_runner.go:149] Run: which crictl
I0507 22:42:13.485169  666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0507 22:42:13.485219  666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0507 22:42:13.504984  666230 cri.go:76] found id: "aafd3d52e8a99a65b993f44a0f2e0bee169d568a5b1e64c7788d6f3cca79a9f3"
I0507 22:42:13.505005  666230 cri.go:76] found id: ""
I0507 22:42:13.505013  666230 logs.go:270] 1 containers: [aafd3d52e8a99a65b993f44a0f2e0bee169d568a5b1e64c7788d6f3cca79a9f3]
I0507 22:42:13.505051  666230 ssh_runner.go:149] Run: which crictl
I0507 22:42:13.507814  666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0507 22:42:13.507868  666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0507 22:42:13.528188  666230 cri.go:76] found id: ""
I0507 22:42:13.528205  666230 logs.go:270] 0 containers: []
W0507 22:42:13.528211  666230 logs.go:272] No container was found matching "kubernetes-dashboard"
I0507 22:42:13.528218  666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0507 22:42:13.528269  666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0507 22:42:13.548898  666230 cri.go:76] found id: "840a1fdad075b9db0cd8c37a67300746e67108f3c525713ad6b60997af3577e9"
I0507 22:42:13.548939  666230 cri.go:76] found id: ""
I0507 22:42:13.548946  666230 logs.go:270] 1 containers: [840a1fdad075b9db0cd8c37a67300746e67108f3c525713ad6b60997af3577e9]
I0507 22:42:13.548982  666230 ssh_runner.go:149] Run: which crictl
I0507 22:42:13.551706  666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0507 22:42:13.551780  666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0507 22:42:13.572473  666230 cri.go:76] found id: "953367e4f9711500cf9c55a2e3a88297fe1ea571b90eb9f6878ea3dcf8e47be5"
I0507 22:42:13.572493  666230 cri.go:76] found id: ""
I0507 22:42:13.572503  666230 logs.go:270] 1 containers: [953367e4f9711500cf9c55a2e3a88297fe1ea571b90eb9f6878ea3dcf8e47be5]
I0507 22:42:13.572538  666230 ssh_runner.go:149] Run: which crictl
I0507 22:42:13.575210  666230 logs.go:123] Gathering logs for kube-scheduler [2afafd7cebc98f4052923c5de24cbb19b630d6720460c8528aa2b62dcbf90d39] ...
I0507 22:42:13.575230  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2afafd7cebc98f4052923c5de24cbb19b630d6720460c8528aa2b62dcbf90d39"
I0507 22:42:13.599799  666230 logs.go:123] Gathering logs for describe nodes ...
I0507 22:42:13.599821  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0507 22:42:13.686130  666230 logs.go:123] Gathering logs for kube-apiserver [b45772b9a6fadede8c01a34c8caf8013fd6c303c5be1a66647b115bf220cff8f] ...
I0507 22:42:13.686156  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b45772b9a6fadede8c01a34c8caf8013fd6c303c5be1a66647b115bf220cff8f"
I0507 22:42:13.723776  666230 logs.go:123] Gathering logs for coredns [e6ca7efce382fb7e29ea2ca646e99d72bcf5f597be9c8548c36cacf707017618] ...
I0507 22:42:13.723804  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6ca7efce382fb7e29ea2ca646e99d72bcf5f597be9c8548c36cacf707017618"
I0507 22:42:13.746725  666230 logs.go:123] Gathering logs for kube-proxy [aafd3d52e8a99a65b993f44a0f2e0bee169d568a5b1e64c7788d6f3cca79a9f3] ...
I0507 22:42:13.746749  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aafd3d52e8a99a65b993f44a0f2e0bee169d568a5b1e64c7788d6f3cca79a9f3"
I0507 22:42:13.772353  666230 logs.go:123] Gathering logs for storage-provisioner [840a1fdad075b9db0cd8c37a67300746e67108f3c525713ad6b60997af3577e9] ...
I0507 22:42:13.772379  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 840a1fdad075b9db0cd8c37a67300746e67108f3c525713ad6b60997af3577e9"
I0507 22:42:13.795023  666230 logs.go:123] Gathering logs for kube-controller-manager [953367e4f9711500cf9c55a2e3a88297fe1ea571b90eb9f6878ea3dcf8e47be5] ...
I0507 22:42:13.795049  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 953367e4f9711500cf9c55a2e3a88297fe1ea571b90eb9f6878ea3dcf8e47be5"
I0507 22:42:13.836297  666230 logs.go:123] Gathering logs for containerd ...
I0507 22:42:13.836322  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0507 22:42:13.869424  666230 logs.go:123] Gathering logs for container status ...
I0507 22:42:13.869455  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0507 22:42:13.895634  666230 logs.go:123] Gathering logs for kubelet ...
I0507 22:42:13.895659  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0507 22:42:13.949046  666230 logs.go:123] Gathering logs for dmesg ...
I0507 22:42:13.949069  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0507 22:42:13.970628  666230 logs.go:123] Gathering logs for etcd [28685b137bd0ac8b838f8bb968a2916ea470d86585b6758f1957a1bf1a4a6460] ...
I0507 22:42:13.970651  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28685b137bd0ac8b838f8bb968a2916ea470d86585b6758f1957a1bf1a4a6460"
I0507 22:42:16.499411  666230 api_server.go:223] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0507 22:42:16.504744  666230 api_server.go:249] https://192.168.76.2:8443/healthz returned 200: ok
I0507 22:42:16.505727  666230 api_server.go:139] control plane version: v1.20.2
I0507 22:42:16.505755  666230 api_server.go:129] duration metric: took 3.116741389s to wait for apiserver health ...
I0507 22:42:16.505765  666230 system_pods.go:43] waiting for kube-system pods to appear ...
I0507 22:42:16.505792  666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0507 22:42:16.505848  666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0507 22:42:16.529238  666230 cri.go:76] found id: "b45772b9a6fadede8c01a34c8caf8013fd6c303c5be1a66647b115bf220cff8f"
I0507 22:42:16.529260  666230 cri.go:76] found id: ""
I0507 22:42:16.529267  666230 logs.go:270] 1 containers: [b45772b9a6fadede8c01a34c8caf8013fd6c303c5be1a66647b115bf220cff8f]
I0507 22:42:16.529316  666230 ssh_runner.go:149] Run: which crictl
I0507 22:42:16.532427  666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0507 22:42:16.532482  666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
I0507 22:42:16.553627  666230 cri.go:76] found id: "28685b137bd0ac8b838f8bb968a2916ea470d86585b6758f1957a1bf1a4a6460"
I0507 22:42:16.553647  666230 cri.go:76] found id: ""
I0507 22:42:16.553653  666230 logs.go:270] 1 containers: [28685b137bd0ac8b838f8bb968a2916ea470d86585b6758f1957a1bf1a4a6460]
I0507 22:42:16.553704  666230 ssh_runner.go:149] Run: which crictl
I0507 22:42:16.556501  666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0507 22:42:16.556558  666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
I0507 22:42:16.577745  666230 cri.go:76] found id: "e6ca7efce382fb7e29ea2ca646e99d72bcf5f597be9c8548c36cacf707017618"
I0507 22:42:16.577767  666230 cri.go:76] found id: ""
I0507 22:42:16.577774  666230 logs.go:270] 1 containers: [e6ca7efce382fb7e29ea2ca646e99d72bcf5f597be9c8548c36cacf707017618]
I0507 22:42:16.577811  666230 ssh_runner.go:149] Run: which crictl
I0507 22:42:16.580607  666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0507 22:42:16.580664  666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0507 22:42:16.601257  666230 cri.go:76] found id: "2afafd7cebc98f4052923c5de24cbb19b630d6720460c8528aa2b62dcbf90d39"
I0507 22:42:16.601276  666230 cri.go:76] found id: ""
I0507 22:42:16.601283  666230 logs.go:270] 1 containers: [2afafd7cebc98f4052923c5de24cbb19b630d6720460c8528aa2b62dcbf90d39]
I0507 22:42:16.601322  666230 ssh_runner.go:149] Run: which crictl
I0507 22:42:16.604118  666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0507 22:42:16.604179  666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0507 22:42:16.625270  666230 cri.go:76] found id: "aafd3d52e8a99a65b993f44a0f2e0bee169d568a5b1e64c7788d6f3cca79a9f3"
I0507 22:42:16.625287  666230 cri.go:76] found id: ""
I0507 22:42:16.625295  666230 logs.go:270] 1 containers: [aafd3d52e8a99a65b993f44a0f2e0bee169d568a5b1e64c7788d6f3cca79a9f3]
I0507 22:42:16.625335  666230 ssh_runner.go:149] Run: which crictl
I0507 22:42:16.628041  666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0507 22:42:16.628106  666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0507 22:42:16.649884  666230 cri.go:76] found id: ""
I0507 22:42:16.649905  666230 logs.go:270] 0 containers: []
W0507 22:42:16.649913  666230 logs.go:272] No container was found matching "kubernetes-dashboard"
I0507 22:42:16.649930  666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0507 22:42:16.649977  666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0507 22:42:16.674957  666230 cri.go:76] found id: "840a1fdad075b9db0cd8c37a67300746e67108f3c525713ad6b60997af3577e9"
I0507 22:42:16.674976  666230 cri.go:76] found id: ""
I0507 22:42:16.674983  666230 logs.go:270] 1 containers: [840a1fdad075b9db0cd8c37a67300746e67108f3c525713ad6b60997af3577e9]
I0507 22:42:16.675021  666230 ssh_runner.go:149] Run: which crictl
I0507 22:42:16.678054  666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0507 22:42:16.678109  666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0507 22:42:16.699657  666230 cri.go:76] found id: "953367e4f9711500cf9c55a2e3a88297fe1ea571b90eb9f6878ea3dcf8e47be5"
I0507 22:42:16.699673  666230 cri.go:76] found id: ""
I0507 22:42:16.699679  666230 logs.go:270] 1 containers: [953367e4f9711500cf9c55a2e3a88297fe1ea571b90eb9f6878ea3dcf8e47be5]
I0507 22:42:16.699723  666230 ssh_runner.go:149] Run: which crictl
I0507 22:42:16.702335  666230 logs.go:123] Gathering logs for etcd [28685b137bd0ac8b838f8bb968a2916ea470d86585b6758f1957a1bf1a4a6460] ...
I0507 22:42:16.702360  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28685b137bd0ac8b838f8bb968a2916ea470d86585b6758f1957a1bf1a4a6460"
I0507 22:42:16.730493  666230 logs.go:123] Gathering logs for kube-proxy [aafd3d52e8a99a65b993f44a0f2e0bee169d568a5b1e64c7788d6f3cca79a9f3] ...
I0507 22:42:16.730528  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aafd3d52e8a99a65b993f44a0f2e0bee169d568a5b1e64c7788d6f3cca79a9f3"
I0507 22:42:16.758194  666230 logs.go:123] Gathering logs for kube-controller-manager [953367e4f9711500cf9c55a2e3a88297fe1ea571b90eb9f6878ea3dcf8e47be5] ...
I0507 22:42:16.758218  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 953367e4f9711500cf9c55a2e3a88297fe1ea571b90eb9f6878ea3dcf8e47be5"
I0507 22:42:16.804178  666230 logs.go:123] Gathering logs for container status ...
I0507 22:42:16.804206  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0507 22:42:16.829372  666230 logs.go:123] Gathering logs for dmesg ...
I0507 22:42:16.829402  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0507 22:42:16.851983  666230 logs.go:123] Gathering logs for describe nodes ...
I0507 22:42:16.852012  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0507 22:42:16.951387  666230 logs.go:123] Gathering logs for kube-apiserver [b45772b9a6fadede8c01a34c8caf8013fd6c303c5be1a66647b115bf220cff8f] ...
I0507 22:42:16.951415  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b45772b9a6fadede8c01a34c8caf8013fd6c303c5be1a66647b115bf220cff8f"
I0507 22:42:16.990994  666230 logs.go:123] Gathering logs for coredns [e6ca7efce382fb7e29ea2ca646e99d72bcf5f597be9c8548c36cacf707017618] ...
I0507 22:42:16.991027  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6ca7efce382fb7e29ea2ca646e99d72bcf5f597be9c8548c36cacf707017618"
I0507 22:42:17.013386  666230 logs.go:123] Gathering logs for kube-scheduler [2afafd7cebc98f4052923c5de24cbb19b630d6720460c8528aa2b62dcbf90d39] ...
I0507 22:42:17.013418  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2afafd7cebc98f4052923c5de24cbb19b630d6720460c8528aa2b62dcbf90d39"
I0507 22:42:17.038584  666230 logs.go:123] Gathering logs for storage-provisioner [840a1fdad075b9db0cd8c37a67300746e67108f3c525713ad6b60997af3577e9] ...
I0507 22:42:17.038612  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 840a1fdad075b9db0cd8c37a67300746e67108f3c525713ad6b60997af3577e9"
I0507 22:42:17.060650  666230 logs.go:123] Gathering logs for containerd ...
I0507 22:42:17.060673  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0507 22:42:17.093614  666230 logs.go:123] Gathering logs for kubelet ...
I0507 22:42:17.093640  666230 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0507 22:42:14.769932  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:17.269374  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:15.669185  668555 pod_ready.go:102] pod "coredns-74ff55c5b-wdngz" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:17.166937  668555 pod_ready.go:97] error getting pod "coredns-74ff55c5b-wdngz" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-wdngz" not found
I0507 22:42:17.166970  668555 pod_ready.go:81] duration metric: took 23.507854097s waiting for pod "coredns-74ff55c5b-wdngz" in "kube-system" namespace to be "Ready" ...
E0507 22:42:17.166982  668555 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-74ff55c5b-wdngz" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-wdngz" not found
I0507 22:42:17.166991  668555 pod_ready.go:78] waiting up to 5m0s for pod "etcd-bridge-20210507224024-391940" in "kube-system" namespace to be "Ready" ...
I0507 22:42:19.658416  666230 system_pods.go:59] 8 kube-system pods found
I0507 22:42:19.658462  666230 system_pods.go:61] "coredns-74ff55c5b-z2xcz" [32f270e2-76b8-461c-b5ad-4a27412fdfc0] Running
I0507 22:42:19.658468  666230 system_pods.go:61] "etcd-kindnet-20210507224017-391940" [8aa1fb09-6fc9-49e5-bde4-381bd5c8b572] Running
I0507 22:42:19.658473  666230 system_pods.go:61] "kindnet-q67jp" [fa4108a6-8fc0-4ba5-ba81-ea32d753a85a] Running
I0507 22:42:19.658478  666230 system_pods.go:61] "kube-apiserver-kindnet-20210507224017-391940" [40d5124a-6495-4040-9c07-a81af5d89ccb] Running
I0507 22:42:19.658493  666230 system_pods.go:61] "kube-controller-manager-kindnet-20210507224017-391940" [9525d6cc-6900-471f-bd5b-7d5bc17f7ddc] Running
I0507 22:42:19.658501  666230 system_pods.go:61] "kube-proxy-gdfcx" [8a5c1984-a141-4ab0-ae51-fd74fda2c5db] Running
I0507 22:42:19.658506  666230 system_pods.go:61] "kube-scheduler-kindnet-20210507224017-391940" [daffa333-07f9-4c17-9430-fb63e656f748] Running
I0507 22:42:19.658512  666230 system_pods.go:61] "storage-provisioner" [efd5252a-5fd4-481b-9795-a34a2030d342] Running
I0507 22:42:19.658517  666230 system_pods.go:74] duration metric: took 3.152746713s to wait for pod list to return data ...
I0507 22:42:19.658527  666230 default_sa.go:34] waiting for default service account to be created ...
I0507 22:42:19.660861  666230 default_sa.go:45] found service account: "default"
I0507 22:42:19.660881  666230 default_sa.go:55] duration metric: took 2.3459ms for default service account to be created ...
I0507 22:42:19.660890  666230 system_pods.go:116] waiting for k8s-apps to be running ...
I0507 22:42:19.665119  666230 system_pods.go:86] 8 kube-system pods found
I0507 22:42:19.665143  666230 system_pods.go:89] "coredns-74ff55c5b-z2xcz" [32f270e2-76b8-461c-b5ad-4a27412fdfc0] Running
I0507 22:42:19.665149  666230 system_pods.go:89] "etcd-kindnet-20210507224017-391940" [8aa1fb09-6fc9-49e5-bde4-381bd5c8b572] Running
I0507 22:42:19.665155  666230 system_pods.go:89] "kindnet-q67jp" [fa4108a6-8fc0-4ba5-ba81-ea32d753a85a] Running
I0507 22:42:19.665162  666230 system_pods.go:89] "kube-apiserver-kindnet-20210507224017-391940" [40d5124a-6495-4040-9c07-a81af5d89ccb] Running
I0507 22:42:19.665169  666230 system_pods.go:89] "kube-controller-manager-kindnet-20210507224017-391940" [9525d6cc-6900-471f-bd5b-7d5bc17f7ddc] Running
I0507 22:42:19.665174  666230 system_pods.go:89] "kube-proxy-gdfcx" [8a5c1984-a141-4ab0-ae51-fd74fda2c5db] Running
I0507 22:42:19.665181  666230 system_pods.go:89] "kube-scheduler-kindnet-20210507224017-391940" [daffa333-07f9-4c17-9430-fb63e656f748] Running
I0507 22:42:19.665191  666230 system_pods.go:89] "storage-provisioner" [efd5252a-5fd4-481b-9795-a34a2030d342] Running
I0507 22:42:19.665197  666230 system_pods.go:126] duration metric: took 4.302486ms to wait for k8s-apps to be running ...
I0507 22:42:19.665207  666230 system_svc.go:44] waiting for kubelet service to be running ....
I0507 22:42:19.665246  666230 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0507 22:42:19.675849  666230 system_svc.go:56] duration metric: took 10.634792ms WaitForService to wait for kubelet.
I0507 22:42:19.675872  666230 kubeadm.go:538] duration metric: took 1m13.506802153s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0507 22:42:19.675900  666230 node_conditions.go:102] verifying NodePressure condition ...
I0507 22:42:19.679056  666230 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
I0507 22:42:19.679087  666230 node_conditions.go:123] node cpu capacity is 8
I0507 22:42:19.679106  666230 node_conditions.go:105] duration metric: took 3.19959ms to run NodePressure ...
I0507 22:42:19.679119  666230 start.go:206] waiting for startup goroutines ...
I0507 22:42:19.723166  666230 start.go:460] kubectl: 1.20.5, cluster: 1.20.2 (minor skew: 0)
I0507 22:42:19.725682  666230 out.go:170] * Done! kubectl is now configured to use "kindnet-20210507224017-391940" cluster and "default" namespace by default
I0507 22:42:19.770111  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:22.269528  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:19.176854  668555 pod_ready.go:102] pod "etcd-bridge-20210507224024-391940" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:21.177400  668555 pod_ready.go:102] pod "etcd-bridge-20210507224024-391940" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:23.676907  668555 pod_ready.go:102] pod "etcd-bridge-20210507224024-391940" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:23.871426  634245 system_pods.go:86] 7 kube-system pods found
I0507 22:42:23.871460  634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:42:23.871466  634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
I0507 22:42:23.871472  634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
I0507 22:42:23.871476  634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
I0507 22:42:23.871481  634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
I0507 22:42:23.871485  634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
I0507 22:42:23.871489  634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
I0507 22:42:23.871513  634245 retry.go:31] will retry after 47.568379235s: missing components: kube-dns
I0507 22:42:24.769876  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:27.269737  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:25.677317  668555 pod_ready.go:102] pod "etcd-bridge-20210507224024-391940" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:28.176609  668555 pod_ready.go:92] pod "etcd-bridge-20210507224024-391940" in "kube-system" namespace has status "Ready":"True"
I0507 22:42:28.176637  668555 pod_ready.go:81] duration metric: took 11.009627545s waiting for pod "etcd-bridge-20210507224024-391940" in "kube-system" namespace to be "Ready" ...
I0507 22:42:28.176650  668555 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-bridge-20210507224024-391940" in "kube-system" namespace to be "Ready" ...
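The healthz probe a few records back hits the apiserver endpoint directly before declaring the kindnet cluster done; a hand-run equivalent against the address from this run (or, going through the configured credentials, via kubectl) would be roughly:

  curl -k https://192.168.76.2:8443/healthz
  kubectl --context kindnet-20210507224017-391940 get --raw /healthz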
I0507 22:42:28.180369  668555 pod_ready.go:92] pod "kube-apiserver-bridge-20210507224024-391940" in "kube-system" namespace has status "Ready":"True"
I0507 22:42:28.180384  668555 pod_ready.go:81] duration metric: took 3.725861ms waiting for pod "kube-apiserver-bridge-20210507224024-391940" in "kube-system" namespace to be "Ready" ...
I0507 22:42:28.180393  668555 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-bridge-20210507224024-391940" in "kube-system" namespace to be "Ready" ...
I0507 22:42:29.772420  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:32.269539  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:30.189339  668555 pod_ready.go:102] pod "kube-controller-manager-bridge-20210507224024-391940" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:32.189604  668555 pod_ready.go:102] pod "kube-controller-manager-bridge-20210507224024-391940" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:34.269995  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:36.769591  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:34.189670  668555 pod_ready.go:102] pod "kube-controller-manager-bridge-20210507224024-391940" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:35.190239  668555 pod_ready.go:92] pod "kube-controller-manager-bridge-20210507224024-391940" in "kube-system" namespace has status "Ready":"True"
I0507 22:42:35.190267  668555 pod_ready.go:81] duration metric: took 7.009866995s waiting for pod "kube-controller-manager-bridge-20210507224024-391940" in "kube-system" namespace to be "Ready" ...
I0507 22:42:35.190278  668555 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-ws99c" in "kube-system" namespace to be "Ready" ...
I0507 22:42:35.194565  668555 pod_ready.go:92] pod "kube-proxy-ws99c" in "kube-system" namespace has status "Ready":"True"
I0507 22:42:35.194581  668555 pod_ready.go:81] duration metric: took 4.296698ms waiting for pod "kube-proxy-ws99c" in "kube-system" namespace to be "Ready" ...
I0507 22:42:35.194590  668555 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-bridge-20210507224024-391940" in "kube-system" namespace to be "Ready" ...
I0507 22:42:35.198344  668555 pod_ready.go:92] pod "kube-scheduler-bridge-20210507224024-391940" in "kube-system" namespace has status "Ready":"True"
I0507 22:42:35.198364  668555 pod_ready.go:81] duration metric: took 3.766536ms waiting for pod "kube-scheduler-bridge-20210507224024-391940" in "kube-system" namespace to be "Ready" ...
I0507 22:42:35.198378  668555 pod_ready.go:38] duration metric: took 1m17.589933355s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0507 22:42:35.198402  668555 api_server.go:50] waiting for apiserver process to appear ...
I0507 22:42:35.198426  668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0507 22:42:35.198477  668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0507 22:42:35.226005  668555 cri.go:76] found id: "6dc0b22be38d452d256a52eaf64a7bb1f2aa8c15966348c52e33bbb791a678ef"
I0507 22:42:35.226042  668555 cri.go:76] found id: ""
I0507 22:42:35.226050  668555 logs.go:270] 1 containers: [6dc0b22be38d452d256a52eaf64a7bb1f2aa8c15966348c52e33bbb791a678ef]
I0507 22:42:35.226104  668555 ssh_runner.go:149] Run: which crictl
I0507 22:42:35.229189  668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0507 22:42:35.229258  668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
I0507 22:42:35.254495  668555 cri.go:76] found id: "e79370f8582f567393596235a3a9a3791af6f0485f2c319761dc453c92370bf7"
I0507 22:42:35.254531  668555 cri.go:76] found id: ""
I0507 22:42:35.254540  668555 logs.go:270] 1 containers: [e79370f8582f567393596235a3a9a3791af6f0485f2c319761dc453c92370bf7]
I0507 22:42:35.254607  668555 ssh_runner.go:149] Run: which crictl
I0507 22:42:35.257794  668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0507 22:42:35.257871  668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
I0507 22:42:35.281886  668555 cri.go:76] found id: "2ef6e0e8ca031e7958d9949d1e04e1b34e1cc5ba9176088c82ae0b31fd5aeead"
I0507 22:42:35.281909  668555 cri.go:76] found id: ""
I0507 22:42:35.281916  668555 logs.go:270] 1 containers: [2ef6e0e8ca031e7958d9949d1e04e1b34e1cc5ba9176088c82ae0b31fd5aeead]
I0507 22:42:35.281955  668555 ssh_runner.go:149] Run: which crictl
I0507 22:42:35.284915  668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0507 22:42:35.284966  668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0507 22:42:35.313845  668555 cri.go:76] found id: "41e1f340170fd28d18e59bc04eaddd5ca3510f0ecfe696e0d75e6a06cd0ad39a"
I0507 22:42:35.313930  668555 cri.go:76] found id: ""
I0507 22:42:35.313937  668555 logs.go:270] 1 containers: [41e1f340170fd28d18e59bc04eaddd5ca3510f0ecfe696e0d75e6a06cd0ad39a]
I0507 22:42:35.313999  668555 ssh_runner.go:149] Run: which crictl
I0507 22:42:35.318156  668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0507 22:42:35.318222  668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0507 22:42:35.342057  668555 cri.go:76] found id: "bfda804d42b627d6e9da3cc58516747c9eefc23e219ebcf3a374cda58a012163"
I0507 22:42:35.342091  668555 cri.go:76] found id: ""
I0507 22:42:35.342099  668555 logs.go:270] 1 containers: [bfda804d42b627d6e9da3cc58516747c9eefc23e219ebcf3a374cda58a012163]
I0507 22:42:35.342146  668555 ssh_runner.go:149] Run: which crictl
I0507 22:42:35.345153  668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0507 22:42:35.345219  668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0507 22:42:35.368430  668555 cri.go:76] found id: ""
I0507 22:42:35.368448  668555 logs.go:270] 0 containers: []
W0507 22:42:35.368454  668555 logs.go:272] No container was found matching "kubernetes-dashboard"
I0507 22:42:35.368460  668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0507 22:42:35.368508  668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0507 22:42:35.389241  668555 cri.go:76] found id: "ccb30cde8c456efd2e7cae759b30baa92719b34f67532486ef8a172631f2c2ac"
I0507 22:42:35.389258  668555 cri.go:76] found id: ""
I0507 22:42:35.389266  668555 logs.go:270] 1 containers: [ccb30cde8c456efd2e7cae759b30baa92719b34f67532486ef8a172631f2c2ac]
I0507 22:42:35.389314  668555 ssh_runner.go:149] Run: which crictl
I0507 22:42:35.392216  668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0507 22:42:35.392265  668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0507 22:42:35.413738  668555 cri.go:76] found id: "61a86ee3b485d1806f72b893d255c832f4df1fb362c8e4366b69b05b1f597dbf"
I0507 22:42:35.413759  668555 cri.go:76] found id: ""
I0507 22:42:35.413765  668555 logs.go:270] 1 containers: [61a86ee3b485d1806f72b893d255c832f4df1fb362c8e4366b69b05b1f597dbf]
I0507 22:42:35.413808  668555 ssh_runner.go:149] Run: which crictl
I0507 22:42:35.416697  668555 logs.go:123] Gathering logs for kube-apiserver [6dc0b22be38d452d256a52eaf64a7bb1f2aa8c15966348c52e33bbb791a678ef] ...
I0507 22:42:35.416715  668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dc0b22be38d452d256a52eaf64a7bb1f2aa8c15966348c52e33bbb791a678ef"
I0507 22:42:35.450753  668555 logs.go:123] Gathering logs for etcd [e79370f8582f567393596235a3a9a3791af6f0485f2c319761dc453c92370bf7] ...
I0507 22:42:35.450782  668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e79370f8582f567393596235a3a9a3791af6f0485f2c319761dc453c92370bf7"
I0507 22:42:35.476276  668555 logs.go:123] Gathering logs for kube-proxy [bfda804d42b627d6e9da3cc58516747c9eefc23e219ebcf3a374cda58a012163] ...
I0507 22:42:35.476299  668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfda804d42b627d6e9da3cc58516747c9eefc23e219ebcf3a374cda58a012163"
I0507 22:42:35.498825  668555 logs.go:123] Gathering logs for storage-provisioner [ccb30cde8c456efd2e7cae759b30baa92719b34f67532486ef8a172631f2c2ac] ...
I0507 22:42:35.498853  668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccb30cde8c456efd2e7cae759b30baa92719b34f67532486ef8a172631f2c2ac"
I0507 22:42:35.521010  668555 logs.go:123] Gathering logs for kube-controller-manager [61a86ee3b485d1806f72b893d255c832f4df1fb362c8e4366b69b05b1f597dbf] ...
I0507 22:42:35.521032  668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61a86ee3b485d1806f72b893d255c832f4df1fb362c8e4366b69b05b1f597dbf"
I0507 22:42:35.552384  668555 logs.go:123] Gathering logs for containerd ...
I0507 22:42:35.552410  668555 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0507 22:42:35.585101  668555 logs.go:123] Gathering logs for container status ...
I0507 22:42:35.585126  668555 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0507 22:42:35.608966  668555 logs.go:123] Gathering logs for dmesg ...
I0507 22:42:35.608989  668555 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0507 22:42:35.629842  668555 logs.go:123] Gathering logs for describe nodes ...
I0507 22:42:35.629862  668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0507 22:42:35.716415  668555 logs.go:123] Gathering logs for coredns [2ef6e0e8ca031e7958d9949d1e04e1b34e1cc5ba9176088c82ae0b31fd5aeead] ...
I0507 22:42:35.716445  668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ef6e0e8ca031e7958d9949d1e04e1b34e1cc5ba9176088c82ae0b31fd5aeead"
I0507 22:42:35.741527  668555 logs.go:123] Gathering logs for kube-scheduler [41e1f340170fd28d18e59bc04eaddd5ca3510f0ecfe696e0d75e6a06cd0ad39a] ...
I0507 22:42:35.741555  668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41e1f340170fd28d18e59bc04eaddd5ca3510f0ecfe696e0d75e6a06cd0ad39a"
I0507 22:42:35.770205  668555 logs.go:123] Gathering logs for kubelet ...
I0507 22:42:35.770236  668555 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0507 22:42:35.822573  668555 logs.go:138] Found kubelet problem: May 07 22:41:17 bridge-20210507224024-391940 kubelet[1180]: E0507 22:41:17.449069    1180 pod_workers.go:191] Error syncing pod fdcac5df-b94e-462a-87d5-98af36032464 ("coredns-74ff55c5b-wdngz_kube-system(fdcac5df-b94e-462a-87d5-98af36032464)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "cannot find volume \"config-volume\" to mount into container \"coredns\""
I0507 22:42:35.823008  668555 out.go:304] Setting ErrFile to fd 2...
I0507 22:42:35.823023  668555 out.go:338] TERM=,COLORTERM=, which probably does not support color
W0507 22:42:35.823131  668555 out.go:235] X Problems detected in kubelet:
W0507 22:42:35.823143  668555 out.go:424] no arguments passed for "  May 07 22:41:17 bridge-20210507224024-391940 kubelet[1180]: E0507 22:41:17.449069    1180 pod_workers.go:191] Error syncing pod fdcac5df-b94e-462a-87d5-98af36032464 (\"coredns-74ff55c5b-wdngz_kube-system(fdcac5df-b94e-462a-87d5-98af36032464)\"), skipping: failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"cannot find volume \\\"config-volume\\\" to mount into container \\\"coredns\\\"\"\n" - returning raw string
W0507 22:42:35.823158  668555 out.go:235]   May 07 22:41:17 bridge-20210507224024-391940 kubelet[1180]: E0507 22:41:17.449069    1180 pod_workers.go:191] Error syncing pod fdcac5df-b94e-462a-87d5-98af36032464 ("coredns-74ff55c5b-wdngz_kube-system(fdcac5df-b94e-462a-87d5-98af36032464)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "cannot find volume \"config-volume\" to mount into container \"coredns\""
I0507 22:42:35.823166  668555 out.go:304] Setting ErrFile to fd 2...
I0507 22:42:35.823171  668555 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 22:42:38.770070  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:40.771262  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:43.269652  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:45.769197  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:42:45.824690  668555 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0507 22:42:45.846628  668555 api_server.go:70] duration metric: took 1m28.269918689s to wait for apiserver process to appear ...
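The kubelet problem flagged above (CreateContainerConfigError: cannot find volume "config-volume") typically points at the coredns ConfigMap not being visible to the pod yet; a way to inspect such a failure by hand, using the pod name from this run, might be:

  kubectl -n kube-system describe pod coredns-74ff55c5b-wdngz
  kubectl -n kube-system get configmap coredns -o yaml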
I0507 22:42:45.846659  668555 api_server.go:86] waiting for apiserver healthz status ...
I0507 22:42:45.846689  668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0507 22:42:45.846772  668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0507 22:42:45.869042  668555 cri.go:76] found id: "6dc0b22be38d452d256a52eaf64a7bb1f2aa8c15966348c52e33bbb791a678ef"
I0507 22:42:45.869067  668555 cri.go:76] found id: ""
I0507 22:42:45.869075  668555 logs.go:270] 1 containers: [6dc0b22be38d452d256a52eaf64a7bb1f2aa8c15966348c52e33bbb791a678ef]
I0507 22:42:45.869120  668555 ssh_runner.go:149] Run: which crictl
I0507 22:42:45.872040  668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0507 22:42:45.872102  668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
I0507 22:42:45.893298  668555 cri.go:76] found id: "e79370f8582f567393596235a3a9a3791af6f0485f2c319761dc453c92370bf7"
I0507 22:42:45.893316  668555 cri.go:76] found id: ""
I0507 22:42:45.893322  668555 logs.go:270] 1 containers: [e79370f8582f567393596235a3a9a3791af6f0485f2c319761dc453c92370bf7]
I0507 22:42:45.893356  668555 ssh_runner.go:149] Run: which crictl
I0507 22:42:45.896044  668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0507 22:42:45.896103  668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
I0507 22:42:45.916750  668555 cri.go:76] found id: "2ef6e0e8ca031e7958d9949d1e04e1b34e1cc5ba9176088c82ae0b31fd5aeead"
I0507 22:42:45.916770  668555 cri.go:76] found id: ""
I0507 22:42:45.916775  668555 logs.go:270] 1 containers: [2ef6e0e8ca031e7958d9949d1e04e1b34e1cc5ba9176088c82ae0b31fd5aeead]
I0507 22:42:45.916812  668555 ssh_runner.go:149] Run: which crictl
I0507 22:42:45.919401  668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0507 22:42:45.919452  668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0507 22:42:45.939392  668555 cri.go:76] found id: "41e1f340170fd28d18e59bc04eaddd5ca3510f0ecfe696e0d75e6a06cd0ad39a"
I0507 22:42:45.939409  668555 cri.go:76] found id: ""
I0507 22:42:45.939415  668555 logs.go:270] 1 containers: [41e1f340170fd28d18e59bc04eaddd5ca3510f0ecfe696e0d75e6a06cd0ad39a]
I0507 22:42:45.939454  668555 ssh_runner.go:149] Run: which crictl
I0507 22:42:45.942192  668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0507 22:42:45.942246  668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0507 22:42:45.962229  668555 cri.go:76] found id: "bfda804d42b627d6e9da3cc58516747c9eefc23e219ebcf3a374cda58a012163"
I0507 22:42:45.962248  668555 cri.go:76] found id: ""
I0507 22:42:45.962254  668555 logs.go:270] 1 containers: [bfda804d42b627d6e9da3cc58516747c9eefc23e219ebcf3a374cda58a012163]
I0507 22:42:45.962284  668555 ssh_runner.go:149] Run: which crictl
I0507 22:42:45.964904  668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0507 22:42:45.964949  668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0507 22:42:45.987488  668555 cri.go:76] found id: ""
I0507 22:42:45.987539  668555 logs.go:270] 0 containers: []
W0507 22:42:45.987547  668555 logs.go:272] No container was found matching "kubernetes-dashboard"
I0507 22:42:45.987555  668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0507 22:42:45.987600  668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0507 22:42:46.007636  668555 cri.go:76] found id: "ccb30cde8c456efd2e7cae759b30baa92719b34f67532486ef8a172631f2c2ac"
I0507 22:42:46.007652  668555 cri.go:76] found id: ""
I0507 22:42:46.007658  668555 logs.go:270] 1 containers: [ccb30cde8c456efd2e7cae759b30baa92719b34f67532486ef8a172631f2c2ac]
I0507 22:42:46.007691  668555 ssh_runner.go:149] Run: which crictl
I0507 22:42:46.010278  668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0507 22:42:46.010322  668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0507 22:42:46.031247  668555 cri.go:76] found id: "61a86ee3b485d1806f72b893d255c832f4df1fb362c8e4366b69b05b1f597dbf"
I0507 22:42:46.031268  668555 cri.go:76] found id: ""
I0507 22:42:46.031274  668555 logs.go:270] 1 containers: [61a86ee3b485d1806f72b893d255c832f4df1fb362c8e4366b69b05b1f597dbf]
I0507 22:42:46.031346  668555 ssh_runner.go:149] Run: which crictl
I0507 22:42:46.034072  668555 logs.go:123] Gathering logs for coredns [2ef6e0e8ca031e7958d9949d1e04e1b34e1cc5ba9176088c82ae0b31fd5aeead] ...
I0507 22:42:46.034107  668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ef6e0e8ca031e7958d9949d1e04e1b34e1cc5ba9176088c82ae0b31fd5aeead"
I0507 22:42:46.055825  668555 logs.go:123] Gathering logs for kube-proxy [bfda804d42b627d6e9da3cc58516747c9eefc23e219ebcf3a374cda58a012163] ...
I0507 22:42:46.055847  668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfda804d42b627d6e9da3cc58516747c9eefc23e219ebcf3a374cda58a012163"
I0507 22:42:46.077653  668555 logs.go:123] Gathering logs for storage-provisioner [ccb30cde8c456efd2e7cae759b30baa92719b34f67532486ef8a172631f2c2ac] ...
I0507 22:42:46.077677  668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccb30cde8c456efd2e7cae759b30baa92719b34f67532486ef8a172631f2c2ac"
I0507 22:42:46.099254  668555 logs.go:123] Gathering logs for kube-controller-manager [61a86ee3b485d1806f72b893d255c832f4df1fb362c8e4366b69b05b1f597dbf] ...
I0507 22:42:46.099276  668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61a86ee3b485d1806f72b893d255c832f4df1fb362c8e4366b69b05b1f597dbf"
I0507 22:42:46.131389  668555 logs.go:123] Gathering logs for container status ...
I0507 22:42:46.131414  668555 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0507 22:42:46.155297  668555 logs.go:123] Gathering logs for kubelet ...
I0507 22:42:46.155319  668555 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0507 22:42:46.210050  668555 logs.go:138] Found kubelet problem: May 07 22:41:17 bridge-20210507224024-391940 kubelet[1180]: E0507 22:41:17.449069    1180 pod_workers.go:191] Error syncing pod fdcac5df-b94e-462a-87d5-98af36032464 ("coredns-74ff55c5b-wdngz_kube-system(fdcac5df-b94e-462a-87d5-98af36032464)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "cannot find volume \"config-volume\" to mount into container \"coredns\""
I0507 22:42:46.210693  668555 logs.go:123] Gathering logs for describe nodes ...
I0507 22:42:46.210710 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0507 22:42:46.297248 668555 logs.go:123] Gathering logs for kube-apiserver [6dc0b22be38d452d256a52eaf64a7bb1f2aa8c15966348c52e33bbb791a678ef] ... I0507 22:42:46.297281 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dc0b22be38d452d256a52eaf64a7bb1f2aa8c15966348c52e33bbb791a678ef" I0507 22:42:46.333028 668555 logs.go:123] Gathering logs for containerd ... I0507 22:42:46.333055 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400" I0507 22:42:46.364655 668555 logs.go:123] Gathering logs for dmesg ... I0507 22:42:46.364684 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0507 22:42:46.385640 668555 logs.go:123] Gathering logs for etcd [e79370f8582f567393596235a3a9a3791af6f0485f2c319761dc453c92370bf7] ... I0507 22:42:46.385664 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e79370f8582f567393596235a3a9a3791af6f0485f2c319761dc453c92370bf7" I0507 22:42:46.411628 668555 logs.go:123] Gathering logs for kube-scheduler [41e1f340170fd28d18e59bc04eaddd5ca3510f0ecfe696e0d75e6a06cd0ad39a] ... I0507 22:42:46.411652 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41e1f340170fd28d18e59bc04eaddd5ca3510f0ecfe696e0d75e6a06cd0ad39a" I0507 22:42:46.438625 668555 out.go:304] Setting ErrFile to fd 2... I0507 22:42:46.438647 668555 out.go:338] TERM=,COLORTERM=, which probably does not support color W0507 22:42:46.438765 668555 out.go:235] X Problems detected in kubelet: W0507 22:42:46.438780 668555 out.go:424] no arguments passed for " May 07 22:41:17 bridge-20210507224024-391940 kubelet[1180]: E0507 22:41:17.449069 1180 pod_workers.go:191] Error syncing pod fdcac5df-b94e-462a-87d5-98af36032464 (\"coredns-74ff55c5b-wdngz_kube-system(fdcac5df-b94e-462a-87d5-98af36032464)\"), skipping: failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"cannot find volume \\\"config-volume\\\" to mount into container \\\"coredns\\\"\"\n" - returning raw string W0507 22:42:46.438798 668555 out.go:235] May 07 22:41:17 bridge-20210507224024-391940 kubelet[1180]: E0507 22:41:17.449069 1180 pod_workers.go:191] Error syncing pod fdcac5df-b94e-462a-87d5-98af36032464 ("coredns-74ff55c5b-wdngz_kube-system(fdcac5df-b94e-462a-87d5-98af36032464)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "cannot find volume \"config-volume\" to mount into container \"coredns\"" I0507 22:42:46.438810 668555 out.go:304] Setting ErrFile to fd 2... 
I0507 22:42:46.438819 668555 out.go:338] TERM=,COLORTERM=, which probably does not support color I0507 22:42:48.270916 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:50.769708 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:53.270043 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:55.769030 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:57.769122 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:56.440049 668555 api_server.go:223] Checking apiserver healthz at https://192.168.94.2:8443/healthz ... I0507 22:42:56.445620 668555 api_server.go:249] https://192.168.94.2:8443/healthz returned 200: ok I0507 22:42:56.446505 668555 api_server.go:139] control plane version: v1.20.2 I0507 22:42:56.446528 668555 api_server.go:129] duration metric: took 10.599861577s to wait for apiserver health ... I0507 22:42:56.446537 668555 system_pods.go:43] waiting for kube-system pods to appear ... I0507 22:42:56.446560 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]} I0507 22:42:56.446607 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver I0507 22:42:56.470123 668555 cri.go:76] found id: "6dc0b22be38d452d256a52eaf64a7bb1f2aa8c15966348c52e33bbb791a678ef" I0507 22:42:56.470146 668555 cri.go:76] found id: "" I0507 22:42:56.470154 668555 logs.go:270] 1 containers: [6dc0b22be38d452d256a52eaf64a7bb1f2aa8c15966348c52e33bbb791a678ef] I0507 22:42:56.470204 668555 ssh_runner.go:149] Run: which crictl I0507 22:42:56.473177 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]} I0507 22:42:56.473233 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd I0507 22:42:56.494263 668555 cri.go:76] found id: "e79370f8582f567393596235a3a9a3791af6f0485f2c319761dc453c92370bf7" I0507 22:42:56.494283 668555 cri.go:76] found id: "" I0507 22:42:56.494289 668555 logs.go:270] 1 containers: [e79370f8582f567393596235a3a9a3791af6f0485f2c319761dc453c92370bf7] I0507 22:42:56.494326 668555 ssh_runner.go:149] Run: which crictl I0507 22:42:56.497102 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]} I0507 22:42:56.497152 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns I0507 22:42:56.519079 668555 cri.go:76] found id: "2ef6e0e8ca031e7958d9949d1e04e1b34e1cc5ba9176088c82ae0b31fd5aeead" I0507 22:42:56.519095 668555 cri.go:76] found id: "" I0507 22:42:56.519100 668555 logs.go:270] 1 containers: [2ef6e0e8ca031e7958d9949d1e04e1b34e1cc5ba9176088c82ae0b31fd5aeead] I0507 22:42:56.519133 668555 ssh_runner.go:149] Run: which crictl I0507 22:42:56.521800 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]} I0507 22:42:56.521860 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler I0507 22:42:56.542895 668555 cri.go:76] found id: "41e1f340170fd28d18e59bc04eaddd5ca3510f0ecfe696e0d75e6a06cd0ad39a" I0507 22:42:56.542918 668555 cri.go:76] found id: "" I0507 22:42:56.542925 668555 logs.go:270] 1 containers: 
[41e1f340170fd28d18e59bc04eaddd5ca3510f0ecfe696e0d75e6a06cd0ad39a] I0507 22:42:56.542967 668555 ssh_runner.go:149] Run: which crictl I0507 22:42:56.545669 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]} I0507 22:42:56.545725 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy I0507 22:42:56.566786 668555 cri.go:76] found id: "bfda804d42b627d6e9da3cc58516747c9eefc23e219ebcf3a374cda58a012163" I0507 22:42:56.566804 668555 cri.go:76] found id: "" I0507 22:42:56.566811 668555 logs.go:270] 1 containers: [bfda804d42b627d6e9da3cc58516747c9eefc23e219ebcf3a374cda58a012163] I0507 22:42:56.566852 668555 ssh_runner.go:149] Run: which crictl I0507 22:42:56.569557 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]} I0507 22:42:56.569605 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard I0507 22:42:56.590459 668555 cri.go:76] found id: "" I0507 22:42:56.590476 668555 logs.go:270] 0 containers: [] W0507 22:42:56.590481 668555 logs.go:272] No container was found matching "kubernetes-dashboard" I0507 22:42:56.590486 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]} I0507 22:42:56.590530 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner I0507 22:42:56.613112 668555 cri.go:76] found id: "ccb30cde8c456efd2e7cae759b30baa92719b34f67532486ef8a172631f2c2ac" I0507 22:42:56.613133 668555 cri.go:76] found id: "" I0507 22:42:56.613141 668555 logs.go:270] 1 containers: [ccb30cde8c456efd2e7cae759b30baa92719b34f67532486ef8a172631f2c2ac] I0507 22:42:56.613189 668555 ssh_runner.go:149] Run: which crictl I0507 22:42:56.615906 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]} I0507 22:42:56.615966 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager I0507 22:42:56.637316 668555 cri.go:76] found id: "61a86ee3b485d1806f72b893d255c832f4df1fb362c8e4366b69b05b1f597dbf" I0507 22:42:56.637364 668555 cri.go:76] found id: "" I0507 22:42:56.637379 668555 logs.go:270] 1 containers: [61a86ee3b485d1806f72b893d255c832f4df1fb362c8e4366b69b05b1f597dbf] I0507 22:42:56.637445 668555 ssh_runner.go:149] Run: which crictl I0507 22:42:56.640583 668555 logs.go:123] Gathering logs for kubelet ... I0507 22:42:56.640605 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" W0507 22:42:56.693338 668555 logs.go:138] Found kubelet problem: May 07 22:41:17 bridge-20210507224024-391940 kubelet[1180]: E0507 22:41:17.449069 1180 pod_workers.go:191] Error syncing pod fdcac5df-b94e-462a-87d5-98af36032464 ("coredns-74ff55c5b-wdngz_kube-system(fdcac5df-b94e-462a-87d5-98af36032464)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "cannot find volume \"config-volume\" to mount into container \"coredns\"" I0507 22:42:56.693786 668555 logs.go:123] Gathering logs for describe nodes ... I0507 22:42:56.693806 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0507 22:42:56.785105 668555 logs.go:123] Gathering logs for etcd [e79370f8582f567393596235a3a9a3791af6f0485f2c319761dc453c92370bf7] ... 
I0507 22:42:56.785137 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e79370f8582f567393596235a3a9a3791af6f0485f2c319761dc453c92370bf7" I0507 22:42:56.812625 668555 logs.go:123] Gathering logs for kube-controller-manager [61a86ee3b485d1806f72b893d255c832f4df1fb362c8e4366b69b05b1f597dbf] ... I0507 22:42:56.812654 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61a86ee3b485d1806f72b893d255c832f4df1fb362c8e4366b69b05b1f597dbf" I0507 22:42:56.846239 668555 logs.go:123] Gathering logs for container status ... I0507 22:42:56.846273 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0507 22:42:56.872696 668555 logs.go:123] Gathering logs for dmesg ... I0507 22:42:56.872729 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0507 22:42:56.896306 668555 logs.go:123] Gathering logs for kube-apiserver [6dc0b22be38d452d256a52eaf64a7bb1f2aa8c15966348c52e33bbb791a678ef] ... I0507 22:42:56.896334 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dc0b22be38d452d256a52eaf64a7bb1f2aa8c15966348c52e33bbb791a678ef" I0507 22:42:56.932309 668555 logs.go:123] Gathering logs for coredns [2ef6e0e8ca031e7958d9949d1e04e1b34e1cc5ba9176088c82ae0b31fd5aeead] ... I0507 22:42:56.932339 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ef6e0e8ca031e7958d9949d1e04e1b34e1cc5ba9176088c82ae0b31fd5aeead" I0507 22:42:56.954736 668555 logs.go:123] Gathering logs for kube-scheduler [41e1f340170fd28d18e59bc04eaddd5ca3510f0ecfe696e0d75e6a06cd0ad39a] ... I0507 22:42:56.954763 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41e1f340170fd28d18e59bc04eaddd5ca3510f0ecfe696e0d75e6a06cd0ad39a" I0507 22:42:56.980162 668555 logs.go:123] Gathering logs for kube-proxy [bfda804d42b627d6e9da3cc58516747c9eefc23e219ebcf3a374cda58a012163] ... I0507 22:42:56.980188 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfda804d42b627d6e9da3cc58516747c9eefc23e219ebcf3a374cda58a012163" I0507 22:42:57.001913 668555 logs.go:123] Gathering logs for storage-provisioner [ccb30cde8c456efd2e7cae759b30baa92719b34f67532486ef8a172631f2c2ac] ... I0507 22:42:57.001936 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccb30cde8c456efd2e7cae759b30baa92719b34f67532486ef8a172631f2c2ac" I0507 22:42:57.024379 668555 logs.go:123] Gathering logs for containerd ... I0507 22:42:57.024411 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400" I0507 22:42:57.054923 668555 out.go:304] Setting ErrFile to fd 2... 
I0507 22:42:57.054944 668555 out.go:338] TERM=,COLORTERM=, which probably does not support color W0507 22:42:57.055040 668555 out.go:235] X Problems detected in kubelet: W0507 22:42:57.055053 668555 out.go:424] no arguments passed for " May 07 22:41:17 bridge-20210507224024-391940 kubelet[1180]: E0507 22:41:17.449069 1180 pod_workers.go:191] Error syncing pod fdcac5df-b94e-462a-87d5-98af36032464 (\"coredns-74ff55c5b-wdngz_kube-system(fdcac5df-b94e-462a-87d5-98af36032464)\"), skipping: failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"cannot find volume \\\"config-volume\\\" to mount into container \\\"coredns\\\"\"\n" - returning raw string W0507 22:42:57.055069 668555 out.go:235] May 07 22:41:17 bridge-20210507224024-391940 kubelet[1180]: E0507 22:41:17.449069 1180 pod_workers.go:191] Error syncing pod fdcac5df-b94e-462a-87d5-98af36032464 ("coredns-74ff55c5b-wdngz_kube-system(fdcac5df-b94e-462a-87d5-98af36032464)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "cannot find volume \"config-volume\" to mount into container \"coredns\"" I0507 22:42:57.055074 668555 out.go:304] Setting ErrFile to fd 2... I0507 22:42:57.055078 668555 out.go:338] TERM=,COLORTERM=, which probably does not support color I0507 22:42:59.769527 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:02.269666 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:07.059818 668555 system_pods.go:59] 7 kube-system pods found I0507 22:43:07.059851 668555 system_pods.go:61] "coredns-74ff55c5b-kn5r7" [9ddaf16f-4215-42a6-9c1e-6e41c9849ed7] Running I0507 22:43:07.059857 668555 system_pods.go:61] "etcd-bridge-20210507224024-391940" [3c78015b-db5c-4fe5-99b1-0109a5427769] Running I0507 22:43:07.059861 668555 system_pods.go:61] "kube-apiserver-bridge-20210507224024-391940" [5ae80380-0e21-4a97-be4c-5525da123dc4] Running I0507 22:43:07.059865 668555 system_pods.go:61] "kube-controller-manager-bridge-20210507224024-391940" [b564a595-393e-4968-a05d-54f07b816bcc] Running I0507 22:43:07.059869 668555 system_pods.go:61] "kube-proxy-ws99c" [d3170feb-4f47-4975-9f18-54a7340c425c] Running I0507 22:43:07.059873 668555 system_pods.go:61] "kube-scheduler-bridge-20210507224024-391940" [5c9a05aa-1efb-4844-9dec-d9729b234f6e] Running I0507 22:43:07.059876 668555 system_pods.go:61] "storage-provisioner" [2c84fe99-a93a-4e7b-879f-88e8f8fba4ca] Running I0507 22:43:07.059881 668555 system_pods.go:74] duration metric: took 10.613338832s to wait for pod list to return data ... I0507 22:43:07.059893 668555 default_sa.go:34] waiting for default service account to be created ... I0507 22:43:07.062004 668555 default_sa.go:45] found service account: "default" I0507 22:43:07.062023 668555 default_sa.go:55] duration metric: took 2.120934ms for default service account to be created ... I0507 22:43:07.062033 668555 system_pods.go:116] waiting for k8s-apps to be running ... 
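At 22:42:56 the probe loop finally got a 200 from https://192.168.94.2:8443/healthz, pinned the control plane at v1.20.2, and moved on to the remaining readiness gates (system pods, default service account, k8s-apps), whose results follow. The same endpoint can be queried without reconstructing minikube's client certificates by reusing the kubeconfig context that minikube start created (a sketch, assuming that context exists):

    # prints the literal string "ok" when the apiserver reports healthy
    kubectl --context bridge-20210507224024-391940 get --raw /healthz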
I0507 22:43:07.065589 668555 system_pods.go:86] 7 kube-system pods found I0507 22:43:07.065610 668555 system_pods.go:89] "coredns-74ff55c5b-kn5r7" [9ddaf16f-4215-42a6-9c1e-6e41c9849ed7] Running I0507 22:43:07.065616 668555 system_pods.go:89] "etcd-bridge-20210507224024-391940" [3c78015b-db5c-4fe5-99b1-0109a5427769] Running I0507 22:43:07.065621 668555 system_pods.go:89] "kube-apiserver-bridge-20210507224024-391940" [5ae80380-0e21-4a97-be4c-5525da123dc4] Running I0507 22:43:07.065625 668555 system_pods.go:89] "kube-controller-manager-bridge-20210507224024-391940" [b564a595-393e-4968-a05d-54f07b816bcc] Running I0507 22:43:07.065629 668555 system_pods.go:89] "kube-proxy-ws99c" [d3170feb-4f47-4975-9f18-54a7340c425c] Running I0507 22:43:07.065633 668555 system_pods.go:89] "kube-scheduler-bridge-20210507224024-391940" [5c9a05aa-1efb-4844-9dec-d9729b234f6e] Running I0507 22:43:07.065637 668555 system_pods.go:89] "storage-provisioner" [2c84fe99-a93a-4e7b-879f-88e8f8fba4ca] Running I0507 22:43:07.065643 668555 system_pods.go:126] duration metric: took 3.604652ms to wait for k8s-apps to be running ... I0507 22:43:07.065649 668555 system_svc.go:44] waiting for kubelet service to be running .... I0507 22:43:07.065691 668555 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet I0507 22:43:07.075415 668555 system_svc.go:56] duration metric: took 9.760919ms WaitForService to wait for kubelet. I0507 22:43:07.075436 668555 kubeadm.go:538] duration metric: took 1m49.498734907s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ... I0507 22:43:07.075454 668555 node_conditions.go:102] verifying NodePressure condition ... I0507 22:43:07.078087 668555 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki I0507 22:43:07.078111 668555 node_conditions.go:123] node cpu capacity is 8 I0507 22:43:07.078125 668555 node_conditions.go:105] duration metric: took 2.66501ms to run NodePressure ... I0507 22:43:07.078136 668555 start.go:206] waiting for startup goroutines ... I0507 22:43:07.122445 668555 start.go:460] kubectl: 1.20.5, cluster: 1.20.2 (minor skew: 0) I0507 22:43:07.124854 668555 out.go:170] * Done! 
kubectl is now configured to use "bridge-20210507224024-391940" cluster and "default" namespace by default I0507 22:43:04.769578 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:06.769758 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:11.445319 634245 system_pods.go:86] 7 kube-system pods found I0507 22:43:11.445357 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:43:11.445365 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:43:11.445371 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:43:11.445376 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:43:11.445380 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:43:11.445384 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:43:11.445388 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:43:11.445411 634245 retry.go:31] will retry after 1m7.577191067s: missing components: kube-dns I0507 22:43:08.770354 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:10.770498 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:13.271448 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:15.770804 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:18.269214 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:20.269718 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:22.769151 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:24.771659 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:27.269262 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:29.269802 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:31.769488 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:33.769541 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:36.268974 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:38.269261 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:40.270280 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has 
status "Ready":"False" I0507 22:43:42.771006 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:45.269345 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:47.768594 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:49.769670 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:52.269433 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:54.769190 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:56.769657 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:59.269644 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:01.269772 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:03.769233 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:05.769576 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:08.269493 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:10.769584 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:12.770143 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:15.269008 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:17.269047 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:19.027342 634245 system_pods.go:86] 7 kube-system pods found I0507 22:44:19.027380 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:44:19.027389 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:44:19.027395 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:44:19.027400 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:44:19.027404 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:44:19.027408 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:44:19.027412 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:44:19.030342 634245 out.go:170] W0507 22:44:19.030464 634245 out.go:235] X Exiting due to GUEST_START: wait 5m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns W0507 22:44:19.030480 634245 out.go:424] no arguments passed for "* \n" - returning raw 
string W0507 22:44:19.030488 634245 out.go:235] * W0507 22:44:19.030504 634245 out.go:424] no arguments passed for "* If the above advice does not help, please let us know:\n" - returning raw string W0507 22:44:19.030511 634245 out.go:424] no arguments passed for " https://github.com/kubernetes/minikube/issues/new/choose\n\n" - returning raw string W0507 22:44:19.030516 634245 out.go:424] no arguments passed for "* Please attach the following file to the GitHub issue:\n" - returning raw string W0507 22:44:19.030577 634245 out.go:424] no arguments passed for "* If the above advice does not help, please let us know:\n https://github.com/kubernetes/minikube/issues/new/choose\n\n* Please attach the following file to the GitHub issue:\n* - /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/logs/lastStart.txt\n\n" - returning raw string W0507 22:44:19.032358 634245 out.go:235] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ W0507 22:44:19.032373 634245 out.go:235] │ │ W0507 22:44:19.032378 634245 out.go:235] │ * If the above advice does not help, please let us know: │ W0507 22:44:19.032383 634245 out.go:235] │ https://github.com/kubernetes/minikube/issues/new/choose │ W0507 22:44:19.032389 634245 out.go:235] │ │ W0507 22:44:19.032394 634245 out.go:235] │ * Please attach the following file to the GitHub issue: │ W0507 22:44:19.032399 634245 out.go:235] │ * - /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/logs/lastStart.txt │ W0507 22:44:19.032408 634245 out.go:235] │ │ W0507 22:44:19.032412 634245 out.go:235] ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ W0507 22:44:19.032420 634245 out.go:235] * * ==> container status <== * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID d14ceb5681dd7 6e38f40d628db 9 minutes ago Running storage-provisioner 0 93c0b1af44e30 313c5cc700f90 43154ddb57a83 9 minutes ago Running kube-proxy 0 bb73f6c52753b 16a5d9bfbb01a a27166429d98e 10 minutes ago Running kube-controller-manager 0 1c668186ace19 469df8196853f ed2c44fbdd78b 10 minutes ago Running kube-scheduler 0 f568c37bc70e9 65b0f048ab917 0369cf4303ffd 10 minutes ago Running etcd 0 a0ae6254b938d a6bffe1f7c2d3 a8c2fdb8bf76e 10 minutes ago Running kube-apiserver 0 098ed0d807d5b * * ==> containerd <== * -- Logs begin at Fri 2021-05-07 22:33:43 UTC, end at Fri 2021-05-07 22:44:19 UTC. 
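Note the switch of profiles here: the container status table and the containerd journal that follows belong to the minikube logs dump of the failed run (profile false-20210507223341-391940, whose GUEST_START exit is logged by pid 634245 just above), not to the bridge profile. That journal loops on a single failure: every RunPodSandbox attempt for coredns-74ff55c5b-q8wsb dies with `failed to set bridge addr: could not add IP address to "cni0": permission denied`, and the teardown then trips over iptables chains that were never installed. When a stale or conflicting cni0 bridge is the cause, one common remediation is to delete the bridge and let the CNI plugin recreate it; a hedged sketch, not verified against this run:

    # inspect the bridge the CNI plugin is failing to configure
    minikube ssh -p false-20210507223341-391940 "ip addr show cni0"
    # remove it so the next sandbox creation can re-add the correct address
    minikube ssh -p false-20210507223341-391940 "sudo ip link set cni0 down && sudo ip link delete cni0"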
-- May 07 22:41:09 false-20210507223341-391940 containerd[458]: time="2021-05-07T22:41:09.185977749Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-q8wsb,Uid:88c0b410-63d1-4438-992a-1980770e1223,Namespace:kube-system,Attempt:0,}" May 07 22:41:19 false-20210507223341-391940 containerd[458]: time="2021-05-07T22:41:19.330571700Z" level=error msg="Failed to destroy network for sandbox \"983ccc086daadbd4b920bb858a7accb5d636d3139e423931eeea64d84e16c7cd\"" error="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.20 -j CNI-6b60fbbfc3da4e6b98ddcaa3 -m comment --comment name: \"crio\" id: \"983ccc086daadbd4b920bb858a7accb5d636d3139e423931eeea64d84e16c7cd\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-6b60fbbfc3da4e6b98ddcaa3':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" May 07 22:41:19 false-20210507223341-391940 containerd[458]: time="2021-05-07T22:41:19.347649408Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-q8wsb,Uid:88c0b410-63d1-4438-992a-1980770e1223,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"983ccc086daadbd4b920bb858a7accb5d636d3139e423931eeea64d84e16c7cd\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied" May 07 22:41:33 false-20210507223341-391940 containerd[458]: time="2021-05-07T22:41:33.186047110Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-q8wsb,Uid:88c0b410-63d1-4438-992a-1980770e1223,Namespace:kube-system,Attempt:0,}" May 07 22:41:43 false-20210507223341-391940 containerd[458]: time="2021-05-07T22:41:43.438482015Z" level=error msg="Failed to destroy network for sandbox \"8bd5e7e095ad3a6b916633e76dd25f3f964d797ef53c8d7cb506286dc8183a82\"" error="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.21 -j CNI-4dae511cace90aa4a9765a01 -m comment --comment name: \"crio\" id: \"8bd5e7e095ad3a6b916633e76dd25f3f964d797ef53c8d7cb506286dc8183a82\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-4dae511cace90aa4a9765a01':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" May 07 22:41:43 false-20210507223341-391940 containerd[458]: time="2021-05-07T22:41:43.463622432Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-q8wsb,Uid:88c0b410-63d1-4438-992a-1980770e1223,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8bd5e7e095ad3a6b916633e76dd25f3f964d797ef53c8d7cb506286dc8183a82\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied" May 07 22:41:55 false-20210507223341-391940 containerd[458]: time="2021-05-07T22:41:55.186004269Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-q8wsb,Uid:88c0b410-63d1-4438-992a-1980770e1223,Namespace:kube-system,Attempt:0,}" May 07 22:42:05 false-20210507223341-391940 containerd[458]: time="2021-05-07T22:42:05.366590595Z" level=error msg="Failed to destroy network for sandbox \"5f6e0e9da02e61df4784e53b05bd56f16ebca0735986df12f62b3f21aa58b3d8\"" error="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.22 -j CNI-cd0c62c75aff93b17721fd26 -m comment --comment name: \"crio\" id: \"5f6e0e9da02e61df4784e53b05bd56f16ebca0735986df12f62b3f21aa58b3d8\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-cd0c62c75aff93b17721fd26':No such file or directory\n\nTry 
`iptables -h' or 'iptables --help' for more information.\n" May 07 22:42:05 false-20210507223341-391940 containerd[458]: time="2021-05-07T22:42:05.391638452Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-q8wsb,Uid:88c0b410-63d1-4438-992a-1980770e1223,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f6e0e9da02e61df4784e53b05bd56f16ebca0735986df12f62b3f21aa58b3d8\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied" May 07 22:42:20 false-20210507223341-391940 containerd[458]: time="2021-05-07T22:42:20.186166205Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-q8wsb,Uid:88c0b410-63d1-4438-992a-1980770e1223,Namespace:kube-system,Attempt:0,}" May 07 22:42:30 false-20210507223341-391940 containerd[458]: time="2021-05-07T22:42:30.326629045Z" level=error msg="Failed to destroy network for sandbox \"127b66e754bef83b34c7368cbf7de528503c7e97c0a62d03adf5c79c7b95d7f1\"" error="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.23 -j CNI-4c50924df69a7382df85cfa7 -m comment --comment name: \"crio\" id: \"127b66e754bef83b34c7368cbf7de528503c7e97c0a62d03adf5c79c7b95d7f1\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-4c50924df69a7382df85cfa7':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" May 07 22:42:30 false-20210507223341-391940 containerd[458]: time="2021-05-07T22:42:30.347626506Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-q8wsb,Uid:88c0b410-63d1-4438-992a-1980770e1223,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"127b66e754bef83b34c7368cbf7de528503c7e97c0a62d03adf5c79c7b95d7f1\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied" May 07 22:42:44 false-20210507223341-391940 containerd[458]: time="2021-05-07T22:42:44.186245358Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-q8wsb,Uid:88c0b410-63d1-4438-992a-1980770e1223,Namespace:kube-system,Attempt:0,}" May 07 22:42:54 false-20210507223341-391940 containerd[458]: time="2021-05-07T22:42:54.334402989Z" level=error msg="Failed to destroy network for sandbox \"71c1b0d3f14d4415af946164863fa337bc0570c0609360351134c7cb8e754ee2\"" error="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.24 -j CNI-bd0898c87a7a2efdceb22ba8 -m comment --comment name: \"crio\" id: \"71c1b0d3f14d4415af946164863fa337bc0570c0609360351134c7cb8e754ee2\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-bd0898c87a7a2efdceb22ba8':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" May 07 22:42:54 false-20210507223341-391940 containerd[458]: time="2021-05-07T22:42:54.359620768Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-q8wsb,Uid:88c0b410-63d1-4438-992a-1980770e1223,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"71c1b0d3f14d4415af946164863fa337bc0570c0609360351134c7cb8e754ee2\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied" May 07 22:43:06 false-20210507223341-391940 containerd[458]: time="2021-05-07T22:43:06.186304487Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-q8wsb,Uid:88c0b410-63d1-4438-992a-1980770e1223,Namespace:kube-system,Attempt:0,}" May 07 22:43:16 false-20210507223341-391940 
containerd[458]: time="2021-05-07T22:43:16.347445587Z" level=error msg="Failed to destroy network for sandbox \"9f213444526f7b5d9269b6c74cf97463fb60874c4d19a9e650c4cf5fec8f68fd\"" error="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.25 -j CNI-3e7e87e4399a9658b88699e7 -m comment --comment name: \"crio\" id: \"9f213444526f7b5d9269b6c74cf97463fb60874c4d19a9e650c4cf5fec8f68fd\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-3e7e87e4399a9658b88699e7':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" May 07 22:43:16 false-20210507223341-391940 containerd[458]: time="2021-05-07T22:43:16.379624242Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-q8wsb,Uid:88c0b410-63d1-4438-992a-1980770e1223,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9f213444526f7b5d9269b6c74cf97463fb60874c4d19a9e650c4cf5fec8f68fd\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied" May 07 22:43:30 false-20210507223341-391940 containerd[458]: time="2021-05-07T22:43:30.186191717Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-q8wsb,Uid:88c0b410-63d1-4438-992a-1980770e1223,Namespace:kube-system,Attempt:0,}" May 07 22:43:40 false-20210507223341-391940 containerd[458]: time="2021-05-07T22:43:40.350708567Z" level=error msg="Failed to destroy network for sandbox \"22f31adf91457e435c78f01a2de778ebe374bc6858a9b11f5fed44a1d0ba4fa5\"" error="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.26 -j CNI-3f9d7180c1efa45fc5ff9bda -m comment --comment name: \"crio\" id: \"22f31adf91457e435c78f01a2de778ebe374bc6858a9b11f5fed44a1d0ba4fa5\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-3f9d7180c1efa45fc5ff9bda':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" May 07 22:43:40 false-20210507223341-391940 containerd[458]: time="2021-05-07T22:43:40.367624894Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-q8wsb,Uid:88c0b410-63d1-4438-992a-1980770e1223,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"22f31adf91457e435c78f01a2de778ebe374bc6858a9b11f5fed44a1d0ba4fa5\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied" May 07 22:43:53 false-20210507223341-391940 containerd[458]: time="2021-05-07T22:43:53.186162721Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-q8wsb,Uid:88c0b410-63d1-4438-992a-1980770e1223,Namespace:kube-system,Attempt:0,}" May 07 22:44:03 false-20210507223341-391940 containerd[458]: time="2021-05-07T22:44:03.358750545Z" level=error msg="Failed to destroy network for sandbox \"c2453cb7ec7e223596fae6f4b4e566579765a10eceb3bd5c01806cbefd8c6b04\"" error="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.27 -j CNI-f02c93c9cff9b0b1fd30bc1d -m comment --comment name: \"crio\" id: \"c2453cb7ec7e223596fae6f4b4e566579765a10eceb3bd5c01806cbefd8c6b04\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-f02c93c9cff9b0b1fd30bc1d':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" May 07 22:44:03 false-20210507223341-391940 containerd[458]: time="2021-05-07T22:44:03.375591701Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-74ff55c5b-q8wsb,Uid:88c0b410-63d1-4438-992a-1980770e1223,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c2453cb7ec7e223596fae6f4b4e566579765a10eceb3bd5c01806cbefd8c6b04\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied" May 07 22:44:18 false-20210507223341-391940 containerd[458]: time="2021-05-07T22:44:18.186203375Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-q8wsb,Uid:88c0b410-63d1-4438-992a-1980770e1223,Namespace:kube-system,Attempt:0,}" * * ==> describe nodes <== * Name: false-20210507223341-391940 Roles: control-plane,master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=false-20210507223341-391940 kubernetes.io/os=linux minikube.k8s.io/commit=c31bd57f93d45726e4bd30607374f8c720e70e95 minikube.k8s.io/name=false-20210507223341-391940 minikube.k8s.io/updated_at=2021_05_07T22_34_25_0700 minikube.k8s.io/version=v1.20.0 node-role.kubernetes.io/control-plane= node-role.kubernetes.io/master= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Fri, 07 May 2021 22:34:14 +0000 Taints: Unschedulable: false Lease: HolderIdentity: false-20210507223341-391940 AcquireTime: RenewTime: Fri, 07 May 2021 22:44:16 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Fri, 07 May 2021 22:40:00 +0000 Fri, 07 May 2021 22:34:11 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Fri, 07 May 2021 22:40:00 +0000 Fri, 07 May 2021 22:34:11 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Fri, 07 May 2021 22:40:00 +0000 Fri, 07 May 2021 22:34:11 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Fri, 07 May 2021 22:40:00 +0000 Fri, 07 May 2021 22:34:40 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.67.2 Hostname: false-20210507223341-391940 Capacity: cpu: 8 ephemeral-storage: 309568300Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32951376Ki pods: 110 Allocatable: cpu: 8 ephemeral-storage: 309568300Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32951376Ki pods: 110 System Info: Machine ID: 822f5ed6656e44929f6c2cc5d6881453 System UUID: ed296a4f-cf88-4dca-9918-6b09c891d9f3 Boot ID: a4d5e757-68dd-498f-8a27-b6d8b368f45c Kernel Version: 4.9.0-15-amd64 OS Image: Ubuntu 20.04.2 LTS Operating System: linux Architecture: amd64 Container Runtime Version: containerd://1.4.4 Kubelet Version: v1.20.2 Kube-Proxy Version: v1.20.2 PodCIDR: 10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (7 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE --------- ---- ------------ ---------- --------------- ------------- --- kube-system coredns-74ff55c5b-q8wsb 100m (1%!)(MISSING) 0 (0%!)(MISSING) 70Mi (0%!)(MISSING) 170Mi (0%!)(MISSING) 9m40s kube-system etcd-false-20210507223341-391940 100m (1%!)(MISSING) 0 (0%!)(MISSING) 100Mi (0%!)(MISSING) 0 (0%!)(MISSING) 9m50s kube-system kube-apiserver-false-20210507223341-391940 250m (3%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 9m50s kube-system kube-controller-manager-false-20210507223341-391940 200m (2%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 
0 (0%!)(MISSING) 9m50s kube-system kube-proxy-bmhxt 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 9m40s kube-system kube-scheduler-false-20210507223341-391940 100m (1%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 9m50s kube-system storage-provisioner 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 9m39s Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 750m (9%!)(MISSING) 0 (0%!)(MISSING) memory 170Mi (0%!)(MISSING) 170Mi (0%!)(MISSING) ephemeral-storage 100Mi (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-1Gi 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeHasSufficientMemory 10m (x5 over 10m) kubelet Node false-20210507223341-391940 status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 10m (x4 over 10m) kubelet Node false-20210507223341-391940 status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 10m (x4 over 10m) kubelet Node false-20210507223341-391940 status is now: NodeHasSufficientPID Normal Starting 9m50s kubelet Starting kubelet. Normal NodeHasSufficientMemory 9m50s kubelet Node false-20210507223341-391940 status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 9m50s kubelet Node false-20210507223341-391940 status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 9m50s kubelet Node false-20210507223341-391940 status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 9m50s kubelet Updated Node Allocatable limit across pods Normal NodeReady 9m40s kubelet Node false-20210507223341-391940 status is now: NodeReady Normal Starting 9m39s kube-proxy Starting kube-proxy. * * ==> dmesg <== * [ +0.000002] ll header: 00000000: ff ff ff ff ff ff da b0 18 fd a8 43 08 06 ...........C.. [ +22.424254] IPv4: martian source 10.85.0.4 from 10.85.0.4, on dev eth0 [ +0.000003] ll header: 00000000: ff ff ff ff ff ff 9a ce f9 ec f7 3b 08 06 ...........;.. [ +1.589728] IPv4: martian source 10.85.0.24 from 10.85.0.24, on dev eth0 [ +0.000002] ll header: 00000000: ff ff ff ff ff ff e6 00 77 b2 11 ce 08 06 ........w..... [May 7 22:43] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev eth0 [ +0.000002] ll header: 00000000: ff ff ff ff ff ff 02 11 2e 55 cd 5b 08 06 .........U.[.. [ +7.688066] IPv4: martian source 10.85.0.5 from 10.85.0.5, on dev eth0 [ +0.000002] ll header: 00000000: ff ff ff ff ff ff 52 52 39 43 97 9b 08 06 ......RR9C.... [ +0.152613] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0 [ +0.000003] ll header: 00000000: ff ff ff ff ff ff 02 11 2e 55 cd 5b 08 06 .........U.[.. [ +0.000295] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0 [ +0.000002] ll header: 00000000: ff ff ff ff ff ff ce 1d 57 a8 93 37 08 06 ........W..7.. [ +0.420471] IPv4: martian source 10.85.0.25 from 10.85.0.25, on dev eth0 [ +0.000003] ll header: 00000000: ff ff ff ff ff ff ae c0 70 d9 43 f8 08 06 ........p.C... [ +20.407815] IPv4: martian source 10.85.0.6 from 10.85.0.6, on dev eth0 [ +0.000002] ll header: 00000000: ff ff ff ff ff ff d6 35 e2 c4 32 c9 08 06 .......5..2... [ +3.593369] IPv4: martian source 10.85.0.26 from 10.85.0.26, on dev eth0 [ +0.000002] ll header: 00000000: ff ff ff ff ff ff 46 04 16 6c ad 9e 08 06 ......F..l.... 
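The surrounding dmesg lines are dominated by "martian source" warnings: the kernel flagging packets whose source addresses (10.85.0.x from the default CRI bridge range, 10.244.0.x from the pod CIDR) it considers implausible on the receiving interface eth0, which is consistent with the cni0 trouble above. Logging of these packets is a sysctl toggle; to inspect or, for quieter debugging, disable it inside the node (hypothetical commands, same false- profile):

    minikube ssh -p false-20210507223341-391940 "sysctl net.ipv4.conf.all.log_martians"
    # 0 silences the warnings; this is purely a log-noise knob and does not fix the routing
    minikube ssh -p false-20210507223341-391940 "sudo sysctl -w net.ipv4.conf.all.log_martians=0"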
[ +18.447912] IPv4: martian source 10.85.0.7 from 10.85.0.7, on dev eth0 [ +0.000003] ll header: 00000000: ff ff ff ff ff ff 76 b6 03 ab 2e cd 08 06 ......v....... [May 7 22:44] IPv4: martian source 10.85.0.27 from 10.85.0.27, on dev eth0 [ +0.000003] ll header: 00000000: ff ff ff ff ff ff b6 a0 69 4d 50 4c 08 06 ........iMPL.. [ +16.435267] IPv4: martian source 10.85.0.8 from 10.85.0.8, on dev eth0 [ +0.000002] ll header: 00000000: ff ff ff ff ff ff ce cf 01 9b 90 ad 08 06 .............. * * ==> etcd [65b0f048ab917acd8b7defae076f2030e6aa99137e9face4eb533dda95cc20cb] <== * 2021-05-07 22:41:34.886762 W | etcdserver: read-only range request "key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" count_only:true " with result "range_response_count:0 size:7" took too long (2.429739193s) to execute 2021-05-07 22:41:36.409082 W | wal: sync duration of 1.521016836s, expected less than 1s 2021-05-07 22:41:36.410794 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:644" took too long (1.522177976s) to execute 2021-05-07 22:41:36.410853 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.514097878s) to execute 2021-05-07 22:41:36.411108 W | etcdserver: read-only range request "key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true " with result "range_response_count:0 size:5" took too long (590.2819ms) to execute 2021-05-07 22:41:36.411288 W | etcdserver: read-only range request "key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true " with result "range_response_count:0 size:7" took too long (1.477593688s) to execute 2021-05-07 22:41:36.411429 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (1.511801508s) to execute 2021-05-07 22:41:42.373743 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-07 22:41:52.373847 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-07 22:42:02.373778 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-07 22:42:12.373839 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-07 22:42:22.373795 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-07 22:42:32.373802 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-07 22:42:42.373830 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-07 22:42:52.373772 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-07 22:43:02.373770 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-07 22:43:12.373800 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-07 22:43:22.373780 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-07 22:43:32.373760 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-07 22:43:42.373761 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-07 22:43:52.373785 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-07 22:44:02.373776 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-07 22:44:11.862740 I | mvcc: store.index: compact 659 2021-05-07 22:44:11.863670 I | mvcc: finished scheduled compaction at 659 (took 691.009µs) 2021-05-07 22:44:12.373810 I | etcdserver/api/etcdhttp: /health OK (status code 200) * * ==> kernel <== * 22:44:20 up 3:23, 0 users, load average: 
0.85, 2.03, 2.21 Linux false-20210507223341-391940 4.9.0-15-amd64 #1 SMP Debian 4.9.258-1 (2021-03-08) x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.2 LTS" * * ==> kube-apiserver [a6bffe1f7c2d33d79a1f6208aa698f88c8090c7cb95f32586e1c3e451131814a] <== * Trace[487759212]: ---"Object stored in database" 1366ms (22:41:00.413) Trace[487759212]: [1.366461161s] [1.366461161s] END I0507 22:41:36.413980 1 trace.go:205] Trace[1979753258]: "Update" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.67.2 (07-May-2021 22:41:34.895) (total time: 1518ms): Trace[1979753258]: ---"Object stored in database" 1518ms (22:41:00.413) Trace[1979753258]: [1.518934058s] [1.518934058s] END I0507 22:41:36.416553 1 trace.go:205] Trace[425304245]: "List etcd3" key:/cronjobs,resourceVersion:,resourceVersionMatch:,limit:500,continue: (07-May-2021 22:41:34.899) (total time: 1517ms): Trace[425304245]: [1.517225745s] [1.517225745s] END I0507 22:41:36.416637 1 trace.go:205] Trace[637009294]: "List" url:/apis/batch/v1beta1/cronjobs,user-agent:kube-controller-manager/v1.20.2 (linux/amd64) kubernetes/faecb19/system:serviceaccount:kube-system:cronjob-controller,client:192.168.67.2 (07-May-2021 22:41:34.899) (total time: 1517ms): Trace[637009294]: ---"Listing from storage done" 1517ms (22:41:00.416) Trace[637009294]: [1.51734919s] [1.51734919s] END I0507 22:41:52.805320 1 client.go:360] parsed scheme: "passthrough" I0507 22:41:52.805370 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0507 22:41:52.805380 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0507 22:42:26.183398 1 client.go:360] parsed scheme: "passthrough" I0507 22:42:26.183439 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0507 22:42:26.183449 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0507 22:43:00.977712 1 client.go:360] parsed scheme: "passthrough" I0507 22:43:00.977755 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0507 22:43:00.977764 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0507 22:43:37.853712 1 client.go:360] parsed scheme: "passthrough" I0507 22:43:37.853754 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0507 22:43:37.853761 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0507 22:44:11.778600 1 client.go:360] parsed scheme: "passthrough" I0507 22:44:11.778653 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0507 22:44:11.778661 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * * ==> kube-controller-manager [16a5d9bfbb01a5686500f5141b314de652e264211dcab6ff9f3fb68ba1c45984] <== * I0507 22:34:40.332345 1 event.go:291] "Event occurred" object="false-20210507223341-391940" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node false-20210507223341-391940 event: Registered Node false-20210507223341-391940 in Controller" I0507 22:34:40.332504 1 shared_informer.go:247] Caches are synced for attach detach I0507 22:34:40.334565 1 shared_informer.go:247] Caches are synced for expand I0507 22:34:40.343385 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set 
coredns-74ff55c5b to 2" I0507 22:34:40.346018 1 shared_informer.go:247] Caches are synced for PVC protection I0507 22:34:40.352336 1 range_allocator.go:373] Set node false-20210507223341-391940 PodCIDR to [10.244.0.0/24] I0507 22:34:40.352781 1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-false-20210507223341-391940" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready" I0507 22:34:40.352824 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-tzwx6" I0507 22:34:40.431616 1 shared_informer.go:247] Caches are synced for persistent volume I0507 22:34:40.431617 1 shared_informer.go:247] Caches are synced for disruption I0507 22:34:40.431740 1 disruption.go:339] Sending events to api server. I0507 22:34:40.431763 1 shared_informer.go:247] Caches are synced for stateful set E0507 22:34:40.432216 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again I0507 22:34:40.432869 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-q8wsb" I0507 22:34:40.457134 1 shared_informer.go:247] Caches are synced for endpoint I0507 22:34:40.460809 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring I0507 22:34:40.583896 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1" I0507 22:34:40.588567 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-tzwx6" I0507 22:34:40.622914 1 shared_informer.go:247] Caches are synced for resource quota I0507 22:34:40.624522 1 shared_informer.go:247] Caches are synced for resource quota I0507 22:34:40.676979 1 shared_informer.go:240] Waiting for caches to sync for garbage collector I0507 22:34:40.977190 1 shared_informer.go:247] Caches are synced for garbage collector I0507 22:34:41.022715 1 shared_informer.go:247] Caches are synced for garbage collector I0507 22:34:41.022747 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I0507 22:34:45.332205 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode. * * ==> kube-proxy [313c5cc700f902d29c6df674481483d8ad0a71de3269d40fd5a7b5a302c836d4] <== * I0507 22:34:41.275343 1 node.go:172] Successfully retrieved node IP: 192.168.67.2 I0507 22:34:41.275410 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.67.2), assume IPv4 operation W0507 22:34:41.289997 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy I0507 22:34:41.290089 1 server_others.go:185] Using iptables Proxier. 
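kube-proxy saw an empty proxy mode in its config, fell back to the iptables proxier, and raised the conntrack limits that service NAT depends on (nf_conntrack_max to 262144). Both outcomes can be confirmed from inside the node once the proxier has synced (a sketch against the same false- profile; KUBE-SERVICES is the iptables proxier's standard entry chain):

    minikube ssh -p false-20210507223341-391940 "sudo sysctl net.netfilter.nf_conntrack_max"
    # service VIP rules installed by the iptables proxier hang off this chain
    minikube ssh -p false-20210507223341-391940 "sudo iptables -t nat -L KUBE-SERVICES -n | head"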
I0507 22:34:41.290364 1 server.go:650] Version: v1.20.2
I0507 22:34:41.290898 1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0507 22:34:41.290990 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0507 22:34:41.291049 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0507 22:34:41.291756 1 config.go:315] Starting service config controller
I0507 22:34:41.291773 1 shared_informer.go:240] Waiting for caches to sync for service config
I0507 22:34:41.291795 1 config.go:224] Starting endpoint slice config controller
I0507 22:34:41.291805 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0507 22:34:41.392004 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0507 22:34:41.392003 1 shared_informer.go:247] Caches are synced for service config
*
* ==> kube-scheduler [469df8196853fb8b2b194b47e8ce03139b5ed29809e63646d656cef725705dce] <==
*
E0507 22:34:15.870301 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0507 22:34:15.928518 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0507 22:34:15.968129 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0507 22:34:15.972512 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0507 22:34:16.099671 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0507 22:34:16.170197 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0507 22:34:16.187315 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0507 22:34:16.237466 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0507 22:34:16.342408 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0507 22:34:17.687147 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0507 22:34:17.934958 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0507 22:34:18.160109 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0507 22:34:18.243727 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0507 22:34:18.288342 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0507 22:34:18.363614 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0507 22:34:18.363615 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0507 22:34:18.562331 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0507 22:34:18.671897 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0507 22:34:18.828784 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0507 22:34:18.941255 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0507 22:34:19.129533 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0507 22:34:21.943826 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0507 22:34:22.068496 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0507 22:34:22.539796 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
I0507 22:34:24.033689 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
*
-- Logs begin at Fri 2021-05-07 22:33:43 UTC, end at Fri 2021-05-07 22:44:20 UTC. --
May 07 22:41:43 false-20210507223341-391940 kubelet[1195]: E0507 22:41:43.463982 1195 pod_workers.go:191] Error syncing pod 88c0b410-63d1-4438-992a-1980770e1223 ("coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"8bd5e7e095ad3a6b916633e76dd25f3f964d797ef53c8d7cb506286dc8183a82\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
May 07 22:42:05 false-20210507223341-391940 kubelet[1195]: E0507 22:42:05.391890 1195 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "5f6e0e9da02e61df4784e53b05bd56f16ebca0735986df12f62b3f21aa58b3d8": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:42:05 false-20210507223341-391940 kubelet[1195]: E0507 22:42:05.391961 1195 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "5f6e0e9da02e61df4784e53b05bd56f16ebca0735986df12f62b3f21aa58b3d8": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:42:05 false-20210507223341-391940 kubelet[1195]: E0507 22:42:05.391979 1195 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "5f6e0e9da02e61df4784e53b05bd56f16ebca0735986df12f62b3f21aa58b3d8": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:42:05 false-20210507223341-391940 kubelet[1195]: E0507 22:42:05.392032 1195 pod_workers.go:191] Error syncing pod 88c0b410-63d1-4438-992a-1980770e1223 ("coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"5f6e0e9da02e61df4784e53b05bd56f16ebca0735986df12f62b3f21aa58b3d8\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
May 07 22:42:30 false-20210507223341-391940 kubelet[1195]: E0507 22:42:30.347838 1195 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "127b66e754bef83b34c7368cbf7de528503c7e97c0a62d03adf5c79c7b95d7f1": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:42:30 false-20210507223341-391940 kubelet[1195]: E0507 22:42:30.347902 1195 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "127b66e754bef83b34c7368cbf7de528503c7e97c0a62d03adf5c79c7b95d7f1": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:42:30 false-20210507223341-391940 kubelet[1195]: E0507 22:42:30.347920 1195 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "127b66e754bef83b34c7368cbf7de528503c7e97c0a62d03adf5c79c7b95d7f1": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:42:30 false-20210507223341-391940 kubelet[1195]: E0507 22:42:30.347969 1195 pod_workers.go:191] Error syncing pod 88c0b410-63d1-4438-992a-1980770e1223 ("coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"127b66e754bef83b34c7368cbf7de528503c7e97c0a62d03adf5c79c7b95d7f1\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
May 07 22:42:54 false-20210507223341-391940 kubelet[1195]: E0507 22:42:54.359847 1195 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "71c1b0d3f14d4415af946164863fa337bc0570c0609360351134c7cb8e754ee2": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:42:54 false-20210507223341-391940 kubelet[1195]: E0507 22:42:54.359918 1195 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "71c1b0d3f14d4415af946164863fa337bc0570c0609360351134c7cb8e754ee2": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:42:54 false-20210507223341-391940 kubelet[1195]: E0507 22:42:54.359937 1195 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "71c1b0d3f14d4415af946164863fa337bc0570c0609360351134c7cb8e754ee2": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:42:54 false-20210507223341-391940 kubelet[1195]: E0507 22:42:54.360013 1195 pod_workers.go:191] Error syncing pod 88c0b410-63d1-4438-992a-1980770e1223 ("coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"71c1b0d3f14d4415af946164863fa337bc0570c0609360351134c7cb8e754ee2\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
May 07 22:43:16 false-20210507223341-391940 kubelet[1195]: E0507 22:43:16.379866 1195 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "9f213444526f7b5d9269b6c74cf97463fb60874c4d19a9e650c4cf5fec8f68fd": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:43:16 false-20210507223341-391940 kubelet[1195]: E0507 22:43:16.379935 1195 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "9f213444526f7b5d9269b6c74cf97463fb60874c4d19a9e650c4cf5fec8f68fd": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:43:16 false-20210507223341-391940 kubelet[1195]: E0507 22:43:16.379950 1195 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "9f213444526f7b5d9269b6c74cf97463fb60874c4d19a9e650c4cf5fec8f68fd": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:43:16 false-20210507223341-391940 kubelet[1195]: E0507 22:43:16.379999 1195 pod_workers.go:191] Error syncing pod 88c0b410-63d1-4438-992a-1980770e1223 ("coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"9f213444526f7b5d9269b6c74cf97463fb60874c4d19a9e650c4cf5fec8f68fd\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
May 07 22:43:40 false-20210507223341-391940 kubelet[1195]: E0507 22:43:40.367848 1195 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "22f31adf91457e435c78f01a2de778ebe374bc6858a9b11f5fed44a1d0ba4fa5": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:43:40 false-20210507223341-391940 kubelet[1195]: E0507 22:43:40.367911 1195 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "22f31adf91457e435c78f01a2de778ebe374bc6858a9b11f5fed44a1d0ba4fa5": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:43:40 false-20210507223341-391940 kubelet[1195]: E0507 22:43:40.367927 1195 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "22f31adf91457e435c78f01a2de778ebe374bc6858a9b11f5fed44a1d0ba4fa5": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:43:40 false-20210507223341-391940 kubelet[1195]: E0507 22:43:40.367984 1195 pod_workers.go:191] Error syncing pod 88c0b410-63d1-4438-992a-1980770e1223 ("coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"22f31adf91457e435c78f01a2de778ebe374bc6858a9b11f5fed44a1d0ba4fa5\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
May 07 22:44:03 false-20210507223341-391940 kubelet[1195]: E0507 22:44:03.375826 1195 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "c2453cb7ec7e223596fae6f4b4e566579765a10eceb3bd5c01806cbefd8c6b04": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:44:03 false-20210507223341-391940 kubelet[1195]: E0507 22:44:03.375898 1195 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "c2453cb7ec7e223596fae6f4b4e566579765a10eceb3bd5c01806cbefd8c6b04": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:44:03 false-20210507223341-391940 kubelet[1195]: E0507 22:44:03.375914 1195 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "c2453cb7ec7e223596fae6f4b4e566579765a10eceb3bd5c01806cbefd8c6b04": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:44:03 false-20210507223341-391940 kubelet[1195]: E0507 22:44:03.375970 1195 pod_workers.go:191] Error syncing pod 88c0b410-63d1-4438-992a-1980770e1223 ("coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-q8wsb_kube-system(88c0b410-63d1-4438-992a-1980770e1223)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"c2453cb7ec7e223596fae6f4b4e566579765a10eceb3bd5c01806cbefd8c6b04\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
*
* ==> storage-provisioner [d14ceb5681dd7e442815398cbe80a86fe32219b787da4ebf8c47f3a0244338e9] <==
*
I0507 22:34:42.495179 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0507 22:34:42.502718 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0507 22:34:42.502756 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0507 22:34:42.510850 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0507 22:34:42.510998 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_false-20210507223341-391940_cf5fe9d6-de8a-4f18-a56b-f1c089d5ec03!
I0507 22:34:42.510996 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8143ecb3-0676-4b23-9e48-b94cc2dc710d", APIVersion:"v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' false-20210507223341-391940_cf5fe9d6-de8a-4f18-a56b-f1c089d5ec03 became leader
I0507 22:34:42.611625 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_false-20210507223341-391940_cf5fe9d6-de8a-4f18-a56b-f1c089d5ec03!
-- /stdout --
helpers_test.go:250: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p false-20210507223341-391940 -n false-20210507223341-391940
helpers_test.go:257: (dbg) Run: kubectl --context false-20210507223341-391940 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:263: non-running pods: coredns-74ff55c5b-q8wsb
helpers_test.go:265: ======> post-mortem[TestNetworkPlugins/group/false]: describe non-running pods <======
helpers_test.go:268: (dbg) Run: kubectl --context false-20210507223341-391940 describe pod coredns-74ff55c5b-q8wsb
helpers_test.go:268: (dbg) Non-zero exit: kubectl --context false-20210507223341-391940 describe pod coredns-74ff55c5b-q8wsb: exit status 1 (63.943676ms)
** stderr **
Error from server (NotFound): pods "coredns-74ff55c5b-q8wsb" not found
** /stderr **
helpers_test.go:270: kubectl --context false-20210507223341-391940 describe pod coredns-74ff55c5b-q8wsb: exit status 1
helpers_test.go:171: Cleaning up "false-20210507223341-391940" profile ...
helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p false-20210507223341-391940
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p false-20210507223341-391940: (2.427550009s)
E0507 22:44:58.573230 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/client.crt: no such file or directory E0507 22:44:58.578501 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/client.crt: no such file or directory E0507 22:44:58.588697 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/client.crt: no such file or directory E0507 22:44:58.608934 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/client.crt: no such file or directory E0507 22:44:58.649449 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/client.crt: no such file or directory E0507 22:44:58.729767 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/client.crt: no such file or directory E0507 22:44:58.890510 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/client.crt: no such file or directory E0507 22:44:59.210670 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/client.crt: no such file or directory E0507 22:44:59.597410 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory E0507 22:44:59.850831 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/client.crt: no such file or directory E0507 22:45:01.131225 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/client.crt: no such file or directory E0507 22:45:03.694233 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/client.crt: no such file
or directory E0507 22:45:08.814981 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/client.crt: no such file or directory E0507 22:45:12.589732 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/client.crt: no such file or directory E0507 22:45:12.595055 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/client.crt: no such file or directory E0507 22:45:12.605279 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/client.crt: no such file or directory E0507 22:45:12.625503 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/client.crt: no such file or directory E0507 22:45:12.665948 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/client.crt: no such file or directory E0507 22:45:12.746636 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/client.crt: no such file or directory E0507 22:45:12.907303 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/client.crt: no such file or directory E0507 22:45:13.227567 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/client.crt: no such file or directory E0507 22:45:13.868662 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/client.crt: no such file or directory E0507 22:45:15.149620 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/client.crt: no such file or directory E0507 22:45:17.710418 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/client.crt: no such file or directory E0507 22:45:18.629372 391940 cert_rotation.go:168] key failed with : open 
/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory E0507 22:45:19.055983 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/client.crt: no such file or directory E0507 22:45:22.830599 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/client.crt: no such file or directory E0507 22:45:31.218883 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/enable-default-cni-20210507223814-391940/client.crt: no such file or directory E0507 22:45:31.224137 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/enable-default-cni-20210507223814-391940/client.crt: no such file or directory E0507 22:45:31.234492 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/enable-default-cni-20210507223814-391940/client.crt: no such file or directory E0507 22:45:31.254655 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/enable-default-cni-20210507223814-391940/client.crt: no such file or directory E0507 22:45:31.294976 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/enable-default-cni-20210507223814-391940/client.crt: no such file or directory E0507 22:45:31.375239 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/enable-default-cni-20210507223814-391940/client.crt: no such file or directory E0507 22:45:31.535803 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/enable-default-cni-20210507223814-391940/client.crt: no such file or directory E0507 22:45:31.856523 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/enable-default-cni-20210507223814-391940/client.crt: no such file or directory E0507 22:45:32.496922 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/enable-default-cni-20210507223814-391940/client.crt: no such file or directory E0507 22:45:33.071243 391940 cert_rotation.go:168] key failed with : open 
/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/client.crt: no such file or directory E0507 22:45:33.777750 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/enable-default-cni-20210507223814-391940/client.crt: no such file or directory E0507 22:45:36.338136 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/enable-default-cni-20210507223814-391940/client.crt: no such file or directory E0507 22:45:39.536984 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/client.crt: no such file or directory E0507 22:45:41.458776 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/enable-default-cni-20210507223814-391940/client.crt: no such file or directory E0507 22:45:46.313639 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory E0507 22:45:51.699093 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/enable-default-cni-20210507223814-391940/client.crt: no such file or directory E0507 22:45:53.552148 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/client.crt: no such file or directory E0507 22:46:12.180148 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/enable-default-cni-20210507223814-391940/client.crt: no such file or directory E0507 22:46:20.498017 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/client.crt: no such file or directory E0507 22:46:34.513300 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/client.crt: no such file or directory E0507 22:46:52.729460 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory E0507 22:46:53.141126 391940 cert_rotation.go:168] key failed with : open 
/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/enable-default-cni-20210507223814-391940/client.crt: no such file or directory E0507 22:46:59.411999 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory E0507 22:47:09.423147 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory E0507 22:47:15.754316 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory E0507 22:47:19.733976 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kindnet-20210507224017-391940/client.crt: no such file or directory E0507 22:47:19.739234 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kindnet-20210507224017-391940/client.crt: no such file or directory E0507 22:47:19.749445 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kindnet-20210507224017-391940/client.crt: no such file or directory E0507 22:47:19.769757 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kindnet-20210507224017-391940/client.crt: no such file or directory E0507 22:47:19.810070 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kindnet-20210507224017-391940/client.crt: no such file or directory E0507 22:47:19.890402 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kindnet-20210507224017-391940/client.crt: no such file or directory E0507 22:47:20.050952 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kindnet-20210507224017-391940/client.crt: no such file or directory E0507 22:47:20.371586 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kindnet-20210507224017-391940/client.crt: no such file or directory E0507 22:47:21.012037 391940 cert_rotation.go:168] key failed with : open 
/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kindnet-20210507224017-391940/client.crt: no such file or directory E0507 22:47:22.292376 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kindnet-20210507224017-391940/client.crt: no such file or directory E0507 22:47:24.853331 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kindnet-20210507224017-391940/client.crt: no such file or directory E0507 22:47:29.974395 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kindnet-20210507224017-391940/client.crt: no such file or directory E0507 22:47:40.214571 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kindnet-20210507224017-391940/client.crt: no such file or directory E0507 22:47:40.847660 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory E0507 22:47:42.418278 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/client.crt: no such file or directory E0507 22:47:43.438395 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory E0507 22:47:56.434133 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/client.crt: no such file or directory E0507 22:48:00.695192 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kindnet-20210507224017-391940/client.crt: no such file or directory E0507 22:48:00.823100 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory E0507 22:48:07.638903 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/bridge-20210507224024-391940/client.crt: no such file or directory E0507 22:48:07.644146 391940 cert_rotation.go:168] key failed with : open 
/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/bridge-20210507224024-391940/client.crt: no such file or directory E0507 22:48:07.654362 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/bridge-20210507224024-391940/client.crt: no such file or directory E0507 22:48:07.674580 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/bridge-20210507224024-391940/client.crt: no such file or directory E0507 22:48:07.714810 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/bridge-20210507224024-391940/client.crt: no such file or directory E0507 22:48:07.795730 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/bridge-20210507224024-391940/client.crt: no such file or directory E0507 22:48:07.956369 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/bridge-20210507224024-391940/client.crt: no such file or directory E0507 22:48:08.276860 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/bridge-20210507224024-391940/client.crt: no such file or directory E0507 22:48:08.917276 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/bridge-20210507224024-391940/client.crt: no such file or directory E0507 22:48:10.197661 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/bridge-20210507224024-391940/client.crt: no such file or directory E0507 22:48:12.758019 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/bridge-20210507224024-391940/client.crt: no such file or directory E0507 22:48:15.061324 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/enable-default-cni-20210507223814-391940/client.crt: no such file or directory E0507 22:48:15.775797 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory E0507 22:48:17.776829 391940 cert_rotation.go:168] key failed with : open 
/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory E0507 22:48:17.879194 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/bridge-20210507224024-391940/client.crt: no such file or directory E0507 22:48:28.119821 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/bridge-20210507224024-391940/client.crt: no such file or directory E0507 22:48:41.656317 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kindnet-20210507224017-391940/client.crt: no such file or directory E0507 22:48:48.600438 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/bridge-20210507224024-391940/client.crt: no such file or directory E0507 22:49:03.894052 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory E0507 22:49:29.561094 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/bridge-20210507224024-391940/client.crt: no such file or directory E0507 22:49:58.572586 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/client.crt: no such file or directory E0507 22:50:03.576522 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kindnet-20210507224017-391940/client.crt: no such file or directory E0507 22:50:12.589782 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/client.crt: no such file or directory E0507 22:50:18.629823 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory E0507 22:50:26.259023 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/calico-20210507223733-391940/client.crt: no such file or directory E0507 22:50:31.219343 391940 cert_rotation.go:168] key failed with : open 
/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/enable-default-cni-20210507223814-391940/client.crt: no such file or directory E0507 22:50:40.274770 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/custom-weave-20210507223739-391940/client.crt: no such file or directory E0507 22:50:51.481914 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/bridge-20210507224024-391940/client.crt: no such file or directory E0507 22:50:58.901635 391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/enable-default-cni-20210507223814-391940/client.crt: no such file or directory
=== CONT TestNetworkPlugins/group/kubenet/Start
net_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubenet-20210507224052-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker --container-runtime=containerd: exit status 80 (10m29.68184539s)
-- stdout --
* [kubenet-20210507224052-391940] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube
- MINIKUBE_LOCATION=master
* Using the docker driver based on user configuration
- More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
* Starting control plane node kubenet-20210507224052-391940 in cluster kubenet-20210507224052-391940
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2048MB) ...
* Preparing Kubernetes v1.20.2 on containerd 1.4.4 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
-- /stdout --
** stderr **
I0507 22:40:52.878518 672811 out.go:291] Setting OutFile to fd 1 ...
I0507 22:40:52.878673 672811 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 22:40:52.878682 672811 out.go:304] Setting ErrFile to fd 2...
I0507 22:40:52.878685 672811 out.go:338] TERM=,COLORTERM=, which probably does not support color I0507 22:40:52.878775 672811 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/bin I0507 22:40:52.879029 672811 out.go:298] Setting JSON to false I0507 22:40:52.914708 672811 start.go:108] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":12020,"bootTime":1620415232,"procs":350,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"} I0507 22:40:52.914791 672811 start.go:118] virtualization: kvm guest I0507 22:40:52.917552 672811 out.go:170] * [kubenet-20210507224052-391940] minikube v1.20.0 on Debian 9.13 (kvm/amd64) I0507 22:40:52.919004 672811 out.go:170] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig I0507 22:40:52.920381 672811 out.go:170] - MINIKUBE_BIN=out/minikube-linux-amd64 I0507 22:40:52.921826 672811 out.go:170] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube I0507 22:40:52.923176 672811 out.go:170] - MINIKUBE_LOCATION=master I0507 22:40:52.923813 672811 driver.go:322] Setting default libvirt URI to qemu:///system I0507 22:40:52.971346 672811 docker.go:119] docker version: linux-19.03.15 I0507 22:40:52.971454 672811 cli_runner.go:115] Run: docker system info --format "{{json .}}" I0507 22:40:53.057850 672811 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:true NGoroutines:79 SystemTime:2021-05-07 22:40:53.008217117 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} 
RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}} I0507 22:40:53.057941 672811 docker.go:225] overlay module found I0507 22:40:53.060186 672811 out.go:170] * Using the docker driver based on user configuration I0507 22:40:53.060214 672811 start.go:276] selected driver: docker I0507 22:40:53.060222 672811 start.go:718] validating driver "docker" against I0507 22:40:53.060244 672811 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:} W0507 22:40:53.060288 672811 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. W0507 22:40:53.060303 672811 out.go:424] no arguments passed for "! Your cgroup does not allow setting memory.\n" - returning raw string W0507 22:40:53.060323 672811 out.go:235] ! Your cgroup does not allow setting memory. ! Your cgroup does not allow setting memory. W0507 22:40:53.060334 672811 out.go:424] no arguments passed for " - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities\n" - returning raw string I0507 22:40:53.061888 672811 out.go:170] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0507 22:40:53.062981 672811 cli_runner.go:115] Run: docker system info --format "{{json .}}" I0507 22:40:53.161855 672811 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:true NGoroutines:79 SystemTime:2021-05-07 22:40:53.100417898 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff 
Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}} I0507 22:40:53.162014 672811 start_flags.go:259] no existing cluster config was found, will generate one from the flags I0507 22:40:53.162237 672811 start_flags.go:733] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] I0507 22:40:53.162265 672811 cni.go:89] network plugin configured as "kubenet", returning disabled I0507 22:40:53.162274 672811 start_flags.go:273] config: {Name:kubenet-20210507224052-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:kubenet-20210507224052-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} I0507 22:40:53.165187 672811 out.go:170] * Starting control plane node kubenet-20210507224052-391940 in cluster kubenet-20210507224052-391940 I0507 22:40:53.165236 672811 cache.go:111] Beginning downloading kic base image for docker with containerd W0507 22:40:53.165246 672811 out.go:424] no arguments passed for "* Pulling base image ...\n" - returning raw string W0507 22:40:53.165261 672811 out.go:424] no arguments passed for "* Pulling base image ...\n" - returning raw string I0507 22:40:53.166925 672811 out.go:170] * Pulling base image ... 
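[editor's note] The config struct dumped above is what gets persisted to the profile's config.json a few entries later; a quick way to inspect it afterwards (a sketch: assumes `jq` is installed, that MINIKUBE_HOME matches the value printed earlier, and that the JSON keeps the KubernetesConfig key shown in the struct dump):

# Pretty-print the key Kubernetes settings from the persisted profile config.
jq '.KubernetesConfig | {KubernetesVersion, ContainerRuntime, NetworkPlugin, ServiceCIDR}' \
  "$MINIKUBE_HOME/.minikube/profiles/kubenet-20210507224052-391940/config.json"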
I0507 22:40:53.166966 672811 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd I0507 22:40:53.167001 672811 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4 I0507 22:40:53.167016 672811 cache.go:54] Caching tarball of preloaded images I0507 22:40:53.167026 672811 image.go:116] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory I0507 22:40:53.167043 672811 preload.go:132] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download I0507 22:40:53.167054 672811 cache.go:57] Finished verifying existence of preloaded tar for v1.20.2 on containerd I0507 22:40:53.167059 672811 image.go:119] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory, skipping pull I0507 22:40:53.167071 672811 cache.go:131] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in cache, skipping pull I0507 22:40:53.167104 672811 image.go:130] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon I0507 22:40:53.167176 672811 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/config.json ... 
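[editor's note] The preload check above resolves to a tarball already on disk, so no download happens; a sanity check of that artifact (a sketch: `lz4` must be installed on the host, which this log does not confirm):

# Confirm the cached preload tarball exists and list a few entries without extracting.
PRELOAD="$MINIKUBE_HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4"
ls -lh "$PRELOAD"
lz4 -dc "$PRELOAD" | tar -t | head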
I0507 22:40:53.167206 672811 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/config.json: {Name:mk6f7d3b17ed614f6ce609cdf1a5d1f675228263 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:40:53.247777 672811 image.go:134] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon, skipping pull I0507 22:40:53.247803 672811 cache.go:155] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in daemon, skipping pull I0507 22:40:53.247832 672811 cache.go:194] Successfully downloaded all kic artifacts I0507 22:40:53.247867 672811 start.go:313] acquiring machines lock for kubenet-20210507224052-391940: {Name:mk343db27c7581f71b72b6b890cfa139aa788b8d Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0507 22:40:53.247996 672811 start.go:317] acquired machines lock for "kubenet-20210507224052-391940" in 107.964µs I0507 22:40:53.248026 672811 start.go:89] Provisioning new machine with config: &{Name:kubenet-20210507224052-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:kubenet-20210507224052-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true} I0507 22:40:53.248124 672811 start.go:126] createHost starting for "" (driver="docker") I0507 22:40:53.250851 672811 out.go:197] * Creating docker container (CPUs=2, Memory=2048MB) ... I0507 22:40:53.251111 672811 start.go:160] libmachine.API.Create for "kubenet-20210507224052-391940" (driver="docker") I0507 22:40:53.251145 672811 client.go:168] LocalClient.Create starting I0507 22:40:53.251244 672811 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem I0507 22:40:53.251275 672811 main.go:128] libmachine: Decoding PEM data... 
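[editor's note] The image check above succeeds because the pinned kicbase digest is already present in the daemon; the same check by hand:

# Verify the kicbase image digest that the log says was found locally.
docker images --digests gcr.io/k8s-minikube/kicbase | grep 7cc3a3cb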
I0507 22:40:53.251311 672811 main.go:128] libmachine: Parsing certificate... I0507 22:40:53.251453 672811 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem I0507 22:40:53.251479 672811 main.go:128] libmachine: Decoding PEM data... I0507 22:40:53.251496 672811 main.go:128] libmachine: Parsing certificate... I0507 22:40:53.251894 672811 cli_runner.go:115] Run: docker network inspect kubenet-20210507224052-391940 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0507 22:40:53.291644 672811 cli_runner.go:162] docker network inspect kubenet-20210507224052-391940 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0507 22:40:53.291722 672811 network_create.go:249] running [docker network inspect kubenet-20210507224052-391940] to gather additional debugging logs... I0507 22:40:53.291743 672811 cli_runner.go:115] Run: docker network inspect kubenet-20210507224052-391940 W0507 22:40:53.341497 672811 cli_runner.go:162] docker network inspect kubenet-20210507224052-391940 returned with exit code 1 I0507 22:40:53.341550 672811 network_create.go:252] error running [docker network inspect kubenet-20210507224052-391940]: docker network inspect kubenet-20210507224052-391940: exit status 1 stdout: [] stderr: Error: No such network: kubenet-20210507224052-391940 I0507 22:40:53.341581 672811 network_create.go:254] output of [docker network inspect kubenet-20210507224052-391940]: -- stdout -- [] -- /stdout -- ** stderr ** Error: No such network: kubenet-20210507224052-391940 ** /stderr ** I0507 22:40:53.342256 672811 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0507 22:40:53.385054 672811 network.go:215] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-b7a55e9e83b1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:be:99:f6:89}} I0507 22:40:53.386400 672811 network.go:263] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc000374028] misses:0} I0507 22:40:53.386443 672811 network.go:210] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 
Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}} I0507 22:40:53.386463 672811 network_create.go:100] attempt to create docker network kubenet-20210507224052-391940 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ... I0507 22:40:53.386518 672811 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20210507224052-391940 I0507 22:40:53.469239 672811 network_create.go:84] docker network kubenet-20210507224052-391940 192.168.58.0/24 created I0507 22:40:53.469289 672811 kic.go:106] calculated static IP "192.168.58.2" for the "kubenet-20210507224052-391940" container I0507 22:40:53.469371 672811 cli_runner.go:115] Run: docker ps -a --format {{.Names}} I0507 22:40:53.510838 672811 cli_runner.go:115] Run: docker volume create kubenet-20210507224052-391940 --label name.minikube.sigs.k8s.io=kubenet-20210507224052-391940 --label created_by.minikube.sigs.k8s.io=true I0507 22:40:53.559162 672811 oci.go:102] Successfully created a docker volume kubenet-20210507224052-391940 I0507 22:40:53.559286 672811 cli_runner.go:115] Run: docker run --rm --name kubenet-20210507224052-391940-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20210507224052-391940 --entrypoint /usr/bin/test -v kubenet-20210507224052-391940:/var gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -d /var/lib I0507 22:40:54.328995 672811 oci.go:106] Successfully prepared a docker volume kubenet-20210507224052-391940 W0507 22:40:54.329069 672811 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted. W0507 22:40:54.329079 672811 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. I0507 22:40:54.329130 672811 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'" I0507 22:40:54.329143 672811 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd I0507 22:40:54.329178 672811 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4 I0507 22:40:54.329192 672811 kic.go:179] Starting extracting preloaded images to volume ... 
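[editor's note] With the 192.168.58.0/24 network created, its parameters can be confirmed directly (network name, subnet, and gateway are taken from the entries above):

# Inspect the bridge network minikube created for this profile.
docker network inspect kubenet-20210507224052-391940 \
  --format '{{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'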
I0507 22:40:54.329240 672811 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20210507224052-391940:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir I0507 22:40:54.427070 672811 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-20210507224052-391940 --name kubenet-20210507224052-391940 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20210507224052-391940 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-20210507224052-391940 --network kubenet-20210507224052-391940 --ip 192.168.58.2 --volume kubenet-20210507224052-391940:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e I0507 22:40:55.043077 672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Running}} I0507 22:40:55.107025 672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Status}} I0507 22:40:55.165720 672811 cli_runner.go:115] Run: docker exec kubenet-20210507224052-391940 stat /var/lib/dpkg/alternatives/iptables I0507 22:40:55.317730 672811 oci.go:278] the created container "kubenet-20210507224052-391940" has a running status. I0507 22:40:55.317785 672811 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa... 
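[editor's note] The tar sidecar run above extracts the preloaded images into the profile's named volume; a throwaway container can spot-check the result (a sketch reusing kicbase with an overridden entrypoint, mirroring the pattern the log itself uses):

# List what the preload extraction placed under /var in the named volume.
docker run --rm --entrypoint /bin/ls \
  -v kubenet-20210507224052-391940:/var \
  gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e \
  /var/lib/containerd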
I0507 22:40:55.465459 672811 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0507 22:40:55.874845 672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Status}} I0507 22:40:55.926608 672811 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0507 22:40:55.926628 672811 kic_runner.go:115] Args: [docker exec --privileged kubenet-20210507224052-391940 chown docker:docker /home/docker/.ssh/authorized_keys] I0507 22:40:59.038214 672811 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20210507224052-391940:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir: (4.708855042s) I0507 22:40:59.038245 672811 kic.go:188] duration metric: took 4.709051 seconds to extract preloaded images to volume I0507 22:40:59.038321 672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Status}} I0507 22:40:59.081058 672811 machine.go:88] provisioning docker machine ... I0507 22:40:59.081096 672811 ubuntu.go:169] provisioning hostname "kubenet-20210507224052-391940" I0507 22:40:59.081153 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940 I0507 22:40:59.119701 672811 main.go:128] libmachine: Using SSH client type: native I0507 22:40:59.119896 672811 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802720] 0x8026e0 [] 0s} 127.0.0.1 33326 } I0507 22:40:59.119916 672811 main.go:128] libmachine: About to run SSH command: sudo hostname kubenet-20210507224052-391940 && echo "kubenet-20210507224052-391940" | sudo tee /etc/hostname I0507 22:40:59.251144 672811 main.go:128] libmachine: SSH cmd err, output: : kubenet-20210507224052-391940 I0507 22:40:59.251212 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940 I0507 22:40:59.290133 672811 main.go:128] libmachine: Using SSH client type: native I0507 22:40:59.290316 672811 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802720] 0x8026e0 [] 0s} 127.0.0.1 33326 } I0507 22:40:59.290356 672811 main.go:128] libmachine: About to run SSH command: if ! 
grep -xq '.*\skubenet-20210507224052-391940' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-20210507224052-391940/g' /etc/hosts; else echo '127.0.1.1 kubenet-20210507224052-391940' | sudo tee -a /etc/hosts; fi fi I0507 22:40:59.403817 672811 main.go:128] libmachine: SSH cmd err, output: : I0507 22:40:59.403851 672811 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube} I0507 22:40:59.403874 672811 ubuntu.go:177] setting up certificates I0507 22:40:59.403887 672811 provision.go:83] configureAuth start I0507 22:40:59.403966 672811 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20210507224052-391940 I0507 22:40:59.447361 672811 provision.go:137] copyHostCerts I0507 22:40:59.447423 672811 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.pem, removing ... I0507 22:40:59.447435 672811 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.pem I0507 22:40:59.447489 672811 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.pem (1078 bytes) I0507 22:40:59.447657 672811 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cert.pem, removing ... 
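[editor's note] The /etc/hosts command above is easier to audit unrolled; the same idempotent hostname-pinning logic, reformatted but functionally identical to what the provisioner ran over SSH:

if ! grep -xq '.*\skubenet-20210507224052-391940' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    # An existing 127.0.1.1 line gets rewritten in place.
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-20210507224052-391940/g' /etc/hosts
  else
    # Otherwise a fresh entry is appended.
    echo '127.0.1.1 kubenet-20210507224052-391940' | sudo tee -a /etc/hosts
  fi
fi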
I0507 22:40:59.447677 672811 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cert.pem I0507 22:40:59.447707 672811 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cert.pem (1123 bytes) I0507 22:40:59.447795 672811 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/key.pem, removing ... I0507 22:40:59.447805 672811 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/key.pem I0507 22:40:59.447843 672811 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/key.pem (1675 bytes) I0507 22:40:59.447895 672811 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca-key.pem org=jenkins.kubenet-20210507224052-391940 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube kubenet-20210507224052-391940] I0507 22:40:59.852941 672811 provision.go:165] copyRemoteCerts I0507 22:40:59.853012 672811 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0507 22:40:59.853074 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940 I0507 22:40:59.896021 672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker} I0507 22:40:59.978856 672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes) I0507 22:40:59.995226 672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes) I0507 22:41:00.011913 672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I0507 22:41:00.027622 672811 provision.go:86] duration metric: configureAuth took 623.719966ms I0507 22:41:00.027644 672811 ubuntu.go:193] setting minikube options for container-runtime I0507 22:41:00.027808 672811 machine.go:91] provisioned 
docker machine in 946.729843ms I0507 22:41:00.027821 672811 client.go:171] LocalClient.Create took 6.776670216s I0507 22:41:00.027841 672811 start.go:168] duration metric: libmachine.API.Create for "kubenet-20210507224052-391940" took 6.776727752s I0507 22:41:00.027849 672811 start.go:267] post-start starting for "kubenet-20210507224052-391940" (driver="docker") I0507 22:41:00.027855 672811 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0507 22:41:00.027897 672811 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0507 22:41:00.027946 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940 I0507 22:41:00.075235 672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker} I0507 22:41:00.166719 672811 ssh_runner.go:149] Run: cat /etc/os-release I0507 22:41:00.169362 672811 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0507 22:41:00.169391 672811 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0507 22:41:00.169407 672811 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0507 22:41:00.169419 672811 info.go:137] Remote host: Ubuntu 20.04.2 LTS I0507 22:41:00.169433 672811 filesync.go:118] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/addons for local assets ... I0507 22:41:00.169503 672811 filesync.go:118] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/files for local assets ... I0507 22:41:00.169623 672811 start.go:270] post-start completed in 141.767397ms I0507 22:41:00.169915 672811 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20210507224052-391940 I0507 22:41:00.210576 672811 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/config.json ... 
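[editor's note] copyRemoteCerts and the os-release probe above both travel over the SSH endpoint published on 127.0.0.1; the same session can be opened by hand (a sketch: port 33326 and the docker user come from the sshutil entries in this run and will differ between runs):

# SSH into the node container the way the provisioner does.
ssh -i "$MINIKUBE_HOME/.minikube/machines/kubenet-20210507224052-391940/id_rsa" \
  -p 33326 docker@127.0.0.1 'head -n 2 /etc/os-release'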
I0507 22:41:00.210783 672811 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0507 22:41:00.210835 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940 I0507 22:41:00.247709 672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker} I0507 22:41:00.327660 672811 start.go:129] duration metric: createHost completed in 7.07952255s I0507 22:41:00.327686 672811 start.go:80] releasing machines lock for "kubenet-20210507224052-391940", held for 7.079675771s I0507 22:41:00.327754 672811 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20210507224052-391940 I0507 22:41:00.367166 672811 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/ I0507 22:41:00.367177 672811 ssh_runner.go:149] Run: systemctl --version I0507 22:41:00.367228 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940 I0507 22:41:00.367248 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940 I0507 22:41:00.408143 672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker} I0507 22:41:00.408527 672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker} I0507 22:41:00.487269 672811 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio I0507 22:41:00.537919 672811 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker I0507 22:41:00.547201 672811 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket I0507 22:41:00.564793 672811 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service I0507 22:41:00.574599 672811 ssh_runner.go:149] Run: sudo systemctl disable docker.socket I0507 22:41:00.638969 672811 ssh_runner.go:149] Run: sudo systemctl mask docker.service I0507 22:41:00.698972 672811 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker I0507 22:41:00.709630 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock image-endpoint: unix:///run/containerd/containerd.sock " | sudo tee /etc/crictl.yaml" I0507 22:41:00.723315 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s 
"cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKCltncnBjXQogIGFkZHJlc3MgPSAiL3J1bi9jb250YWluZXJkL2NvbnRhaW5lcmQuc29jayIKICB1aWQgPSAwCiAgZ2lkID0gMAogIG1heF9yZWN2X21lc3NhZ2Vfc2l6ZSA9IDE2Nzc3MjE2CiAgbWF4X3NlbmRfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKCltkZWJ1Z10KICBhZGRyZXNzID0gIiIKICB1aWQgPSAwCiAgZ2lkID0gMAogIGxldmVsID0gIiIKClttZXRyaWNzXQogIGFkZHJlc3MgPSAiIgogIGdycGNfaGlzdG9ncmFtID0gZmFsc2UKCltjZ3JvdXBdCiAgcGF0aCA9ICIiCgpbcGx1Z2luc10KICBbcGx1Z2lucy5jZ3JvdXBzXQogICAgbm9fcHJvbWV0aGV1cyA9IGZhbHNlCiAgW3BsdWdpbnMuY3JpXQogICAgc3RyZWFtX3NlcnZlcl9hZGRyZXNzID0gIiIKICAgIHN0cmVhbV9zZXJ2ZXJfcG9ydCA9ICIxMDAxMCIKICAgIGVuYWJsZV9zZWxpbnV4ID0gZmFsc2UKICAgIHNhbmRib3hfaW1hZ2UgPSAiazhzLmdjci5pby9wYXVzZTozLjIiCiAgICBzdGF0c19jb2xsZWN0X3BlcmlvZCA9IDEwCiAgICBzeXN0ZW1kX2Nncm91cCA9IGZhbHNlCiAgICBlbmFibGVfdGxzX3N0cmVhbWluZyA9IGZhbHNlCiAgICBtYXhfY29udGFpbmVyX2xvZ19saW5lX3NpemUgPSAxNjM4NAogICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmRdCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgbm9fcGl2b3QgPSB0cnVlCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW50aW1lLnYxLmxpbnV4IgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMubGludXhdCiAgICBzaGltID0gImNvbnRhaW5lcmQtc2hpbSIKICAgIHJ1bnRpbWUgPSAicnVuYyIKICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICBub19zaGltID0gZmFsc2UKICAgIHNoaW1fZGVidWcgPSBmYWxzZQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml" I0507 22:41:00.737455 672811 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables I0507 22:41:00.744876 672811 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. 
error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255 stdout: stderr: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory I0507 22:41:00.744933 672811 ssh_runner.go:149] Run: sudo modprobe br_netfilter I0507 22:41:00.753834 672811 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward" I0507 22:41:00.761420 672811 ssh_runner.go:149] Run: sudo systemctl daemon-reload I0507 22:41:00.827226 672811 ssh_runner.go:149] Run: sudo systemctl restart containerd I0507 22:41:00.892592 672811 start.go:368] Will wait 60s for socket path /run/containerd/containerd.sock I0507 22:41:00.892666 672811 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock I0507 22:41:00.896809 672811 start.go:393] Will wait 60s for crictl version I0507 22:41:00.896869 672811 ssh_runner.go:149] Run: sudo crictl version I0507 22:41:00.922312 672811 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1 stdout: stderr: time="2021-05-07T22:41:00Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet" I0507 22:41:11.971610 672811 ssh_runner.go:149] Run: sudo crictl version I0507 22:41:12.042782 672811 start.go:402] Version: 0.1.0 RuntimeName: containerd RuntimeVersion: 1.4.4 RuntimeApiVersion: v1alpha2 I0507 22:41:12.042850 672811 ssh_runner.go:149] Run: containerd --version I0507 22:41:12.066863 672811 out.go:170] * Preparing Kubernetes v1.20.2 on containerd 1.4.4 ... I0507 22:41:12.066969 672811 cli_runner.go:115] Run: docker network inspect kubenet-20210507224052-391940 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0507 22:41:12.105280 672811 ssh_runner.go:149] Run: grep 192.168.58.1 host.minikube.internal$ /etc/hosts I0507 22:41:12.108647 672811 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0507 22:41:12.117548 672811 localpath.go:92] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/client.crt -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/client.crt I0507 22:41:12.117660 672811 localpath.go:117] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/client.key -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/client.key I0507 22:41:12.117779 672811 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd I0507 22:41:12.117805 672811 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4 I0507 22:41:12.117839 672811 
ssh_runner.go:149] Run: sudo crictl images --output json
I0507 22:41:12.139675 672811 containerd.go:571] all images are preloaded for containerd runtime.
I0507 22:41:12.139694 672811 containerd.go:481] Images already preloaded, skipping extraction
I0507 22:41:12.139737 672811 ssh_runner.go:149] Run: sudo crictl images --output json
I0507 22:41:12.160780 672811 containerd.go:571] all images are preloaded for containerd runtime.
I0507 22:41:12.160799 672811 cache_images.go:74] Images are preloaded, skipping loading
I0507 22:41:12.160836 672811 ssh_runner.go:149] Run: sudo crictl info
I0507 22:41:12.181806 672811 cni.go:89] network plugin configured as "kubenet", returning disabled
I0507 22:41:12.181827 672811 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0507 22:41:12.181838 672811 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-20210507224052-391940 NodeName:kubenet-20210507224052-391940 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0507 22:41:12.181948 672811 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.58.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "kubenet-20210507224052-391940"
  kubeletExtraArgs:
    node-ip: 192.168.58.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
I0507 22:41:12.182024 672811 kubeadm.go:901] kubelet [Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubenet-20210507224052-391940 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=kubenet --node-ip=192.168.58.2 --pod-cidr=10.244.0.0/16 --runtime-request-timeout=15m

[Install]
 config:
{KubernetesVersion:v1.20.2 ClusterName:kubenet-20210507224052-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0507 22:41:12.182065 672811 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
I0507 22:41:12.190005 672811 binaries.go:44] Found k8s binaries, skipping transfer
I0507 22:41:12.190053 672811 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0507 22:41:12.196524 672811 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (572 bytes)
I0507 22:41:12.208112 672811 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0507 22:41:12.219787 672811 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1868 bytes)
I0507 22:41:12.234762 672811 ssh_runner.go:149] Run: grep 192.168.58.2 control-plane.minikube.internal$ /etc/hosts
I0507 22:41:12.238162 672811 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0507 22:41:12.247659 672811 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940 for IP: 192.168.58.2
I0507 22:41:12.247732 672811 certs.go:171] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.key
I0507 22:41:12.247761 672811 certs.go:171] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/proxy-client-ca.key
I0507 22:41:12.247864 672811 certs.go:282] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/client.key
I0507 22:41:12.247917 672811 certs.go:286] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.key.cee25041
I0507 22:41:12.247934 672811
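[editor's note] The containerd configuration pushed a few steps earlier travels as a base64 blob; decoding it locally shows the TOML that lands in /etc/containerd/config.toml (paste the full blob from the log in place of the truncated one here):

# Decode the containerd config payload into readable TOML.
echo "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9..." | base64 -d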
crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1] I0507 22:41:12.324253 672811 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.crt.cee25041 ... I0507 22:41:12.324281 672811 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.crt.cee25041: {Name:mk17a9fadc289bdd993cd89cf73f7e42a11db951 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:41:12.324441 672811 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.key.cee25041 ... I0507 22:41:12.324457 672811 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.key.cee25041: {Name:mk4f1b00ef492dfe1e4e53295535dd818e4b8776 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:41:12.324556 672811 certs.go:297] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.crt I0507 22:41:12.324624 672811 certs.go:301] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.key I0507 22:41:12.324690 672811 certs.go:286] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.key I0507 22:41:12.324704 672811 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.crt with IP's: [] I0507 22:41:12.462717 672811 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.crt ... 
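[editor's note] The API server certificate generated above embeds the SANs listed in the crypto.go entry (192.168.58.2, 10.96.0.1, 127.0.0.1, 10.0.0.1); once the .crt file is written they can be verified with openssl:

# Show the SANs baked into the freshly minted API server certificate.
openssl x509 -noout -text \
  -in "$MINIKUBE_HOME/.minikube/profiles/kubenet-20210507224052-391940/apiserver.crt" \
  | grep -A1 'Subject Alternative Name'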
I0507 22:41:12.462741 672811 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.crt: {Name:mk3b377543768468ecb5ae6c2ac7692fea50fd9a Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:41:12.462892 672811 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.key ... I0507 22:41:12.462906 672811 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.key: {Name:mkfe92c524b556c20012d8a91c085ac4bc69ff7a Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:41:12.463104 672811 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/391940.pem (1338 bytes) W0507 22:41:12.463147 672811 certs.go:357] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/391940_empty.pem, impossibly tiny 0 bytes I0507 22:41:12.463164 672811 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca-key.pem (1679 bytes) I0507 22:41:12.463201 672811 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem (1078 bytes) I0507 22:41:12.463240 672811 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem (1123 bytes) I0507 22:41:12.463276 672811 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/key.pem (1675 bytes) I0507 22:41:12.464251 672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0507 22:41:12.481245 672811 ssh_runner.go:316] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) I0507 22:41:12.549535 672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0507 22:41:12.567323 672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes) I0507 22:41:12.586572 672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0507 22:41:12.605164 672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0507 22:41:12.622859 672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0507 22:41:12.639720 672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0507 22:41:12.659044 672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/391940.pem --> /usr/share/ca-certificates/391940.pem (1338 bytes) I0507 22:41:12.677161 672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0507 22:41:12.693007 672811 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0507 22:41:12.704857 672811 ssh_runner.go:149] Run: openssl version I0507 22:41:12.709921 672811 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0507 22:41:12.717584 672811 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0507 22:41:12.720534 672811 certs.go:402] hashing: -rw-r--r-- 1 root root 1111 May 7 21:50 /usr/share/ca-certificates/minikubeCA.pem I0507 22:41:12.720581 672811 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0507 22:41:12.725167 672811 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0507 22:41:12.731804 672811 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391940.pem && ln -fs /usr/share/ca-certificates/391940.pem /etc/ssl/certs/391940.pem" I0507 22:41:12.738661 672811 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/391940.pem I0507 
22:41:12.741622 672811 certs.go:402] hashing: -rw-r--r-- 1 root root 1338 May 7 21:57 /usr/share/ca-certificates/391940.pem I0507 22:41:12.741658 672811 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391940.pem I0507 22:41:12.746205 672811 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391940.pem /etc/ssl/certs/51391683.0" I0507 22:41:12.752891 672811 kubeadm.go:381] StartCluster: {Name:kubenet-20210507224052-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:kubenet-20210507224052-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} I0507 22:41:12.752980 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]} I0507 22:41:12.753082 672811 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system" I0507 22:41:12.775624 672811 cri.go:76] found id: "" I0507 22:41:12.775678 672811 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0507 22:41:12.781880 672811 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0507 22:41:12.788117 672811 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver I0507 22:41:12.788153 672811 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0507 22:41:12.794718 672811 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access 
'/etc/kubernetes/scheduler.conf': No such file or directory I0507 22:41:12.794764 672811 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" W0507 22:41:29.502582 672811 out.go:424] no arguments passed for " - Generating certificates and keys ..." - returning raw string W0507 22:41:29.502611 672811 out.go:424] no arguments passed for " - Generating certificates and keys ..." - returning raw string I0507 22:41:29.504051 672811 out.go:197] - Generating certificates and keys ... W0507 22:41:29.505275 672811 out.go:424] no arguments passed for " - Booting up control plane ..." - returning raw string W0507 22:41:29.505298 672811 out.go:424] no arguments passed for " - Booting up control plane ..." - returning raw string I0507 22:41:29.506842 672811 out.go:197] - Booting up control plane ... W0507 22:41:29.507828 672811 out.go:424] no arguments passed for " - Configuring RBAC rules ..." - returning raw string W0507 22:41:29.507851 672811 out.go:424] no arguments passed for " - Configuring RBAC rules ..." - returning raw string I0507 22:41:29.509381 672811 out.go:197] - Configuring RBAC rules ... I0507 22:41:29.511102 672811 cni.go:89] network plugin configured as "kubenet", returning disabled I0507 22:41:29.511144 672811 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0507 22:41:29.511202 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:29.511202 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.20.0 minikube.k8s.io/commit=c31bd57f93d45726e4bd30607374f8c720e70e95 minikube.k8s.io/name=kubenet-20210507224052-391940 minikube.k8s.io/updated_at=2021_05_07T22_41_29_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:36.452032 672811 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (6.940764477s) I0507 22:41:36.452084 672811 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.20.0 minikube.k8s.io/commit=c31bd57f93d45726e4bd30607374f8c720e70e95 minikube.k8s.io/name=kubenet-20210507224052-391940 minikube.k8s.io/updated_at=2021_05_07T22_41_29_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (6.940777019s) I0507 22:41:36.452120 672811 ssh_runner.go:189] Completed: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": (6.9409632s) I0507 22:41:36.452130 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:36.452135 672811 ops.go:34] apiserver oom_adj: -16 I0507 22:41:37.133448 672811 
ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:37.634119 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:38.134120 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:38.633786 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:39.133311 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:39.633524 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:40.134249 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:40.633580 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:41.133642 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:41.633685 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:42.133984 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:42.633334 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:43.133263 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:43.634078 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:44.133696 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:44.633466 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:45.133959 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:45.633643 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:46.133797 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:46.634042 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:47.133888 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:47.634155 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:48.133838 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa 
default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:48.633584 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:49.134019 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:49.633305 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:50.133859 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:50.634269 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:51.133941 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:51.198474 672811 kubeadm.go:977] duration metric: took 21.687320394s to wait for elevateKubeSystemPrivileges. I0507 22:41:51.198504 672811 kubeadm.go:383] StartCluster complete in 38.445622759s I0507 22:41:51.198526 672811 settings.go:142] acquiring lock: {Name:mkbc12d45ea1a96167acb2e3885011008220fc1e Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:41:51.198634 672811 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig I0507 22:41:51.201538 672811 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig: {Name:mk53c460e0a047a0806c95f27e36717b9bf9f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:41:51.718321 672811 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubenet-20210507224052-391940" rescaled to 1 I0507 22:41:51.718369 672811 start.go:201] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true} W0507 22:41:51.718401 672811 out.go:424] no arguments passed for "* Verifying Kubernetes components...\n" - returning raw string W0507 22:41:51.718425 672811 out.go:424] no arguments passed for "* Verifying Kubernetes components...\n" - returning raw string I0507 22:41:51.720457 672811 out.go:170] * Verifying Kubernetes components... 
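The run of ~500ms-spaced "kubectl get sa default" entries above is minikube polling for the default service account until it exists, which is what the "took 21.687320394s to wait for elevateKubeSystemPrivileges" duration metric measures. A minimal sketch of that retry loop, assuming only that kubectl is on PATH (the kubeconfig path is the one shown in the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
// deadline passes, mirroring the ~500ms retry cadence visible in the log.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default service account exists
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account not ready after %s", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	// /var/lib/minikube/kubeconfig is the path used in the log above.
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}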
I0507 22:41:51.720524 672811 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet I0507 22:41:51.718471 672811 addons.go:328] enableAddons start: toEnable=map[], additional=[] I0507 22:41:51.720595 672811 addons.go:55] Setting storage-provisioner=true in profile "kubenet-20210507224052-391940" I0507 22:41:51.718753 672811 cache.go:108] acquiring lock: {Name:mk66f3ed174a0fda2e3a4fd9a235ceef9553bc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0507 22:41:51.720621 672811 addons.go:55] Setting default-storageclass=true in profile "kubenet-20210507224052-391940" I0507 22:41:51.720638 672811 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubenet-20210507224052-391940" I0507 22:41:51.720676 672811 addons.go:131] Setting addon storage-provisioner=true in "kubenet-20210507224052-391940" W0507 22:41:51.720694 672811 addons.go:140] addon storage-provisioner should already be in state true I0507 22:41:51.720700 672811 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 exists I0507 22:41:51.720716 672811 host.go:66] Checking if "kubenet-20210507224052-391940" exists ... I0507 22:41:51.720721 672811 cache.go:97] cache image "minikube-local-cache-test:functional-20210507215728-391940" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940" took 1.979419ms I0507 22:41:51.720737 672811 cache.go:81] save to tar file minikube-local-cache-test:functional-20210507215728-391940 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 succeeded I0507 22:41:51.720751 672811 cache.go:88] Successfully saved all images to host disk. I0507 22:41:51.721038 672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Status}} I0507 22:41:51.721675 672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Status}} I0507 22:41:51.721703 672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Status}} I0507 22:41:51.740835 672811 node_ready.go:35] waiting up to 5m0s for node "kubenet-20210507224052-391940" to be "Ready" ... I0507 22:41:51.745077 672811 node_ready.go:49] node "kubenet-20210507224052-391940" has status "Ready":"True" I0507 22:41:51.745099 672811 node_ready.go:38] duration metric: took 4.233416ms waiting for node "kubenet-20210507224052-391940" to be "Ready" ... I0507 22:41:51.745110 672811 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0507 22:41:51.756875 672811 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace to be "Ready" ... 
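The node_ready entries above ("waiting up to 5m0s for node ... to be Ready", satisfied after 4.2ms) check the node's Ready condition. A sketch of one such check, assuming kubectl on PATH; the jsonpath filter below is a stand-in for whatever client the test actually uses:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// nodeIsReady reports whether the node's "Ready" condition has status "True",
// read via kubectl's jsonpath output.
func nodeIsReady(node, kubeconfig string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "node", node,
		"--kubeconfig="+kubeconfig,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	ready, err := nodeIsReady("kubenet-20210507224052-391940", "/var/lib/minikube/kubeconfig")
	fmt.Println(ready, err)
}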
I0507 22:41:51.783594 672811 addons.go:131] Setting addon default-storageclass=true in "kubenet-20210507224052-391940" W0507 22:41:51.783619 672811 addons.go:140] addon default-storageclass should already be in state true I0507 22:41:51.783637 672811 host.go:66] Checking if "kubenet-20210507224052-391940" exists ... I0507 22:41:51.784146 672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Status}} I0507 22:41:51.788959 672811 ssh_runner.go:149] Run: sudo crictl images --output json I0507 22:41:51.789007 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940 I0507 22:41:51.792078 672811 out.go:170] - Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0507 22:41:51.792203 672811 addons.go:261] installing /etc/kubernetes/addons/storage-provisioner.yaml I0507 22:41:51.792220 672811 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0507 22:41:51.792278 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940 I0507 22:41:51.832922 672811 addons.go:261] installing /etc/kubernetes/addons/storageclass.yaml I0507 22:41:51.832950 672811 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0507 22:41:51.833006 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940 I0507 22:41:51.843625 672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker} I0507 22:41:51.848427 672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker} I0507 22:41:51.881629 672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker} I0507 22:41:51.945739 672811 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0507 22:41:51.956581 672811 containerd.go:567] couldn't find preloaded image for "docker.io/minikube-local-cache-test:functional-20210507215728-391940". assuming images are not preloaded. 
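The addon installs above follow a two-step pattern: the manifest held in memory is copied onto the node ("scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml") and then applied with the bundled kubectl. A simplified local sketch of that pattern (writing to a file directly rather than over SSH; the placeholder manifest is hypothetical):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddon is a stand-in for the "scp memory --> <path>" plus
// "kubectl apply -f <path>" sequence in the log.
func applyAddon(manifest []byte, path, kubeconfig string) error {
	if err := os.WriteFile(path, manifest, 0o644); err != nil {
		return err
	}
	out, err := exec.Command("kubectl", "apply", "-f", path,
		"--kubeconfig="+kubeconfig).CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v: %s", path, err, out)
	}
	return nil
}

func main() {
	yaml := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: demo\n") // placeholder manifest
	fmt.Println(applyAddon(yaml, "/tmp/addon.yaml", "/var/lib/minikube/kubeconfig"))
}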
I0507 22:41:51.956604 672811 cache_images.go:78] LoadImages start: [minikube-local-cache-test:functional-20210507215728-391940] I0507 22:41:51.956650 672811 image.go:320] retrieving image: minikube-local-cache-test:functional-20210507215728-391940 I0507 22:41:51.956698 672811 image.go:326] checking repository: index.docker.io/library/minikube-local-cache-test I0507 22:41:51.972691 672811 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml W0507 22:41:52.183545 672811 image.go:333] remote: HEAD https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: unexpected status code 401 Unauthorized (HEAD responses have no body, use GET for details) I0507 22:41:52.183604 672811 image.go:334] short name: minikube-local-cache-test:functional-20210507215728-391940 I0507 22:41:52.184655 672811 image.go:362] daemon lookup for minikube-local-cache-test:functional-20210507215728-391940: Error response from daemon: reference does not exist W0507 22:41:52.330654 672811 image.go:372] authn lookup for minikube-local-cache-test:functional-20210507215728-391940 (trying anon): GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] I0507 22:41:52.347904 672811 out.go:170] * Enabled addons: storage-provisioner, default-storageclass I0507 22:41:52.347937 672811 addons.go:330] enableAddons completed in 629.490714ms I0507 22:41:52.481399 672811 image.go:376] remote lookup for minikube-local-cache-test:functional-20210507215728-391940: GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] I0507 22:41:52.481439 672811 image.go:98] error retrieve Image minikube-local-cache-test:functional-20210507215728-391940 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] I0507 22:41:52.481470 672811 cache_images.go:106] "minikube-local-cache-test:functional-20210507215728-391940" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210507215728-391940 I0507 22:41:52.481491 672811 cache_images.go:271] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 I0507 22:41:52.481574 672811 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940 I0507 22:41:52.485013 672811 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940': No such file or directory I0507 22:41:52.485041 672811 ssh_runner.go:316] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940 (5120 bytes) I0507 22:41:52.502314 672811 containerd.go:267] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940 I0507 22:41:52.502378 672811 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940 I0507 22:41:52.612982 672811 cache_images.go:293] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 from cache I0507 22:41:52.613030 672811 cache_images.go:113] Successfully loaded all cached images I0507 22:41:52.613038 672811 cache_images.go:82] LoadImages completed in 656.425091ms I0507 22:41:52.613050 672811 cache_images.go:252] succeeded pushing to: kubenet-20210507224052-391940 I0507 22:41:52.613059 672811 cache_images.go:253] failed pushing to: I0507 22:41:53.769362 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:41:55.770429 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:41:58.269524 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:00.269630 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:02.269711 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:04.774753 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:07.270739 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:09.770268 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:12.270101 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:14.769932 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:17.269374 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:19.770111 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:22.269528 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:24.769876 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:27.269737 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:29.772420 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:32.269539 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:34.269995 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" 
in "kube-system" namespace has status "Ready":"False" I0507 22:42:36.769591 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:38.770070 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:40.771262 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:43.269652 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:45.769197 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:48.270916 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:50.769708 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:53.270043 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:55.769030 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:57.769122 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:59.769527 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:02.269666 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:04.769578 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:06.769758 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:08.770354 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:10.770498 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:13.271448 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:15.770804 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:18.269214 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:20.269718 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:22.769151 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:24.771659 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:27.269262 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:29.269802 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:31.769488 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:33.769541 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:36.268974 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" 
namespace has status "Ready":"False" I0507 22:43:38.269261 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:40.270280 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:42.771006 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:45.269345 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:47.768594 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:49.769670 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:52.269433 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:54.769190 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:56.769657 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:59.269644 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:01.269772 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:03.769233 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:05.769576 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:08.269493 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:10.769584 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:12.770143 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:15.269008 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:17.269047 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:19.270021 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:21.270385 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:23.768995 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:25.770177 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:28.268810 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:30.269545 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:32.769848 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:35.269721 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:37.768834 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has 
status "Ready":"False" I0507 22:44:39.769004 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:41.769947 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:44.269742 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:46.769632 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:49.269169 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:51.270949 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:53.769304 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:55.769541 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:58.269162 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:00.269690 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:02.769677 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:04.769826 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:06.774522 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:09.269620 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:11.269885 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:13.770049 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:15.772664 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:18.269043 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:20.769936 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:22.770233 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:25.269440 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:27.288248 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:29.770145 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:32.268998 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:34.269736 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:36.769998 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:39.269073 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status 
"Ready":"False" I0507 22:45:41.269867 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:43.769776 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:46.269226 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:48.769817 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:51.269597 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:51.773488 672811 pod_ready.go:81] duration metric: took 4m0.016579269s waiting for pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace to be "Ready" ... E0507 22:45:51.773523 672811 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition I0507 22:45:51.773536 672811 pod_ready.go:78] waiting up to 5m0s for pod "etcd-kubenet-20210507224052-391940" in "kube-system" namespace to be "Ready" ... I0507 22:45:51.777341 672811 pod_ready.go:92] pod "etcd-kubenet-20210507224052-391940" in "kube-system" namespace has status "Ready":"True" I0507 22:45:51.777357 672811 pod_ready.go:81] duration metric: took 3.813085ms waiting for pod "etcd-kubenet-20210507224052-391940" in "kube-system" namespace to be "Ready" ... I0507 22:45:51.777371 672811 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-kubenet-20210507224052-391940" in "kube-system" namespace to be "Ready" ... I0507 22:45:51.780967 672811 pod_ready.go:92] pod "kube-apiserver-kubenet-20210507224052-391940" in "kube-system" namespace has status "Ready":"True" I0507 22:45:51.780982 672811 pod_ready.go:81] duration metric: took 3.604125ms waiting for pod "kube-apiserver-kubenet-20210507224052-391940" in "kube-system" namespace to be "Ready" ... I0507 22:45:51.780991 672811 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-kubenet-20210507224052-391940" in "kube-system" namespace to be "Ready" ... I0507 22:45:51.784544 672811 pod_ready.go:92] pod "kube-controller-manager-kubenet-20210507224052-391940" in "kube-system" namespace has status "Ready":"True" I0507 22:45:51.784564 672811 pod_ready.go:81] duration metric: took 3.566966ms waiting for pod "kube-controller-manager-kubenet-20210507224052-391940" in "kube-system" namespace to be "Ready" ... I0507 22:45:51.784576 672811 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-52sqc" in "kube-system" namespace to be "Ready" ... I0507 22:45:52.168404 672811 pod_ready.go:92] pod "kube-proxy-52sqc" in "kube-system" namespace has status "Ready":"True" I0507 22:45:52.168426 672811 pod_ready.go:81] duration metric: took 383.841925ms waiting for pod "kube-proxy-52sqc" in "kube-system" namespace to be "Ready" ... I0507 22:45:52.168441 672811 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-kubenet-20210507224052-391940" in "kube-system" namespace to be "Ready" ... I0507 22:45:52.567262 672811 pod_ready.go:92] pod "kube-scheduler-kubenet-20210507224052-391940" in "kube-system" namespace has status "Ready":"True" I0507 22:45:52.567285 672811 pod_ready.go:81] duration metric: took 398.834268ms waiting for pod "kube-scheduler-kubenet-20210507224052-391940" in "kube-system" namespace to be "Ready" ... 
I0507 22:45:52.567296 672811 pod_ready.go:38] duration metric: took 4m0.822169579s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0507 22:45:52.567360 672811 api_server.go:50] waiting for apiserver process to appear ... I0507 22:45:52.567436 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]} I0507 22:45:52.567610 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver I0507 22:45:52.591474 672811 cri.go:76] found id: "b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56" I0507 22:45:52.591498 672811 cri.go:76] found id: "" I0507 22:45:52.591530 672811 logs.go:270] 1 containers: [b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56] I0507 22:45:52.591595 672811 ssh_runner.go:149] Run: which crictl I0507 22:45:52.594489 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]} I0507 22:45:52.594543 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd I0507 22:45:52.615397 672811 cri.go:76] found id: "9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259" I0507 22:45:52.615416 672811 cri.go:76] found id: "" I0507 22:45:52.615422 672811 logs.go:270] 1 containers: [9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259] I0507 22:45:52.615459 672811 ssh_runner.go:149] Run: which crictl I0507 22:45:52.618174 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]} I0507 22:45:52.618232 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns I0507 22:45:52.638869 672811 cri.go:76] found id: "" I0507 22:45:52.638889 672811 logs.go:270] 0 containers: [] W0507 22:45:52.638895 672811 logs.go:272] No container was found matching "coredns" I0507 22:45:52.638901 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]} I0507 22:45:52.638934 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler I0507 22:45:52.658993 672811 cri.go:76] found id: "148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac" I0507 22:45:52.659013 672811 cri.go:76] found id: "" I0507 22:45:52.659020 672811 logs.go:270] 1 containers: [148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac] I0507 22:45:52.659065 672811 ssh_runner.go:149] Run: which crictl I0507 22:45:52.661726 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]} I0507 22:45:52.661787 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy I0507 22:45:52.682559 672811 cri.go:76] found id: "75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c" I0507 22:45:52.682577 672811 cri.go:76] found id: "" I0507 22:45:52.682582 672811 logs.go:270] 1 containers: [75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c] I0507 22:45:52.682614 672811 ssh_runner.go:149] Run: which crictl I0507 22:45:52.685304 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]} I0507 22:45:52.685349 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard I0507 22:45:52.705409 672811 cri.go:76] found id: "" I0507 22:45:52.705430 
672811 logs.go:270] 0 containers: [] W0507 22:45:52.705437 672811 logs.go:272] No container was found matching "kubernetes-dashboard" I0507 22:45:52.705444 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]} I0507 22:45:52.705490 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner I0507 22:45:52.725521 672811 cri.go:76] found id: "b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4" I0507 22:45:52.725549 672811 cri.go:76] found id: "" I0507 22:45:52.725557 672811 logs.go:270] 1 containers: [b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4] I0507 22:45:52.725594 672811 ssh_runner.go:149] Run: which crictl I0507 22:45:52.728131 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]} I0507 22:45:52.728184 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager I0507 22:45:52.748039 672811 cri.go:76] found id: "fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9" I0507 22:45:52.748057 672811 cri.go:76] found id: "" I0507 22:45:52.748062 672811 logs.go:270] 1 containers: [fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9] I0507 22:45:52.748097 672811 ssh_runner.go:149] Run: which crictl I0507 22:45:52.750638 672811 logs.go:123] Gathering logs for containerd ... I0507 22:45:52.750654 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400" I0507 22:45:52.786622 672811 logs.go:123] Gathering logs for kube-apiserver [b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56] ... I0507 22:45:52.786647 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56" I0507 22:45:52.824066 672811 logs.go:123] Gathering logs for etcd [9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259] ... I0507 22:45:52.824090 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259" I0507 22:45:52.848548 672811 logs.go:123] Gathering logs for storage-provisioner [b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4] ... I0507 22:45:52.848571 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4" I0507 22:45:52.869909 672811 logs.go:123] Gathering logs for kube-scheduler [148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac] ... I0507 22:45:52.869930 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac" I0507 22:45:52.894365 672811 logs.go:123] Gathering logs for kube-proxy [75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c] ... I0507 22:45:52.894389 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c" I0507 22:45:52.915404 672811 logs.go:123] Gathering logs for kube-controller-manager [fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9] ... I0507 22:45:52.915425 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9" I0507 22:45:52.952429 672811 logs.go:123] Gathering logs for container status ... 
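The log-gathering pass above resolves each component's container ID with "crictl ps -a --quiet --name=<name>" and then tails it with "crictl logs --tail 400 <id>". A sketch of that two-step pattern using the same commands:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// tailComponentLogs lists container IDs matching a component name, then
// tails each container's log, mirroring the crictl invocations in the log.
func tailComponentLogs(name string, lines int) (map[string]string, error) {
	ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--name="+name).Output()
	if err != nil {
		return nil, err
	}
	logs := map[string]string{}
	for _, id := range strings.Fields(string(ids)) {
		out, err := exec.Command("sudo", "crictl", "logs",
			"--tail", fmt.Sprint(lines), id).CombinedOutput()
		if err != nil {
			return nil, err
		}
		logs[id] = string(out)
	}
	return logs, nil
}

func main() {
	logs, err := tailComponentLogs("kube-apiserver", 400)
	fmt.Println(len(logs), err)
}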
I0507 22:45:52.952458 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0507 22:45:52.976316 672811 logs.go:123] Gathering logs for kubelet ... I0507 22:45:52.976343 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0507 22:45:53.036653 672811 logs.go:123] Gathering logs for dmesg ... I0507 22:45:53.036690 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0507 22:45:53.057944 672811 logs.go:123] Gathering logs for describe nodes ... I0507 22:45:53.057967 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0507 22:45:55.641264 672811 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0507 22:45:55.660478 672811 api_server.go:70] duration metric: took 4m3.942074808s to wait for apiserver process to appear ... I0507 22:45:55.660507 672811 api_server.go:86] waiting for apiserver healthz status ... I0507 22:45:55.660536 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]} I0507 22:45:55.660583 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver I0507 22:45:55.681645 672811 cri.go:76] found id: "b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56" I0507 22:45:55.681673 672811 cri.go:76] found id: "" I0507 22:45:55.681680 672811 logs.go:270] 1 containers: [b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56] I0507 22:45:55.681720 672811 ssh_runner.go:149] Run: which crictl I0507 22:45:55.684913 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]} I0507 22:45:55.684970 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd I0507 22:45:55.705493 672811 cri.go:76] found id: "9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259" I0507 22:45:55.705512 672811 cri.go:76] found id: "" I0507 22:45:55.705520 672811 logs.go:270] 1 containers: [9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259] I0507 22:45:55.705566 672811 ssh_runner.go:149] Run: which crictl I0507 22:45:55.708189 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]} I0507 22:45:55.708243 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns I0507 22:45:55.728489 672811 cri.go:76] found id: "" I0507 22:45:55.728507 672811 logs.go:270] 0 containers: [] W0507 22:45:55.728513 672811 logs.go:272] No container was found matching "coredns" I0507 22:45:55.728520 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]} I0507 22:45:55.728577 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler I0507 22:45:55.748870 672811 cri.go:76] found id: "148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac" I0507 22:45:55.748891 672811 cri.go:76] found id: "" I0507 22:45:55.748897 672811 logs.go:270] 1 containers: [148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac] I0507 22:45:55.748931 672811 ssh_runner.go:149] Run: which crictl I0507 22:45:55.751528 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]} I0507 22:45:55.751588 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy I0507 
22:45:55.771423 672811 cri.go:76] found id: "75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c" I0507 22:45:55.771447 672811 cri.go:76] found id: "" I0507 22:45:55.771454 672811 logs.go:270] 1 containers: [75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c] I0507 22:45:55.771493 672811 ssh_runner.go:149] Run: which crictl I0507 22:45:55.774059 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]} I0507 22:45:55.774100 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard I0507 22:45:55.793936 672811 cri.go:76] found id: "" I0507 22:45:55.793955 672811 logs.go:270] 0 containers: [] W0507 22:45:55.793962 672811 logs.go:272] No container was found matching "kubernetes-dashboard" I0507 22:45:55.793968 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]} I0507 22:45:55.794010 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner I0507 22:45:55.814066 672811 cri.go:76] found id: "b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4" I0507 22:45:55.814087 672811 cri.go:76] found id: "" I0507 22:45:55.814094 672811 logs.go:270] 1 containers: [b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4] I0507 22:45:55.814132 672811 ssh_runner.go:149] Run: which crictl I0507 22:45:55.816677 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]} I0507 22:45:55.816729 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager I0507 22:45:55.836707 672811 cri.go:76] found id: "fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9" I0507 22:45:55.836735 672811 cri.go:76] found id: "" I0507 22:45:55.836743 672811 logs.go:270] 1 containers: [fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9] I0507 22:45:55.836785 672811 ssh_runner.go:149] Run: which crictl I0507 22:45:55.839333 672811 logs.go:123] Gathering logs for describe nodes ... I0507 22:45:55.839356 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0507 22:45:55.924686 672811 logs.go:123] Gathering logs for kube-apiserver [b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56] ... I0507 22:45:55.924720 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56" I0507 22:45:55.962145 672811 logs.go:123] Gathering logs for containerd ... I0507 22:45:55.962173 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400" I0507 22:45:56.001877 672811 logs.go:123] Gathering logs for kubelet ... I0507 22:45:56.001906 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0507 22:45:56.066223 672811 logs.go:123] Gathering logs for dmesg ... I0507 22:45:56.066252 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0507 22:45:56.087631 672811 logs.go:123] Gathering logs for etcd [9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259] ... 
I0507 22:45:56.087656 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259" I0507 22:45:56.113575 672811 logs.go:123] Gathering logs for kube-scheduler [148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac] ... I0507 22:45:56.113600 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac" I0507 22:45:56.139177 672811 logs.go:123] Gathering logs for kube-proxy [75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c] ... I0507 22:45:56.139205 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c" I0507 22:45:56.160446 672811 logs.go:123] Gathering logs for storage-provisioner [b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4] ... I0507 22:45:56.160467 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4" I0507 22:45:56.181281 672811 logs.go:123] Gathering logs for kube-controller-manager [fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9] ... I0507 22:45:56.181304 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9" I0507 22:45:56.215392 672811 logs.go:123] Gathering logs for container status ... I0507 22:45:56.215415 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0507 22:45:58.739121 672811 api_server.go:223] Checking apiserver healthz at https://192.168.58.2:8443/healthz ... I0507 22:45:58.747973 672811 api_server.go:249] https://192.168.58.2:8443/healthz returned 200: ok I0507 22:45:58.748915 672811 api_server.go:139] control plane version: v1.20.2 I0507 22:45:58.748937 672811 api_server.go:129] duration metric: took 3.088423463s to wait for apiserver health ... I0507 22:45:58.748946 672811 system_pods.go:43] waiting for kube-system pods to appear ... 
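The healthz probe above ("Checking apiserver healthz at https://192.168.58.2:8443/healthz ... returned 200: ok") is a plain HTTPS GET expecting a 200 response. A sketch of the same probe; TLS verification is skipped here purely for brevity, whereas a real client would trust minikubeCA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues a GET against the apiserver's /healthz endpoint and
// treats anything other than HTTP 200 as unhealthy.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for this sketch only; do not skip verification in real code.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	fmt.Println(checkHealthz("https://192.168.58.2:8443/healthz"))
}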
I0507 22:45:58.748968 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0507 22:45:58.749014 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0507 22:45:58.772016 672811 cri.go:76] found id: "b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56"
I0507 22:45:58.772034 672811 cri.go:76] found id: ""
I0507 22:45:58.772041 672811 logs.go:270] 1 containers: [b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56]
I0507 22:45:58.772081 672811 ssh_runner.go:149] Run: which crictl
I0507 22:45:58.774963 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0507 22:45:58.775021 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
I0507 22:45:58.796011 672811 cri.go:76] found id: "9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259"
I0507 22:45:58.796030 672811 cri.go:76] found id: ""
I0507 22:45:58.796038 672811 logs.go:270] 1 containers: [9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259]
I0507 22:45:58.796077 672811 ssh_runner.go:149] Run: which crictl
I0507 22:45:58.798611 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0507 22:45:58.798654 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
I0507 22:45:58.819119 672811 cri.go:76] found id: ""
I0507 22:45:58.819141 672811 logs.go:270] 0 containers: []
W0507 22:45:58.819148 672811 logs.go:272] No container was found matching "coredns"
I0507 22:45:58.819155 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0507 22:45:58.819199 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0507 22:45:58.838941 672811 cri.go:76] found id: "148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac"
I0507 22:45:58.838959 672811 cri.go:76] found id: ""
I0507 22:45:58.838964 672811 logs.go:270] 1 containers: [148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac]
I0507 22:45:58.839011 672811 ssh_runner.go:149] Run: which crictl
I0507 22:45:58.841577 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0507 22:45:58.841630 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0507 22:45:58.862008 672811 cri.go:76] found id: "75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c"
I0507 22:45:58.862038 672811 cri.go:76] found id: ""
I0507 22:45:58.862046 672811 logs.go:270] 1 containers: [75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c]
I0507 22:45:58.862086 672811 ssh_runner.go:149] Run: which crictl
I0507 22:45:58.864678 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0507 22:45:58.864729 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0507 22:45:58.884659 672811 cri.go:76] found id: ""
I0507 22:45:58.884673 672811 logs.go:270] 0 containers: []
W0507 22:45:58.884678 672811 logs.go:272] No container was found matching "kubernetes-dashboard"
I0507 22:45:58.884685 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0507 22:45:58.884728 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0507 22:45:58.904618 672811 cri.go:76] found id: "b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4"
I0507 22:45:58.904641 672811 cri.go:76] found id: ""
I0507 22:45:58.904648 672811 logs.go:270] 1 containers: [b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4]
I0507 22:45:58.904679 672811 ssh_runner.go:149] Run: which crictl
I0507 22:45:58.907292 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0507 22:45:58.907336 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0507 22:45:58.927242 672811 cri.go:76] found id: "fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9"
I0507 22:45:58.927257 672811 cri.go:76] found id: ""
I0507 22:45:58.927262 672811 logs.go:270] 1 containers: [fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9]
I0507 22:45:58.927292 672811 ssh_runner.go:149] Run: which crictl
I0507 22:45:58.929833 672811 logs.go:123] Gathering logs for kube-scheduler [148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac] ...
I0507 22:45:58.929851 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac"
I0507 22:45:58.952999 672811 logs.go:123] Gathering logs for kube-proxy [75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c] ...
I0507 22:45:58.953020 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c"
I0507 22:45:58.974611 672811 logs.go:123] Gathering logs for container status ...
I0507 22:45:58.974637 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0507 22:45:58.997315 672811 logs.go:123] Gathering logs for kubelet ...
I0507 22:45:58.997340 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0507 22:45:59.057902 672811 logs.go:123] Gathering logs for dmesg ...
I0507 22:45:59.057927 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0507 22:45:59.079247 672811 logs.go:123] Gathering logs for describe nodes ...
I0507 22:45:59.079269 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0507 22:45:59.161719 672811 logs.go:123] Gathering logs for kube-apiserver [b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56] ...
I0507 22:45:59.161753 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56"
I0507 22:45:59.199262 672811 logs.go:123] Gathering logs for etcd [9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259] ...
I0507 22:45:59.199288 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259"
I0507 22:45:59.224956 672811 logs.go:123] Gathering logs for storage-provisioner [b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4] ...
I0507 22:45:59.224982 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4"
I0507 22:45:59.246819 672811 logs.go:123] Gathering logs for kube-controller-manager [fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9] ...
I0507 22:45:59.246842 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9"
I0507 22:45:59.283861 672811 logs.go:123] Gathering logs for containerd ...
I0507 22:45:59.283890 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0507 22:46:01.824624 672811 system_pods.go:59] 7 kube-system pods found
I0507 22:46:01.824672 672811 system_pods.go:61] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:01.824678 672811 system_pods.go:61] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:01.824684 672811 system_pods.go:61] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:01.824689 672811 system_pods.go:61] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:01.824695 672811 system_pods.go:61] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:01.824699 672811 system_pods.go:61] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:01.824704 672811 system_pods.go:61] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:01.824709 672811 system_pods.go:74] duration metric: took 3.075758667s to wait for pod list to return data ...
I0507 22:46:01.824722 672811 default_sa.go:34] waiting for default service account to be created ...
I0507 22:46:01.826962 672811 default_sa.go:45] found service account: "default"
I0507 22:46:01.826987 672811 default_sa.go:55] duration metric: took 2.259407ms for default service account to be created ...
I0507 22:46:01.826995 672811 system_pods.go:116] waiting for k8s-apps to be running ...
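The "waiting for k8s-apps" phase that follows polls the kube-system namespace through the apiserver until every expected component is healthy. A rough client-go equivalent, as a sketch only: waitForKubeDNS is a hypothetical helper, it checks only pod phase and a coredns name prefix, whereas minikube's system_pods.go also inspects readiness conditions (the coredns pod below sits in Pending, which is what keeps the loop spinning):

package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForKubeDNS polls kube-system until a coredns pod (the kube-dns
// component on this cluster) reports phase Running, or the deadline hits.
func waitForKubeDNS(kubeconfig string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err == nil {
			for _, p := range pods.Items {
				if strings.HasPrefix(p.Name, "coredns") && p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("kube-dns did not become Running within %v", timeout)
}

func main() {
	// clientcmd.RecommendedHomeFile is ~/.kube/config; the test run above
	// points at the kubeconfig under its MINIKUBE_HOME instead.
	fmt.Println(waitForKubeDNS(clientcmd.RecommendedHomeFile, 5*time.Minute))
}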
I0507 22:46:01.830985 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:01.831020 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:01.831030 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:01.831039 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:01.831047 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:01.831074 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:01.831081 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:01.831086 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:01.831099 672811 retry.go:31] will retry after 305.063636ms: missing components: kube-dns
I0507 22:46:02.140549 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:02.140579 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:02.140585 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:02.140593 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:02.140600 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:02.140608 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:02.140614 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:02.140621 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:02.140634 672811 retry.go:31] will retry after 338.212508ms: missing components: kube-dns
I0507 22:46:02.483304 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:02.483338 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:02.483345 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:02.483351 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:02.483355 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:02.483359 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:02.483364 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:02.483367 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:02.483378 672811 retry.go:31] will retry after 378.459802ms: missing components: kube-dns
I0507 22:46:02.867187 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:02.867218 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:02.867226 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:02.867234 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:02.867241 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:02.867250 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:02.867258 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:02.867264 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:02.867277 672811 retry.go:31] will retry after 469.882201ms: missing components: kube-dns
I0507 22:46:03.341758 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:03.341789 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:03.341795 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:03.341801 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:03.341806 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:03.341810 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:03.341814 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:03.341817 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:03.341828 672811 retry.go:31] will retry after 667.365439ms: missing components: kube-dns
I0507 22:46:04.013373 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:04.013405 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:04.013411 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:04.013417 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:04.013422 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:04.013425 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:04.013430 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:04.013433 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:04.013443 672811 retry.go:31] will retry after 597.243124ms: missing components: kube-dns
I0507 22:46:04.615326 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:04.615358 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:04.615366 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:04.615375 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:04.615386 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:04.615398 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:04.615403 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:04.615410 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:04.615422 672811 retry.go:31] will retry after 789.889932ms: missing components: kube-dns
I0507 22:46:05.411070 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:05.411103 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:05.411109 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:05.411115 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:05.411120 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:05.411124 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:05.411128 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:05.411134 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:05.411145 672811 retry.go:31] will retry after 951.868007ms: missing components: kube-dns
I0507 22:46:06.367954 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:06.367985 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:06.367994 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:06.368003 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:06.368008 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:06.368012 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:06.368016 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:06.368022 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:06.368033 672811 retry.go:31] will retry after 1.341783893s: missing components: kube-dns
I0507 22:46:07.715243 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:07.715278 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:07.715284 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:07.715290 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:07.715294 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:07.715299 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:07.715303 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:07.715307 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:07.715318 672811 retry.go:31] will retry after 1.876813009s: missing components: kube-dns
I0507 22:46:09.596846 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:09.596877 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:09.596883 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:09.596889 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:09.596894 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:09.596898 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:09.596902 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:09.596908 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:09.596919 672811 retry.go:31] will retry after 2.6934314s: missing components: kube-dns
I0507 22:46:12.295432 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:12.295467 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:12.295473 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:12.295479 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:12.295484 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:12.295488 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:12.295492 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:12.295496 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:12.295535 672811 retry.go:31] will retry after 2.494582248s: missing components: kube-dns
I0507 22:46:14.802279 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:14.802312 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:14.802319 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:14.802328 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:14.802332 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:14.802338 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:14.802347 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:14.802351 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:14.802365 672811 retry.go:31] will retry after 3.420895489s: missing components: kube-dns
I0507 22:46:18.228571 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:18.228606 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:18.228614 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:18.228620 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:18.228625 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:18.228629 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:18.228634 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:18.228641 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:18.228690 672811 retry.go:31] will retry after 4.133785681s: missing components: kube-dns
I0507 22:46:22.368039 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:22.368077 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:22.368083 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:22.368090 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:22.368094 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:22.368099 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:22.368104 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:22.368110 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:22.368123 672811 retry.go:31] will retry after 5.595921491s: missing components: kube-dns
I0507 22:46:27.968419 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:27.968457 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:27.968468 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:27.968478 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:27.968485 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:27.968491 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:27.968500 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:27.968506 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:27.968522 672811 retry.go:31] will retry after 6.3346098s: missing components: kube-dns
I0507 22:46:34.308467 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:34.308500 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:34.308506 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:34.308513 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:34.308517 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:34.308521 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:34.308525 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:34.308529 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:34.308550 672811 retry.go:31] will retry after 7.962971847s: missing components: kube-dns
I0507 22:46:42.276615 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:42.276650 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:42.276658 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:42.276674 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:42.276682 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:42.276692 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:42.276702 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:42.276711 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:42.276728 672811 retry.go:31] will retry after 12.096349863s: missing components: kube-dns
I0507 22:46:54.377899 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:54.377933 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:54.377939 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:54.377945 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:54.377950 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:54.377954 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:54.377959 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:54.377962 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:54.377976 672811 retry.go:31] will retry after 11.924857264s: missing components: kube-dns
I0507 22:47:06.308089 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:47:06.308137 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:47:06.308147 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:47:06.308156 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:47:06.308169 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:47:06.308181 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:47:06.308189 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:47:06.308195 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:47:06.308215 672811 retry.go:31] will retry after 14.772791249s: missing components: kube-dns
I0507 22:47:21.085968 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:47:21.086010 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:47:21.086021 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:47:21.086030 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:47:21.086040 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:47:21.086054 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:47:21.086061 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:47:21.086068 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:47:21.086093 672811 retry.go:31] will retry after 20.175608267s: missing components: kube-dns
I0507 22:47:41.266530 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:47:41.266567 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:47:41.266575 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:47:41.266583 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:47:41.266587 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:47:41.266592 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:47:41.266596 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:47:41.266600 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:47:41.266611 672811 retry.go:31] will retry after 28.062855718s: missing components: kube-dns
I0507 22:48:09.334307 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:48:09.334345 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:48:09.334354 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:48:09.334362 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:48:09.334369 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:48:09.334378 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:48:09.334385 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:48:09.334392 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:48:09.334407 672811 retry.go:31] will retry after 40.022161579s: missing components: kube-dns
I0507 22:48:49.361787 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:48:49.361828 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:48:49.361835 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:48:49.361841 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:48:49.361846 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:48:49.361849 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:48:49.361856 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:48:49.361860 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:48:49.361874 672811 retry.go:31] will retry after 37.970670965s: missing components: kube-dns
I0507 22:49:27.337225 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:49:27.337262 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:49:27.337269 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:49:27.337276 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:49:27.337280 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:49:27.337284 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:49:27.337289 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:49:27.337292 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:49:27.337304 672811 retry.go:31] will retry after 47.568379235s: missing components: kube-dns
I0507 22:50:14.911358 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:50:14.911396 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:50:14.911404 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:50:14.911411 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:50:14.911415 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:50:14.911419 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:50:14.911423 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:50:14.911428 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:50:14.911439 672811 retry.go:31] will retry after 1m7.577191067s: missing components: kube-dns
I0507 22:51:22.494081 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:51:22.494122 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:51:22.494130 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:51:22.494136 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:51:22.494141 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:51:22.494144 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:51:22.494148 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:51:22.494153 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:51:22.496731 672811 out.go:170]
W0507 22:51:22.496964 672811 out.go:235] X Exiting due to GUEST_START: wait 5m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
X Exiting due to GUEST_START: wait 5m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
W0507 22:51:22.496978 672811 out.go:424] no arguments passed for "* \n" - returning raw string
W0507 22:51:22.496984 672811 out.go:235] *
*
W0507 22:51:22.496995 672811 out.go:424] no arguments passed for "* If the above advice does not help, please let us know:\n" - returning raw string
W0507 22:51:22.497001 672811 out.go:424] no arguments passed for "  https://github.com/kubernetes/minikube/issues/new/choose\n\n" - returning raw string
W0507 22:51:22.497005 672811 out.go:424] no arguments passed for "* Please attach the following file to the GitHub issue:\n" - returning raw string
W0507 22:51:22.497050 672811 out.go:424] no arguments passed for "* If the above advice does not help, please let us know:\n  https://github.com/kubernetes/minikube/issues/new/choose\n\n* Please attach the following file to the GitHub issue:\n* - /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/logs/lastStart.txt\n\n" - returning raw string
W0507 22:51:22.498864 672811 out.go:235] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
W0507 22:51:22.498879 672811 out.go:235] │ │
│ │
W0507 22:51:22.498885 672811 out.go:235] │ * If the above advice does not help, please let us know: │
│ * If the above advice does not help, please let us know: │
W0507 22:51:22.498891 672811 out.go:235] │ https://github.com/kubernetes/minikube/issues/new/choose │
│ https://github.com/kubernetes/minikube/issues/new/choose │
W0507 22:51:22.498898 672811 out.go:235] │ │
│ │
W0507 22:51:22.498906 672811 out.go:235] │ * Please attach the following file to the GitHub issue: │
│ * Please attach the following file to the GitHub issue: │
W0507 22:51:22.498917 672811 out.go:235] │ * - /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/logs/lastStart.txt │
│ * - /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/logs/lastStart.txt │
W0507 22:51:22.498930 672811 out.go:235] │ │
│ │
W0507 22:51:22.498941 672811 out.go:235] ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
W0507 22:51:22.498954 672811 out.go:235]
I0507 22:51:22.500255 672811 out.go:170]
** /stderr **
net_test.go:85: failed start: exit status 80
=== CONT TestNetworkPlugins/group/kubenet
net_test.go:192: "kubenet" test finished in 30m48.479861024s, failed=true
net_test.go:193: *** TestNetworkPlugins/group/kubenet FAILED at 2021-05-07 22:51:22.537551729 +0000 UTC m=+3716.411783606
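The retry.go lines above show the wait strategy at work: delays start near 300ms and grow with jitter (305ms, 338ms, 378ms, ... up to over a minute) until the --wait-timeout=5m budget is spent, at which point the start exits with GUEST_START. A sketch of that backoff pattern, offered as an illustration only, not minikube's actual retry package:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil re-runs check with growing, jittered delays until it
// succeeds or the deadline passes, mirroring the cadence logged above.
func retryUntil(deadline time.Time, check func() error) error {
	delay := 300 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting: %w", err)
		}
		// sleep the base delay plus up to ~30% jitter, then grow it
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)/3+1)))
		delay = delay * 3 / 2
	}
}

func main() {
	// The real test used --wait-timeout=5m; keep the demo short.
	deadline := time.Now().Add(3 * time.Second)
	err := retryUntil(deadline, func() error {
		return errors.New("missing components: kube-dns")
	})
	fmt.Println(err) // timed out waiting: missing components: kube-dns
}

With a check that never succeeds, as with kube-dns here, the loop can only exit at the deadline, which is why the run burns the full five minutes before failing.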
"HostPort": "" } ], "32443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "" } ] }, "RestartPolicy": { "Name": "no", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": null, "CapDrop": null, "Capabilities": null, "Dns": [], "DnsOptions": [], "DnsSearch": [], "ExtraHosts": null, "GroupAdd": null, "IpcMode": "private", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": true, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": [ "seccomp=unconfined", "apparmor=unconfined", "label=disable" ], "Tmpfs": { "/run": "", "/tmp": "" }, "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 0, "Memory": 0, "NanoCpus": 2000000000, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": [], "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [], "DeviceCgroupRules": null, "DeviceRequests": null, "KernelMemory": 0, "KernelMemoryTCP": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": null, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": null, "ReadonlyPaths": null }, "GraphDriver": { "Data": { "LowerDir": "/var/lib/docker/overlay2/be251d6ef0381781a058cf349dd9a6dd9ac41de5dba47f52c6113ca65be1f889-init/diff:/var/lib/docker/overlay2/1e5fa0ed3c3f4bec9b97cabd8aaa709f5915b54c42d527ba46e8ffa9ebcb7f9a/diff:/var/lib/docker/overlay2/00098e5ff94787f022c282488f937bf3694bcc2f80e6f324f2cb94189fadc609/diff:/var/lib/docker/overlay2/0751219afdacf9c8a75fced952b1ad013a8d5b6fbee07adc96e9f305877d0131/diff:/var/lib/docker/overlay2/4fed3d3ec94e4b275966ac815cabeee3572325ca655dcb69e8d31d2051468a10/diff:/var/lib/docker/overlay2/a78b251d86ddd3460876cbc21fef7421c2e76ba3f3198b79f3af7fe8092297f6/diff:/var/lib/docker/overlay2/f3609509e8e931753320e2da77988a3cdd78a58c167b428b96a3aa29971edb5e/diff:/var/lib/docker/overlay2/ebeb53c34330c6713e55bb0d98076f6618884e3bdcd6b888ad1965c69f65b14d/diff:/var/lib/docker/overlay2/1efdecf3c4a2226dd59cc51906581e2326beec3a6b7090c09e437b80c90794b0/diff:/var/lib/docker/overlay2/4c7309d0146fa644c2eb195cb344f6b10894237fb65248ee8391d1790ac7f765/diff:/var/lib/docker/overlay2/424a19d5d18bedf5b29c5b9ffd2c72e8c9e112f2fd414acd046bfa963d0526c7/diff:/var/lib/docker/overlay2/1846dd5e13995c56277d370ac401df36ad796851e8f2315dfab9ff02f487b8fc/diff:/var/lib/docker/overlay2/9393786bec1ad7d470bbbb5c7a94ec2131900fa0c6d2ad39b1039fc6795a2683/diff:/var/lib/docker/overlay2/708ff6a0ffe352ea29dabc0c453ebb09ccede3e24ae9f3fb51e06680ed43e597/diff:/var/lib/docker/overlay2/5a536ba767666ddc007ad059bfa077204239088ff6093831b1b5a0aff36a88ea/diff:/var/lib/docker/overlay2/1d4b0ac5e44186da0f4ee859bb5c23df30087789d88e253dfd57e0ffb21bb88c/diff:/var/lib/docker/overlay2/2b67d6a3428317a2f483420befe919fd660743c5f1494d075867507afe929344/diff:/var/lib/docker/overlay2/abef0f23a7f068f22910d10fcf3ed65c4804f84a4a9aa126a6ac79666f87ab63/diff:/var/lib/docker/overlay2/ec0c450f32e0e573b78fc8537f87456c96a10f353e8bb6e28b4cde51d4b78237/diff:/var/lib/docker/overlay2/ba3b904a6ce3d016a1ef237a88f0e5d4d3b08a8c68e6e4c808b54ffb59e19ee3/diff:/var/lib/docker/overlay2/160d3a3a918b002bb27e1f108db1504
83cfb4c1383ab9bea5f7d5b983af0f57f/diff:/var/lib/docker/overlay2/ed771b935b96f93ce682cdd9d22155225a918436de84fb5d56eb6214e36d7e27/diff:/var/lib/docker/overlay2/a298f74d3f51b9716985e7c6a84a4fe16a9badceeb4fbcc5847e9313a496c203/diff:/var/lib/docker/overlay2/7f4ddade1e222fcfd5747b07b270a54575ecfdbdf23dc72c6aa8984cb14b4f6b/diff:/var/lib/docker/overlay2/8522467e2a2b9517f0e9fe828bf20d40830fb4364323ea1b17c1ae43e68f1633/diff:/var/lib/docker/overlay2/7b8ac1e2dcffd2cd29a0fe315f23ba717abac176d21484016b19e33e1ceb3f15/diff:/var/lib/docker/overlay2/219fbaff646669aefdda08db39e5c449632d42e036ba372e6fbfd2e74d05895c/diff:/var/lib/docker/overlay2/169017ab906e8cd6c768272fbbd27db4564b7ea84520773194f7b8d1c5725ce4/diff:/var/lib/docker/overlay2/3f2355256f7a67382c67f2079a79f9a3568cd4aac75dcb8e549d040ea3e3801c/diff:/var/lib/docker/overlay2/049eedb4ea37711e06782dfa1648c66d0e215e8b8eb540da6bd9b7729e88b4c6/diff:/var/lib/docker/overlay2/685ece42c012e8b988affc555e627ea46a42003f7fb6511dc68fb9da6c515fd8/diff:/var/lib/docker/overlay2/224f8f237d1ebeb57711074d5b9338b377abc164e67d85cd8b48264062798e8a/diff:/var/lib/docker/overlay2/280191c44865a7db266046c55f36cee27c985b893bca0a97310569a5df684c8a/diff:/var/lib/docker/overlay2/2a04e90c25bcb0264edd485b59f54c8e6c28a2d0c63f696590f1876b164e0ad8/diff:/var/lib/docker/overlay2/9c5536844b05a6fcc7c6de17ba2cd59669716e44474ac06421119d86c04f197e/diff:/var/lib/docker/overlay2/0db732ad07139625742260350f06f46f9978ae313af26f4afdab09884382542c/diff:/var/lib/docker/overlay2/d7e4510c4ab4dcfcd652b63a086da8e4f53866cf61cc72dfacd6e24a7ba895ac/diff", "MergedDir": "/var/lib/docker/overlay2/be251d6ef0381781a058cf349dd9a6dd9ac41de5dba47f52c6113ca65be1f889/merged", "UpperDir": "/var/lib/docker/overlay2/be251d6ef0381781a058cf349dd9a6dd9ac41de5dba47f52c6113ca65be1f889/diff", "WorkDir": "/var/lib/docker/overlay2/be251d6ef0381781a058cf349dd9a6dd9ac41de5dba47f52c6113ca65be1f889/work" }, "Name": "overlay2" }, "Mounts": [ { "Type": "volume", "Name": "kubenet-20210507224052-391940", "Source": "/var/lib/docker/volumes/kubenet-20210507224052-391940/_data", "Destination": "/var", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "" }, { "Type": "bind", "Source": "/lib/modules", "Destination": "/lib/modules", "Mode": "ro", "RW": false, "Propagation": "rprivate" } ], "Config": { "Hostname": "kubenet-20210507224052-391940", "Domainname": "", "User": "root", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "ExposedPorts": { "22/tcp": {}, "2376/tcp": {}, "32443/tcp": {}, "5000/tcp": {}, "8443/tcp": {} }, "Tty": true, "OpenStdin": false, "StdinOnce": false, "Env": [ "container=docker", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" ], "Cmd": null, "Image": "gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e", "Volumes": null, "WorkingDir": "", "Entrypoint": [ "/usr/local/bin/entrypoint", "/sbin/init" ], "OnBuild": null, "Labels": { "created_by.minikube.sigs.k8s.io": "true", "mode.minikube.sigs.k8s.io": "kubenet-20210507224052-391940", "name.minikube.sigs.k8s.io": "kubenet-20210507224052-391940", "role.minikube.sigs.k8s.io": "" }, "StopSignal": "SIGRTMIN+3" }, "NetworkSettings": { "Bridge": "", "SandboxID": "56ded832994a2b524d39ff9d668d792a8010437bcbaf26b51d56405c4b2b1b61", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "33326" } ], "2376/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "33325" } ], "32443/tcp": [ { "HostIp": 
"127.0.0.1", "HostPort": "33322" } ], "5000/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "33324" } ], "8443/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "33323" } ] }, "SandboxKey": "/var/run/docker/netns/56ded832994a", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "kubenet-20210507224052-391940": { "IPAMConfig": { "IPv4Address": "192.168.58.2" }, "Links": null, "Aliases": [ "9896eef2111e" ], "NetworkID": "05bd40befec940de02d6e92454d4c2ae059b5b2da2404f70d183f81e3e8c2eb2", "EndpointID": "6a5963f03c9af28e63680a1e821ab78a428968c155b18de737b07939cf598447", "Gateway": "192.168.58.1", "IPAddress": "192.168.58.2", "IPPrefixLen": 24, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:c0:a8:3a:02", "DriverOpts": null } } } } ] -- /stdout -- helpers_test.go:235: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p kubenet-20210507224052-391940 -n kubenet-20210507224052-391940 helpers_test.go:240: <<< TestNetworkPlugins/group/kubenet FAILED: start of post-mortem logs <<< helpers_test.go:241: ======> post-mortem[TestNetworkPlugins/group/kubenet]: minikube logs <====== helpers_test.go:243: (dbg) Run: out/minikube-linux-amd64 -p kubenet-20210507224052-391940 logs -n 25 helpers_test.go:248: TestNetworkPlugins/group/kubenet logs: -- stdout -- * * ==> Audit <== * |---------|--------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------| | Command | Args | Profile | User | Version | Start Time | End Time | |---------|--------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------| | delete | -p | default-k8s-different-port-20210507222942-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:34:54 UTC | Fri, 07 May 2021 22:34:55 UTC | | | default-k8s-different-port-20210507222942-391940 | | | | | | | start | -p auto-20210507223250-391940 | auto-20210507223250-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:32:50 UTC | Fri, 07 May 2021 22:35:18 UTC | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --driver=docker | | | | | | | | --container-runtime=containerd | | | | | | | ssh | -p auto-20210507223250-391940 | auto-20210507223250-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:35:18 UTC | Fri, 07 May 2021 22:35:18 UTC | | | pgrep -a kubelet | | | | | | | start | -p | cilium-20210507223455-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:34:55 UTC | Fri, 07 May 2021 22:37:15 UTC | | | cilium-20210507223455-391940 | | | | | | | | --memory=2048 | | | | | | | | --alsologtostderr | | | | | | | | --wait=true --wait-timeout=5m | | | | | | | | --cni=cilium --driver=docker | | | | | | | | --container-runtime=containerd | | | | | | | ssh | -p | cilium-20210507223455-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:37:20 UTC | Fri, 07 May 2021 22:37:21 UTC | | | cilium-20210507223455-391940 | | | | | | | | pgrep -a kubelet | | | | | | | delete | -p | cilium-20210507223455-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:37:29 UTC | Fri, 07 May 2021 22:37:33 UTC | | | cilium-20210507223455-391940 | | | | | | | -p | auto-20210507223250-391940 | auto-20210507223250-391940 | jenkins | 
v1.20.0 | Fri, 07 May 2021 22:38:09 UTC | Fri, 07 May 2021 22:38:10 UTC |
| | logs -n 25 | | | | | |
| delete | -p auto-20210507223250-391940 | auto-20210507223250-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:38:11 UTC | Fri, 07 May 2021 22:38:14 UTC |
| start | -p calico-20210507223733-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker --container-runtime=containerd | calico-20210507223733-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:37:33 UTC | Fri, 07 May 2021 22:39:58 UTC |
| ssh | -p calico-20210507223733-391940 pgrep -a kubelet | calico-20210507223733-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:40:03 UTC | Fri, 07 May 2021 22:40:03 UTC |
| start | -p custom-weave-20210507223739-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker --container-runtime=containerd | custom-weave-20210507223739-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:37:39 UTC | Fri, 07 May 2021 22:40:11 UTC |
| ssh | -p custom-weave-20210507223739-391940 pgrep -a kubelet | custom-weave-20210507223739-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:40:11 UTC | Fri, 07 May 2021 22:40:12 UTC |
| delete | -p calico-20210507223733-391940 | calico-20210507223733-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:40:13 UTC | Fri, 07 May 2021 22:40:17 UTC |
| delete | -p custom-weave-20210507223739-391940 | custom-weave-20210507223739-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:40:20 UTC | Fri, 07 May 2021 22:40:24 UTC |
| start | -p enable-default-cni-20210507223814-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker --container-runtime=containerd | enable-default-cni-20210507223814-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:38:14 UTC | Fri, 07 May 2021 22:40:30 UTC |
| ssh | -p enable-default-cni-20210507223814-391940 pgrep -a kubelet | enable-default-cni-20210507223814-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:40:30 UTC | Fri, 07 May 2021 22:40:30 UTC |
| delete | -p enable-default-cni-20210507223814-391940 | enable-default-cni-20210507223814-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:40:49 UTC | Fri, 07 May 2021 22:40:52 UTC |
| start | -p kindnet-20210507224017-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker --container-runtime=containerd | kindnet-20210507224017-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:40:17 UTC | Fri, 07 May 2021 22:42:19 UTC |
| ssh | -p kindnet-20210507224017-391940 pgrep -a kubelet | kindnet-20210507224017-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:42:24 UTC | Fri, 07 May 2021 22:42:25 UTC |
| delete | -p kindnet-20210507224017-391940 | kindnet-20210507224017-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:42:34 UTC | Fri, 07 May 2021 22:42:37 UTC |
| start | -p bridge-20210507224024-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker --container-runtime=containerd | bridge-20210507224024-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:40:24 UTC | Fri, 07 May 2021 22:43:07 UTC |
| ssh | -p bridge-20210507224024-391940 pgrep -a kubelet | bridge-20210507224024-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:43:07 UTC | Fri, 07 May 2021 22:43:07 UTC |
| delete | -p bridge-20210507224024-391940 | bridge-20210507224024-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:43:16 UTC | Fri, 07 May 2021 22:43:19 UTC |
| -p | false-20210507223341-391940 logs -n 25 | false-20210507223341-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:44:19 UTC | Fri, 07 May 2021 22:44:20 UTC |
| delete | -p false-20210507223341-391940 | false-20210507223341-391940 | jenkins | v1.20.0 | Fri, 07 May 2021 22:44:20 UTC | Fri, 07 May 2021 22:44:23 UTC |
|---------|--------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
*
* ==> Last Start <==
*
Log file created at: 2021/05/07 22:40:52
Running on machine: debian-jenkins-agent-11
Binary: Built with gc go1.16.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0507 22:40:52.878518 672811 out.go:291] Setting OutFile to fd 1 ...
I0507 22:40:52.878673 672811 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 22:40:52.878682 672811 out.go:304] Setting ErrFile to fd 2...
I0507 22:40:52.878685 672811 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 22:40:52.878775 672811 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/bin
I0507 22:40:52.879029 672811 out.go:298] Setting JSON to false
I0507 22:40:52.914708 672811 start.go:108] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":12020,"bootTime":1620415232,"procs":350,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
I0507 22:40:52.914791 672811 start.go:118] virtualization: kvm guest
I0507 22:40:52.917552 672811 out.go:170] * [kubenet-20210507224052-391940] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
I0507 22:40:52.919004 672811 out.go:170] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig
I0507 22:40:52.920381 672811 out.go:170] - MINIKUBE_BIN=out/minikube-linux-amd64
I0507 22:40:52.921826 672811 out.go:170] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube
I0507 22:40:52.923176 672811 out.go:170] - MINIKUBE_LOCATION=master
I0507 22:40:52.923813 672811 driver.go:322] Setting default libvirt URI to qemu:///system
I0507 22:40:52.971346 672811 docker.go:119] docker version: linux-19.03.15
I0507 22:40:52.971454 672811 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0507 22:40:53.057850 672811 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:true NGoroutines:79 SystemTime:2021-05-07 22:40:53.008217117 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}}
I0507 22:40:53.057941 672811 docker.go:225] overlay module found
I0507 22:40:53.060186 672811 out.go:170] * Using the docker driver based on user configuration
I0507 22:40:53.060214 672811 start.go:276] selected driver: docker
I0507 22:40:53.060222 672811 start.go:718] validating driver "docker" against <nil>
I0507 22:40:53.060244 672811 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
W0507 22:40:53.060288 672811 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0507 22:40:53.060303 672811 out.go:424] no arguments passed for "! Your cgroup does not allow setting memory.\n" - returning raw string
W0507 22:40:53.060323 672811 out.go:235] ! Your cgroup does not allow setting memory.
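The cgroup warning above is derived from the MemoryLimit/SwapLimit fields visible in the "docker info" dump (note SwapLimit:false and "WARNING: No swap limit support"). A minimal Go sketch, not part of the test suite, that reproduces this pre-flight probe:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask the daemon only for the two capability flags the warning is based on.
	out, err := exec.Command("docker", "info", "--format",
		"{{.MemoryLimit}} {{.SwapLimit}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	fields := strings.Fields(string(out))
	if len(fields) == 2 && fields[1] == "false" {
		// Mirrors the "Your cgroup does not allow setting memory" warning.
		fmt.Println("no swap limit support; --memory may not be fully enforced")
	}
}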
W0507 22:40:53.060334 672811 out.go:424] no arguments passed for " - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities\n" - returning raw string I0507 22:40:53.061888 672811 out.go:170] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0507 22:40:53.062981 672811 cli_runner.go:115] Run: docker system info --format "{{json .}}" I0507 22:40:53.161855 672811 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:true NGoroutines:79 SystemTime:2021-05-07 22:40:53.100417898 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}} I0507 22:40:53.162014 672811 start_flags.go:259] no existing cluster config was found, will generate one from the flags I0507 22:40:53.162237 672811 start_flags.go:733] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] I0507 22:40:53.162265 672811 cni.go:89] network plugin configured as "kubenet", returning disabled I0507 22:40:53.162274 672811 start_flags.go:273] config: {Name:kubenet-20210507224052-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false 
HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:kubenet-20210507224052-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} I0507 22:40:53.165187 672811 out.go:170] * Starting control plane node kubenet-20210507224052-391940 in cluster kubenet-20210507224052-391940 I0507 22:40:53.165236 672811 cache.go:111] Beginning downloading kic base image for docker with containerd W0507 22:40:53.165246 672811 out.go:424] no arguments passed for "* Pulling base image ...\n" - returning raw string W0507 22:40:53.165261 672811 out.go:424] no arguments passed for "* Pulling base image ...\n" - returning raw string I0507 22:40:53.166925 672811 out.go:170] * Pulling base image ... I0507 22:40:53.166966 672811 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd I0507 22:40:53.167001 672811 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4 I0507 22:40:53.167016 672811 cache.go:54] Caching tarball of preloaded images I0507 22:40:53.167026 672811 image.go:116] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory I0507 22:40:53.167043 672811 preload.go:132] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download I0507 22:40:53.167054 672811 cache.go:57] Finished verifying existence of preloaded tar for v1.20.2 on containerd I0507 22:40:53.167059 672811 image.go:119] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory, skipping pull I0507 22:40:53.167071 672811 cache.go:131] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in cache, skipping pull I0507 22:40:53.167104 672811 image.go:130] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon I0507 22:40:53.167176 672811 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/config.json ... 
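The preload records above check for one deterministic tarball per (Kubernetes version, runtime) pair before downloading anything. A sketch of that cache check; the path layout and name scheme are read off the log lines, and the helper itself is hypothetical:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds the tarball path seen in the log, e.g.
// .minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4
func preloadPath(minikubeHome, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v10-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.20.2", "containerd")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload, skipping download:", p)
	} else {
		fmt.Println("no local preload, would download:", p)
	}
}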
I0507 22:40:53.167206 672811 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/config.json: {Name:mk6f7d3b17ed614f6ce609cdf1a5d1f675228263 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:40:53.247777 672811 image.go:134] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon, skipping pull I0507 22:40:53.247803 672811 cache.go:155] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in daemon, skipping pull I0507 22:40:53.247832 672811 cache.go:194] Successfully downloaded all kic artifacts I0507 22:40:53.247867 672811 start.go:313] acquiring machines lock for kubenet-20210507224052-391940: {Name:mk343db27c7581f71b72b6b890cfa139aa788b8d Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0507 22:40:53.247996 672811 start.go:317] acquired machines lock for "kubenet-20210507224052-391940" in 107.964µs I0507 22:40:53.248026 672811 start.go:89] Provisioning new machine with config: &{Name:kubenet-20210507224052-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:kubenet-20210507224052-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true} I0507 22:40:53.248124 672811 start.go:126] createHost starting for "" (driver="docker") I0507 22:40:52.510485 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:53.011326 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:53.510551 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:54.010720 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default 
--kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:54.510431 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:55.011045 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:55.510393 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:56.010646 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:56.511146 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:57.010497 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:53.250851 672811 out.go:197] * Creating docker container (CPUs=2, Memory=2048MB) ... I0507 22:40:53.251111 672811 start.go:160] libmachine.API.Create for "kubenet-20210507224052-391940" (driver="docker") I0507 22:40:53.251145 672811 client.go:168] LocalClient.Create starting I0507 22:40:53.251244 672811 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem I0507 22:40:53.251275 672811 main.go:128] libmachine: Decoding PEM data... I0507 22:40:53.251311 672811 main.go:128] libmachine: Parsing certificate... I0507 22:40:53.251453 672811 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem I0507 22:40:53.251479 672811 main.go:128] libmachine: Decoding PEM data... I0507 22:40:53.251496 672811 main.go:128] libmachine: Parsing certificate... I0507 22:40:53.251894 672811 cli_runner.go:115] Run: docker network inspect kubenet-20210507224052-391940 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0507 22:40:53.291644 672811 cli_runner.go:162] docker network inspect kubenet-20210507224052-391940 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0507 22:40:53.291722 672811 network_create.go:249] running [docker network inspect kubenet-20210507224052-391940] to gather additional debugging logs... 
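The inspect-then-debug pattern above treats exit code 1 plus "No such network" on stderr as the expected "network absent" case, and anything else as a real failure. A standalone sketch of that distinction (minikube's network_create.go does more):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func networkExists(name string) (bool, error) {
	var stderr bytes.Buffer
	cmd := exec.Command("docker", "network", "inspect", name)
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		if strings.Contains(stderr.String(), "No such network") {
			return false, nil // expected: the network simply does not exist yet
		}
		return false, fmt.Errorf("docker network inspect %s: %v: %s", name, err, stderr.String())
	}
	return true, nil
}

func main() {
	ok, err := networkExists("kubenet-20210507224052-391940")
	fmt.Println(ok, err)
}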
I0507 22:40:53.291743 672811 cli_runner.go:115] Run: docker network inspect kubenet-20210507224052-391940 W0507 22:40:53.341497 672811 cli_runner.go:162] docker network inspect kubenet-20210507224052-391940 returned with exit code 1 I0507 22:40:53.341550 672811 network_create.go:252] error running [docker network inspect kubenet-20210507224052-391940]: docker network inspect kubenet-20210507224052-391940: exit status 1 stdout: [] stderr: Error: No such network: kubenet-20210507224052-391940 I0507 22:40:53.341581 672811 network_create.go:254] output of [docker network inspect kubenet-20210507224052-391940]: -- stdout -- [] -- /stdout -- ** stderr ** Error: No such network: kubenet-20210507224052-391940 ** /stderr ** I0507 22:40:53.342256 672811 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0507 22:40:53.385054 672811 network.go:215] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-b7a55e9e83b1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:be:99:f6:89}} I0507 22:40:53.386400 672811 network.go:263] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc000374028] misses:0} I0507 22:40:53.386443 672811 network.go:210] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}} I0507 22:40:53.386463 672811 network_create.go:100] attempt to create docker network kubenet-20210507224052-391940 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ... 
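The subnet records above skip 192.168.49.0/24 (taken by an earlier profile) and settle on 192.168.58.0/24, i.e. candidate /24s appear to be probed at a step of 9 in the third octet. A sketch under that assumption; the real code also reserves the subnet for 1m0s and inspects host interfaces:

package main

import "fmt"

func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 247; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	// 192.168.49.0/24 is occupied per the "skipping subnet" record above.
	taken := map[string]bool{"192.168.49.0/24": true}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.58.0/24, as chosen in the log
}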
I0507 22:40:53.386518 672811 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20210507224052-391940 I0507 22:40:53.469239 672811 network_create.go:84] docker network kubenet-20210507224052-391940 192.168.58.0/24 created I0507 22:40:53.469289 672811 kic.go:106] calculated static IP "192.168.58.2" for the "kubenet-20210507224052-391940" container I0507 22:40:53.469371 672811 cli_runner.go:115] Run: docker ps -a --format {{.Names}} I0507 22:40:53.510838 672811 cli_runner.go:115] Run: docker volume create kubenet-20210507224052-391940 --label name.minikube.sigs.k8s.io=kubenet-20210507224052-391940 --label created_by.minikube.sigs.k8s.io=true I0507 22:40:53.559162 672811 oci.go:102] Successfully created a docker volume kubenet-20210507224052-391940 I0507 22:40:53.559286 672811 cli_runner.go:115] Run: docker run --rm --name kubenet-20210507224052-391940-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20210507224052-391940 --entrypoint /usr/bin/test -v kubenet-20210507224052-391940:/var gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -d /var/lib I0507 22:40:54.328995 672811 oci.go:106] Successfully prepared a docker volume kubenet-20210507224052-391940 W0507 22:40:54.329069 672811 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted. W0507 22:40:54.329079 672811 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. I0507 22:40:54.329130 672811 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'" I0507 22:40:54.329143 672811 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd I0507 22:40:54.329178 672811 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4 I0507 22:40:54.329192 672811 kic.go:179] Starting extracting preloaded images to volume ... 
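kic.go announces "extracting preloaded images to volume" here, and the next Run line shows how: a throwaway container mounts the tarball read-only next to the named volume and untars with lz4. A sketch of the same invocation with the names from the log:

package main

import (
	"fmt"
	"os/exec"
)

func extractPreload(tarball, volume, image string) error {
	// Equivalent to the logged `docker run --rm --entrypoint /usr/bin/tar ...`.
	out, err := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(extractPreload(
		"preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4",
		"kubenet-20210507224052-391940",
		"gcr.io/k8s-minikube/kicbase:v0.0.22"))
}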
I0507 22:40:54.329240 672811 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20210507224052-391940:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir I0507 22:40:54.427070 672811 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-20210507224052-391940 --name kubenet-20210507224052-391940 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20210507224052-391940 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-20210507224052-391940 --network kubenet-20210507224052-391940 --ip 192.168.58.2 --volume kubenet-20210507224052-391940:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e I0507 22:40:55.043077 672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Running}} I0507 22:40:55.107025 672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Status}} I0507 22:40:55.165720 672811 cli_runner.go:115] Run: docker exec kubenet-20210507224052-391940 stat /var/lib/dpkg/alternatives/iptables I0507 22:40:55.317730 672811 oci.go:278] the created container "kubenet-20210507224052-391940" has a running status. I0507 22:40:55.317785 672811 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa... I0507 22:40:55.465459 672811 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0507 22:40:55.874845 672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Status}} I0507 22:40:55.926608 672811 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0507 22:40:55.926628 672811 kic_runner.go:115] Args: [docker exec --privileged kubenet-20210507224052-391940 chown docker:docker /home/docker/.ssh/authorized_keys] W0507 22:41:01.540230 668555 out.go:424] no arguments passed for " - Generating certificates and keys ..." - returning raw string W0507 22:41:01.540268 668555 out.go:424] no arguments passed for " - Generating certificates and keys ..." - returning raw string I0507 22:41:01.541677 668555 out.go:197] - Generating certificates and keys ... W0507 22:41:01.543231 668555 out.go:424] no arguments passed for " - Booting up control plane ..." - returning raw string W0507 22:41:01.543251 668555 out.go:424] no arguments passed for " - Booting up control plane ..." - returning raw string I0507 22:41:01.544669 668555 out.go:197] - Booting up control plane ... 
W0507 22:41:01.545900 668555 out.go:424] no arguments passed for " - Configuring RBAC rules ..." - returning raw string W0507 22:41:01.545925 668555 out.go:424] no arguments passed for " - Configuring RBAC rules ..." - returning raw string I0507 22:41:01.547547 668555 out.go:197] - Configuring RBAC rules ... I0507 22:41:01.550103 668555 cni.go:93] Creating CNI manager for "bridge" I0507 22:40:57.511204 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:58.010592 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:58.510979 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:59.010657 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:59.511181 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:00.010935 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:00.510644 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:01.010844 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:01.510822 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:02.010401 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:40:59.038214 672811 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20210507224052-391940:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir: (4.708855042s) I0507 22:40:59.038245 672811 kic.go:188] duration metric: took 4.709051 seconds to extract preloaded images to volume I0507 22:40:59.038321 672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Status}} I0507 22:40:59.081058 672811 machine.go:88] provisioning docker machine ... 
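The repeated "kubectl get sa default" records above are a 500ms polling loop: the RBAC bootstrap cannot proceed until the default service account exists. A standalone sketch of the same loop (in the log the command runs over SSH inside the node; here it runs locally):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command(kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig).Run() == nil {
			return nil // service account exists; cluster-admin binding can be created
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	fmt.Println(waitForDefaultSA("/var/lib/minikube/binaries/v1.20.2/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute))
}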
I0507 22:40:59.081096 672811 ubuntu.go:169] provisioning hostname "kubenet-20210507224052-391940"
I0507 22:40:59.081153 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940
I0507 22:40:59.119701 672811 main.go:128] libmachine: Using SSH client type: native
I0507 22:40:59.119896 672811 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802720] 0x8026e0 <nil> [] 0s} 127.0.0.1 33326 <nil> <nil>}
I0507 22:40:59.119916 672811 main.go:128] libmachine: About to run SSH command:
sudo hostname kubenet-20210507224052-391940 && echo "kubenet-20210507224052-391940" | sudo tee /etc/hostname
I0507 22:40:59.251144 672811 main.go:128] libmachine: SSH cmd err, output: <nil>: kubenet-20210507224052-391940
I0507 22:40:59.251212 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940
I0507 22:40:59.290133 672811 main.go:128] libmachine: Using SSH client type: native
I0507 22:40:59.290316 672811 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802720] 0x8026e0 <nil> [] 0s} 127.0.0.1 33326 <nil> <nil>}
I0507 22:40:59.290356 672811 main.go:128] libmachine: About to run SSH command:
if ! grep -xq '.*\skubenet-20210507224052-391940' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-20210507224052-391940/g' /etc/hosts;
  else
    echo '127.0.1.1 kubenet-20210507224052-391940' | sudo tee -a /etc/hosts;
  fi
fi
I0507 22:40:59.403817 672811 main.go:128] libmachine: SSH cmd err, output: <nil>:
I0507 22:40:59.403851 672811 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube}
I0507 22:40:59.403874 672811 ubuntu.go:177] setting up certificates
I0507 22:40:59.403887 672811 provision.go:83] configureAuth start
I0507 22:40:59.403966 672811 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20210507224052-391940
I0507 22:40:59.447361 672811 provision.go:137] copyHostCerts
I0507 22:40:59.447423 672811 exec_runner.go:145] found
/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.pem, removing ... I0507 22:40:59.447435 672811 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.pem I0507 22:40:59.447489 672811 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.pem (1078 bytes) I0507 22:40:59.447657 672811 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cert.pem, removing ... I0507 22:40:59.447677 672811 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cert.pem I0507 22:40:59.447707 672811 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cert.pem (1123 bytes) I0507 22:40:59.447795 672811 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/key.pem, removing ... I0507 22:40:59.447805 672811 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/key.pem I0507 22:40:59.447843 672811 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/key.pem (1675 bytes) I0507 22:40:59.447895 672811 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca-key.pem org=jenkins.kubenet-20210507224052-391940 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube kubenet-20210507224052-391940] I0507 22:40:59.852941 672811 provision.go:165] copyRemoteCerts I0507 22:40:59.853012 672811 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0507 22:40:59.853074 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940 I0507 22:40:59.896021 672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker} I0507 22:40:59.978856 672811 ssh_runner.go:316] scp 
/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes) I0507 22:40:59.995226 672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes) I0507 22:41:00.011913 672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I0507 22:41:00.027622 672811 provision.go:86] duration metric: configureAuth took 623.719966ms I0507 22:41:00.027644 672811 ubuntu.go:193] setting minikube options for container-runtime I0507 22:41:00.027808 672811 machine.go:91] provisioned docker machine in 946.729843ms I0507 22:41:00.027821 672811 client.go:171] LocalClient.Create took 6.776670216s I0507 22:41:00.027841 672811 start.go:168] duration metric: libmachine.API.Create for "kubenet-20210507224052-391940" took 6.776727752s I0507 22:41:00.027849 672811 start.go:267] post-start starting for "kubenet-20210507224052-391940" (driver="docker") I0507 22:41:00.027855 672811 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0507 22:41:00.027897 672811 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0507 22:41:00.027946 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940 I0507 22:41:00.075235 672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker} I0507 22:41:00.166719 672811 ssh_runner.go:149] Run: cat /etc/os-release I0507 22:41:00.169362 672811 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0507 22:41:00.169391 672811 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0507 22:41:00.169407 672811 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0507 22:41:00.169419 672811 info.go:137] Remote host: Ubuntu 20.04.2 LTS I0507 22:41:00.169433 672811 filesync.go:118] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/addons for local assets ... I0507 22:41:00.169503 672811 filesync.go:118] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/files for local assets ... 
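The two "Scanning ... for local assets" records above walk the host-side addons and files directories for anything to sync into the machine. A sketch of such a scan with filepath.WalkDir (Go 1.16+, matching the binary's build); the copy step is omitted:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

func scanAssets(root string) ([]string, error) {
	var assets []string
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if !d.IsDir() {
			assets = append(assets, path) // candidate file to push into the node
		}
		return nil
	})
	return assets, err
}

func main() {
	for _, dir := range []string{".minikube/addons", ".minikube/files"} {
		files, err := scanAssets(dir)
		fmt.Println(dir, files, err)
	}
}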
I0507 22:41:00.169623 672811 start.go:270] post-start completed in 141.767397ms I0507 22:41:00.169915 672811 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20210507224052-391940 I0507 22:41:00.210576 672811 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/config.json ... I0507 22:41:00.210783 672811 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0507 22:41:00.210835 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940 I0507 22:41:00.247709 672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker} I0507 22:41:00.327660 672811 start.go:129] duration metric: createHost completed in 7.07952255s I0507 22:41:00.327686 672811 start.go:80] releasing machines lock for "kubenet-20210507224052-391940", held for 7.079675771s I0507 22:41:00.327754 672811 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20210507224052-391940 I0507 22:41:00.367166 672811 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/ I0507 22:41:00.367177 672811 ssh_runner.go:149] Run: systemctl --version I0507 22:41:00.367228 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940 I0507 22:41:00.367248 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940 I0507 22:41:00.408143 672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker} I0507 22:41:00.408527 672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker} I0507 22:41:00.487269 672811 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio I0507 22:41:00.537919 672811 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker I0507 22:41:00.547201 672811 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket I0507 22:41:00.564793 672811 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service I0507 22:41:00.574599 672811 ssh_runner.go:149] Run: sudo systemctl disable docker.socket I0507 22:41:00.638969 672811 ssh_runner.go:149] Run: sudo systemctl mask docker.service I0507 22:41:00.698972 672811 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker I0507 22:41:00.709630 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock image-endpoint: unix:///run/containerd/containerd.sock " | sudo tee /etc/crictl.yaml" I0507 22:41:00.723315 672811 ssh_runner.go:149] 
Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKCltncnBjXQogIGFkZHJlc3MgPSAiL3J1bi9jb250YWluZXJkL2NvbnRhaW5lcmQuc29jayIKICB1aWQgPSAwCiAgZ2lkID0gMAogIG1heF9yZWN2X21lc3NhZ2Vfc2l6ZSA9IDE2Nzc3MjE2CiAgbWF4X3NlbmRfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKCltkZWJ1Z10KICBhZGRyZXNzID0gIiIKICB1aWQgPSAwCiAgZ2lkID0gMAogIGxldmVsID0gIiIKClttZXRyaWNzXQogIGFkZHJlc3MgPSAiIgogIGdycGNfaGlzdG9ncmFtID0gZmFsc2UKCltjZ3JvdXBdCiAgcGF0aCA9ICIiCgpbcGx1Z2luc10KICBbcGx1Z2lucy5jZ3JvdXBzXQogICAgbm9fcHJvbWV0aGV1cyA9IGZhbHNlCiAgW3BsdWdpbnMuY3JpXQogICAgc3RyZWFtX3NlcnZlcl9hZGRyZXNzID0gIiIKICAgIHN0cmVhbV9zZXJ2ZXJfcG9ydCA9ICIxMDAxMCIKICAgIGVuYWJsZV9zZWxpbnV4ID0gZmFsc2UKICAgIHNhbmRib3hfaW1hZ2UgPSAiazhzLmdjci5pby9wYXVzZTozLjIiCiAgICBzdGF0c19jb2xsZWN0X3BlcmlvZCA9IDEwCiAgICBzeXN0ZW1kX2Nncm91cCA9IGZhbHNlCiAgICBlbmFibGVfdGxzX3N0cmVhbWluZyA9IGZhbHNlCiAgICBtYXhfY29udGFpbmVyX2xvZ19saW5lX3NpemUgPSAxNjM4NAogICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmRdCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgbm9fcGl2b3QgPSB0cnVlCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW50aW1lLnYxLmxpbnV4IgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMubGludXhdCiAgICBzaGltID0gImNvbnRhaW5lcmQtc2hpbSIKICAgIHJ1bnRpbWUgPSAicnVuYyIKICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICBub19zaGltID0gZmFsc2UKICAgIHNoaW1fZGVidWcgPSBmYWxzZQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml" I0507 22:41:00.737455 672811 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables I0507 22:41:00.744876 672811 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. 
error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255 stdout: stderr: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory I0507 22:41:00.744933 672811 ssh_runner.go:149] Run: sudo modprobe br_netfilter I0507 22:41:00.753834 672811 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward" I0507 22:41:00.761420 672811 ssh_runner.go:149] Run: sudo systemctl daemon-reload I0507 22:41:00.827226 672811 ssh_runner.go:149] Run: sudo systemctl restart containerd I0507 22:41:00.892592 672811 start.go:368] Will wait 60s for socket path /run/containerd/containerd.sock I0507 22:41:00.892666 672811 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock I0507 22:41:00.896809 672811 start.go:393] Will wait 60s for crictl version I0507 22:41:00.896869 672811 ssh_runner.go:149] Run: sudo crictl version I0507 22:41:00.922312 672811 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1 stdout: stderr: time="2021-05-07T22:41:00Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet" I0507 22:41:01.551679 668555 out.go:170] * Configuring bridge CNI (Container Networking Interface) ... I0507 22:41:01.551743 668555 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d I0507 22:41:01.559637 668555 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes) I0507 22:41:01.574185 668555 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0507 22:41:01.574279 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:01.574292 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.20.0 minikube.k8s.io/commit=c31bd57f93d45726e4bd30607374f8c720e70e95 minikube.k8s.io/name=bridge-20210507224024-391940 minikube.k8s.io/updated_at=2021_05_07T22_41_01_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:01.648795 668555 ops.go:34] apiserver oom_adj: -16 I0507 22:41:01.648816 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:02.464408 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:02.964413 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:03.464807 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:03.964629 668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:02.510806 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:03.010976 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:04.084323 666230 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.073303817s) 
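The crictl probe above fails with "server is not initialized yet" and retry.go reschedules it after a randomized delay (11.04s here). A minimal retry helper in the same spirit; the fixed-delay policy is a simplification of the real backoff:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func retry(attempts int, delay time.Duration, f func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		time.Sleep(delay) // real code grows and jitters this interval
	}
	return fmt.Errorf("still failing after %d attempts: %v", attempts, err)
}

func main() {
	err := retry(5, 10*time.Second, func() error {
		// containerd needs a moment after restart before crictl can answer.
		return exec.Command("sudo", "crictl", "version").Run()
	})
	fmt.Println(err)
}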
I0507 22:41:04.510723 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:05.011061 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:05.510498 666230 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:05.650019 666230 kubeadm.go:977] duration metric: took 15.318226491s to wait for elevateKubeSystemPrivileges. I0507 22:41:05.650056 666230 kubeadm.go:383] StartCluster complete in 39.827577068s I0507 22:41:05.650087 666230 settings.go:142] acquiring lock: {Name:mkbc12d45ea1a96167acb2e3885011008220fc1e Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:41:05.650199 666230 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig I0507 22:41:05.652403 666230 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig: {Name:mk53c460e0a047a0806c95f27e36717b9bf9f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:41:06.169003 666230 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kindnet-20210507224017-391940" rescaled to 1 I0507 22:41:06.169047 666230 start.go:201] Will wait 5m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true} W0507 22:41:06.169072 666230 out.go:424] no arguments passed for "* Verifying Kubernetes components...\n" - returning raw string W0507 22:41:06.169089 666230 out.go:424] no arguments passed for "* Verifying Kubernetes components...\n" - returning raw string I0507 22:41:06.171052 666230 out.go:170] * Verifying Kubernetes components... I0507 22:41:06.169091 666230 addons.go:328] enableAddons start: toEnable=map[], additional=[] I0507 22:41:06.171170 666230 addons.go:55] Setting storage-provisioner=true in profile "kindnet-20210507224017-391940" I0507 22:41:06.171203 666230 addons.go:131] Setting addon storage-provisioner=true in "kindnet-20210507224017-391940" I0507 22:41:06.169369 666230 cache.go:108] acquiring lock: {Name:mk66f3ed174a0fda2e3a4fd9a235ceef9553bc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:} W0507 22:41:06.171217 666230 addons.go:140] addon storage-provisioner should already be in state true I0507 22:41:06.171236 666230 host.go:66] Checking if "kindnet-20210507224017-391940" exists ... 
I0507 22:41:06.171243 666230 addons.go:55] Setting default-storageclass=true in profile "kindnet-20210507224017-391940" I0507 22:41:06.171257 666230 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-20210507224017-391940" I0507 22:41:06.171107 666230 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet I0507 22:41:06.171307 666230 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 exists I0507 22:41:06.171446 666230 cache.go:97] cache image "minikube-local-cache-test:functional-20210507215728-391940" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940" took 2.079162ms I0507 22:41:06.171484 666230 cache.go:81] save to tar file minikube-local-cache-test:functional-20210507215728-391940 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 succeeded I0507 22:41:06.171529 666230 cache.go:88] Successfully saved all images to host disk. I0507 22:41:06.171641 666230 cli_runner.go:115] Run: docker container inspect kindnet-20210507224017-391940 --format={{.State.Status}} I0507 22:41:06.171838 666230 cli_runner.go:115] Run: docker container inspect kindnet-20210507224017-391940 --format={{.State.Status}} I0507 22:41:06.171967 666230 cli_runner.go:115] Run: docker container inspect kindnet-20210507224017-391940 --format={{.State.Status}} I0507 22:41:06.186678 666230 node_ready.go:35] waiting up to 5m0s for node "kindnet-20210507224017-391940" to be "Ready" ... 
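node_ready.go above waits up to 5m0s for the node's Ready condition. A sketch of the same wait using kubectl and JSONPath instead of client-go; the node name follows the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func nodeReady(node string) bool {
	out, err := exec.Command("kubectl", "get", "node", node, "-o",
		`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	return err == nil && string(out) == "True"
}

func main() {
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		if nodeReady("kindnet-20210507224017-391940") {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for Ready")
}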
I0507 22:41:05.865244 634245 system_pods.go:86] 7 kube-system pods found I0507 22:41:05.865283 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:41:05.865291 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:41:05.865296 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:41:05.865301 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:41:05.865306 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:41:05.865310 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:41:05.865314 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:41:05.865327 634245 retry.go:31] will retry after 40.022161579s: missing components: kube-dns I0507 22:41:06.223381 666230 out.go:170] - Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0507 22:41:06.223643 666230 addons.go:261] installing /etc/kubernetes/addons/storage-provisioner.yaml I0507 22:41:06.223663 666230 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0507 22:41:06.223745 666230 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210507224017-391940 I0507 22:41:06.229977 666230 ssh_runner.go:149] Run: sudo crictl images --output json I0507 22:41:06.230023 666230 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210507224017-391940 I0507 22:41:06.235459 666230 addons.go:131] Setting addon default-storageclass=true in "kindnet-20210507224017-391940" W0507 22:41:06.235494 666230 addons.go:140] addon default-storageclass should already be in state true I0507 22:41:06.235598 666230 host.go:66] Checking if "kindnet-20210507224017-391940" exists ... 
I0507 22:41:06.236177  666230 cli_runner.go:115] Run: docker container inspect kindnet-20210507224017-391940 --format={{.State.Status}}
I0507 22:41:06.277133  666230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33316 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kindnet-20210507224017-391940/id_rsa Username:docker}
I0507 22:41:06.278233  666230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33316 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kindnet-20210507224017-391940/id_rsa Username:docker}
I0507 22:41:06.285800  666230 addons.go:261] installing /etc/kubernetes/addons/storageclass.yaml
I0507 22:41:06.285827  666230 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0507 22:41:06.285885  666230 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210507224017-391940
I0507 22:41:06.325091  666230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33316 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kindnet-20210507224017-391940/id_rsa Username:docker}
I0507 22:41:06.368756  666230 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0507 22:41:06.380026  666230 containerd.go:567] couldn't find preloaded image for "docker.io/minikube-local-cache-test:functional-20210507215728-391940". assuming images are not preloaded.
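The nested-index template in the `docker container inspect -f` calls above resolves which host port Docker published for the container's 22/tcp, which is where the `new ssh client` lines get 127.0.0.1:33316 from. The same lookup in isolation, container name again from this log:

    # Print the host port mapped to container port 22/tcp, matching the
    # cli_runner template above.
    docker container inspect kindnet-20210507224017-391940 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'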
I0507 22:41:06.380052  666230 cache_images.go:78] LoadImages start: [minikube-local-cache-test:functional-20210507215728-391940]
I0507 22:41:06.380113  666230 image.go:320] retrieving image: minikube-local-cache-test:functional-20210507215728-391940
I0507 22:41:06.380159  666230 image.go:326] checking repository: index.docker.io/library/minikube-local-cache-test
I0507 22:41:06.436838  666230 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
W0507 22:41:06.616626  666230 image.go:333] remote: HEAD https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: unexpected status code 401 Unauthorized (HEAD responses have no body, use GET for details)
I0507 22:41:06.616662  666230 image.go:334] short name: minikube-local-cache-test:functional-20210507215728-391940
I0507 22:41:06.617454  666230 image.go:362] daemon lookup for minikube-local-cache-test:functional-20210507215728-391940: Error response from daemon: reference does not exist
I0507 22:41:06.680519  666230 out.go:170] * Enabled addons: storage-provisioner, default-storageclass
I0507 22:41:06.680546  666230 addons.go:330] enableAddons completed in 511.473083ms
W0507 22:41:06.761518  666230 image.go:372] authn lookup for minikube-local-cache-test:functional-20210507215728-391940 (trying anon): GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
I0507 22:41:06.906593  666230 image.go:376] remote lookup for minikube-local-cache-test:functional-20210507215728-391940: GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
I0507 22:41:06.906638  666230 image.go:98] error retrieve Image minikube-local-cache-test:functional-20210507215728-391940 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
I0507 22:41:06.906666  666230 cache_images.go:106] "minikube-local-cache-test:functional-20210507215728-391940" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210507215728-391940
I0507 22:41:06.906687  666230 cache_images.go:271] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940
I0507 22:41:06.906785  666230 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940
I0507 22:41:06.909940  666230 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940': No such file or directory
I0507 22:41:06.909963  666230 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940 (5120 bytes)
I0507 22:41:06.927085  666230 containerd.go:267] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940
I0507 22:41:06.927131  666230 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940
I0507 22:41:07.049626  666230 cache_images.go:293] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 from cache
I0507 22:41:07.049657  666230 cache_images.go:113] Successfully loaded all cached images
I0507 22:41:07.049665  666230 cache_images.go:82] LoadImages completed in 669.60307ms
I0507 22:41:07.049676  666230 cache_images.go:252] succeeded pushing to: kindnet-20210507224017-391940
I0507 22:41:07.049680  666230 cache_images.go:253] failed pushing to:
I0507 22:41:04.464364  668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:04.964435  668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:05.464578  668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:05.964851  668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:06.464494  668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:06.964340  668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:07.464110  668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:07.964700  668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:08.464459  668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:08.964649  668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:11.971610  672811 ssh_runner.go:149] Run: sudo crictl version
I0507 22:41:12.042782  672811 start.go:402] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.4.4
RuntimeApiVersion: v1alpha2
I0507 22:41:12.042850  672811 ssh_runner.go:149] Run: containerd --version
I0507 22:41:08.193527  666230 node_ready.go:58] node "kindnet-20210507224017-391940" has status "Ready":"False"
I0507 22:41:10.195106  666230 node_ready.go:58] node "kindnet-20210507224017-391940" has status "Ready":"False"
I0507 22:41:12.066863  672811 out.go:170] * Preparing Kubernetes v1.20.2 on containerd 1.4.4 ...
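The sequence above is minikube's image-cache fallback: the registry HEAD/GET probes for the local-only tag fail with 401, the daemon lookup fails because no Docker daemon inside the node holds the tag, so the tarball is scp'd from the host cache and imported directly into containerd's k8s.io namespace. A sketch of that final step, assuming the tarball is already on the node at the path this log shows:

    # Import a saved image tarball into the containerd namespace the kubelet's
    # CRI reads from, as the ssh_runner line above does, then list images.
    sudo ctr -n=k8s.io images import \
      /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940
    sudo crictl images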
I0507 22:41:12.066969  672811 cli_runner.go:115] Run: docker network inspect kubenet-20210507224052-391940 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0507 22:41:12.105280  672811 ssh_runner.go:149] Run: grep 192.168.58.1 host.minikube.internal$ /etc/hosts
I0507 22:41:12.108647  672811 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0507 22:41:12.117548  672811 localpath.go:92] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/client.crt -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/client.crt
I0507 22:41:12.117660  672811 localpath.go:117] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/client.key -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/client.key
I0507 22:41:12.117779  672811 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
I0507 22:41:12.117805  672811 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4
I0507 22:41:12.117839  672811 ssh_runner.go:149] Run: sudo crictl images --output json
I0507 22:41:12.139675  672811 containerd.go:571] all images are preloaded for containerd runtime.
I0507 22:41:12.139694  672811 containerd.go:481] Images already preloaded, skipping extraction
I0507 22:41:12.139737  672811 ssh_runner.go:149] Run: sudo crictl images --output json
I0507 22:41:12.160780  672811 containerd.go:571] all images are preloaded for containerd runtime.
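The `/bin/bash -c` one-liner above updates /etc/hosts idempotently: it filters out any stale `host.minikube.internal` line, appends the current mapping, and copies the temp file back over /etc/hosts. Spelled out, the pattern is roughly:

    # Re-point host.minikube.internal at the gateway IP from this log; grep -v
    # removes any earlier entry so repeated runs do not accumulate duplicates.
    {
      grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.58.1\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts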
I0507 22:41:12.160799  672811 cache_images.go:74] Images are preloaded, skipping loading
I0507 22:41:12.160836  672811 ssh_runner.go:149] Run: sudo crictl info
I0507 22:41:12.181806  672811 cni.go:89] network plugin configured as "kubenet", returning disabled
I0507 22:41:12.181827  672811 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0507 22:41:12.181838  672811 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-20210507224052-391940 NodeName:kubenet-20210507224052-391940 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0507 22:41:12.181948  672811 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.58.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "kubenet-20210507224052-391940"
  kubeletExtraArgs:
    node-ip: 192.168.58.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
I0507 22:41:12.182024  672811 kubeadm.go:901] kubelet [Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubenet-20210507224052-391940 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=kubenet --node-ip=192.168.58.2 --pod-cidr=10.244.0.0/16 --runtime-request-timeout=15m

[Install]
 config: {KubernetesVersion:v1.20.2 ClusterName:kubenet-20210507224052-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0507 22:41:12.182065  672811 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
I0507 22:41:12.190005  672811 binaries.go:44] Found k8s binaries, skipping transfer
I0507 22:41:12.190053  672811 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0507 22:41:12.196524  672811 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (572 bytes)
I0507 22:41:12.208112  672811 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0507 22:41:12.219787  672811 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1868 bytes)
I0507 22:41:12.234762  672811 ssh_runner.go:149] Run: grep 192.168.58.2 control-plane.minikube.internal$ /etc/hosts
I0507 22:41:12.238162  672811 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0507 22:41:12.247659  672811 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940 for IP: 192.168.58.2
I0507 22:41:12.247732  672811 certs.go:171] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.key
I0507 22:41:12.247761  672811 certs.go:171] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/proxy-client-ca.key
I0507 22:41:12.247864  672811 certs.go:282] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/client.key
I0507 22:41:12.247917  672811 certs.go:286] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.key.cee25041
I0507 22:41:12.247934  672811 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0507 22:41:12.324253  672811 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.crt.cee25041 ...
I0507 22:41:12.324281  672811 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.crt.cee25041: {Name:mk17a9fadc289bdd993cd89cf73f7e42a11db951 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0507 22:41:12.324441  672811 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.key.cee25041 ...
I0507 22:41:12.324457  672811 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.key.cee25041: {Name:mk4f1b00ef492dfe1e4e53295535dd818e4b8776 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0507 22:41:12.324556  672811 certs.go:297] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.crt
I0507 22:41:12.324624  672811 certs.go:301] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.key
I0507 22:41:12.324690  672811 certs.go:286] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.key
I0507 22:41:12.324704  672811 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.crt with IP's: []
I0507 22:41:12.462717  672811 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.crt ...
I0507 22:41:12.462741  672811 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.crt: {Name:mk3b377543768468ecb5ae6c2ac7692fea50fd9a Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0507 22:41:12.462892  672811 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.key ...
I0507 22:41:12.462906  672811 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.key: {Name:mkfe92c524b556c20012d8a91c085ac4bc69ff7a Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0507 22:41:12.463104  672811 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/391940.pem (1338 bytes)
W0507 22:41:12.463147  672811 certs.go:357] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/391940_empty.pem, impossibly tiny 0 bytes
I0507 22:41:12.463164  672811 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca-key.pem (1679 bytes)
I0507 22:41:12.463201  672811 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem (1078 bytes)
I0507 22:41:12.463240  672811 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem (1123 bytes)
I0507 22:41:12.463276  672811 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/key.pem (1675 bytes)
I0507 22:41:12.464251  672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0507 22:41:12.481245  672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0507 22:41:12.549535  672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0507 22:41:12.567323  672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0507 22:41:12.586572  672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0507 22:41:12.605164  672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0507 22:41:12.622859  672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0507 22:41:12.639720  672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0507 22:41:12.659044  672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/391940.pem --> /usr/share/ca-certificates/391940.pem (1338 bytes)
I0507 22:41:12.677161  672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0507 22:41:12.693007  672811 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0507 22:41:12.704857  672811 ssh_runner.go:149] Run: openssl version
I0507 22:41:12.709921  672811 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0507 22:41:12.717584  672811 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0507 22:41:12.720534  672811 certs.go:402] hashing: -rw-r--r-- 1 root root 1111 May 7 21:50 /usr/share/ca-certificates/minikubeCA.pem
I0507 22:41:12.720581  672811 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0507 22:41:12.725167  672811 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0507 22:41:12.731804  672811 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391940.pem && ln -fs /usr/share/ca-certificates/391940.pem /etc/ssl/certs/391940.pem"
I0507 22:41:12.738661  672811 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/391940.pem
I0507 22:41:12.741622  672811 certs.go:402] hashing: -rw-r--r-- 1 root root 1338 May 7 21:57 /usr/share/ca-certificates/391940.pem
I0507 22:41:12.741658  672811 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391940.pem
I0507 22:41:12.746205  672811 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391940.pem /etc/ssl/certs/51391683.0"
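The openssl/ln pairs above install CAs using OpenSSL's hashed-directory convention: a certificate becomes trusted when it is linked under /etc/ssl/certs as `<subject-hash>.0`, where the hash comes from `openssl x509 -hash`. The same steps for the minikubeCA cert from this log:

    # Link a CA into /etc/ssl/certs under its subject-hash name so
    # OpenSSL-based clients trust it; the hash is b5213941 in this log.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"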
I0507 22:41:12.752891  672811 kubeadm.go:381] StartCluster: {Name:kubenet-20210507224052-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:kubenet-20210507224052-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0507 22:41:12.752980  672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0507 22:41:12.753082  672811 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0507 22:41:12.775624  672811 cri.go:76] found id: ""
I0507 22:41:12.775678  672811 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0507 22:41:12.781880  672811 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0507 22:41:12.788117  672811 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0507 22:41:12.788153  672811 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0507 22:41:12.794718  672811 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0507 22:41:12.794764  672811 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0507 22:41:09.464030  668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:09.963853  668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:10.463804  668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:10.964279  668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:11.463864  668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:11.964595  668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:12.464551  668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:12.964405  668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:13.464692  668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:13.964375  668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:12.695319  666230 node_ready.go:58] node "kindnet-20210507224017-391940" has status "Ready":"False"
I0507 22:41:15.194523  666230 node_ready.go:58] node "kindnet-20210507224017-391940" has status "Ready":"False"
I0507 22:41:15.694690  666230 node_ready.go:49] node "kindnet-20210507224017-391940" has status "Ready":"True"
I0507 22:41:15.694718  666230 node_ready.go:38] duration metric: took 9.507990643s waiting for node "kindnet-20210507224017-391940" to be "Ready" ...
I0507 22:41:15.694731  666230 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0507 22:41:15.704478  666230 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-z2xcz" in "kube-system" namespace to be "Ready" ...
I0507 22:41:14.464494  668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:14.964316  668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:15.464752  668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:15.964324  668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:16.464135  668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:16.964715  668555 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0507 22:41:17.057780  668555 kubeadm.go:977] duration metric: took 15.483566448s to wait for elevateKubeSystemPrivileges.
I0507 22:41:17.057810  668555 kubeadm.go:383] StartCluster complete in 32.505316012s
I0507 22:41:17.057831  668555 settings.go:142] acquiring lock: {Name:mkbc12d45ea1a96167acb2e3885011008220fc1e Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0507 22:41:17.057916  668555 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig
I0507 22:41:17.059650  668555 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig: {Name:mk53c460e0a047a0806c95f27e36717b9bf9f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0507 22:41:17.576617  668555 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "bridge-20210507224024-391940" rescaled to 1
I0507 22:41:17.576672  668555 start.go:201] Will wait 5m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
W0507 22:41:17.576706  668555 out.go:424] no arguments passed for "* Verifying Kubernetes components...\n" - returning raw string
W0507 22:41:17.576725  668555 out.go:424] no arguments passed for "* Verifying Kubernetes components...\n" - returning raw string
I0507 22:41:17.579110  668555 out.go:170] * Verifying Kubernetes components...
I0507 22:41:17.576756  668555 addons.go:328] enableAddons start: toEnable=map[], additional=[]
I0507 22:41:17.579179  668555 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0507 22:41:17.579196  668555 addons.go:55] Setting storage-provisioner=true in profile "bridge-20210507224024-391940"
I0507 22:41:17.579213  668555 addons.go:131] Setting addon storage-provisioner=true in "bridge-20210507224024-391940"
I0507 22:41:17.576939  668555 cache.go:108] acquiring lock: {Name:mk66f3ed174a0fda2e3a4fd9a235ceef9553bc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0507 22:41:17.579238  668555 addons.go:55] Setting default-storageclass=true in profile "bridge-20210507224024-391940"
W0507 22:41:17.579269  668555 addons.go:140] addon storage-provisioner should already be in state true
I0507 22:41:17.579270  668555 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-20210507224024-391940"
I0507 22:41:17.579289  668555 host.go:66] Checking if "bridge-20210507224024-391940" exists ...
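The long run of `kubectl get sa default` commands that finishes above ("took 15.483566448s to wait for elevateKubeSystemPrivileges") is a poll for the `default` ServiceAccount, which only exists once kube-controller-manager has bootstrapped the namespace; the half-second spacing of the timestamps shows the retry interval. A bash rendering of that wait, using the paths from this log:

    # Block until the "default" ServiceAccount exists, retrying every 0.5s
    # as the ssh_runner lines above do.
    until sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done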
I0507 22:41:17.579309  668555 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 exists
I0507 22:41:17.579331  668555 cache.go:97] cache image "minikube-local-cache-test:functional-20210507215728-391940" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940" took 2.400306ms
I0507 22:41:17.579346  668555 cache.go:81] save to tar file minikube-local-cache-test:functional-20210507215728-391940 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 succeeded
I0507 22:41:17.579357  668555 cache.go:88] Successfully saved all images to host disk.
I0507 22:41:17.579696  668555 cli_runner.go:115] Run: docker container inspect bridge-20210507224024-391940 --format={{.State.Status}}
I0507 22:41:17.579814  668555 cli_runner.go:115] Run: docker container inspect bridge-20210507224024-391940 --format={{.State.Status}}
I0507 22:41:17.579898  668555 cli_runner.go:115] Run: docker container inspect bridge-20210507224024-391940 --format={{.State.Status}}
I0507 22:41:17.600318  668555 node_ready.go:35] waiting up to 5m0s for node "bridge-20210507224024-391940" to be "Ready" ...
I0507 22:41:17.608388  668555 node_ready.go:49] node "bridge-20210507224024-391940" has status "Ready":"True"
I0507 22:41:17.608416  668555 node_ready.go:38] duration metric: took 8.056898ms waiting for node "bridge-20210507224024-391940" to be "Ready" ...
I0507 22:41:17.608430  668555 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0507 22:41:17.638420  668555 ssh_runner.go:149] Run: sudo crictl images --output json
I0507 22:41:17.638467  668555 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210507224024-391940
I0507 22:41:17.638881  668555 addons.go:131] Setting addon default-storageclass=true in "bridge-20210507224024-391940"
W0507 22:41:17.638901  668555 addons.go:140] addon default-storageclass should already be in state true
I0507 22:41:17.638916  668555 host.go:66] Checking if "bridge-20210507224024-391940" exists ...
I0507 22:41:17.639424  668555 cli_runner.go:115] Run: docker container inspect bridge-20210507224024-391940 --format={{.State.Status}}
I0507 22:41:17.640971  668555 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace to be "Ready" ...
I0507 22:41:17.662334  668555 out.go:170]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0507 22:41:17.662472  668555 addons.go:261] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0507 22:41:17.662489  668555 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0507 22:41:17.662544  668555 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210507224024-391940
I0507 22:41:17.692641  668555 addons.go:261] installing /etc/kubernetes/addons/storageclass.yaml
I0507 22:41:17.692676  668555 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0507 22:41:17.692747  668555 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210507224024-391940
I0507 22:41:17.699422  668555 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33321 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/bridge-20210507224024-391940/id_rsa Username:docker}
I0507 22:41:17.713302  668555 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33321 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/bridge-20210507224024-391940/id_rsa Username:docker}
I0507 22:41:17.747151  668555 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33321 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/bridge-20210507224024-391940/id_rsa Username:docker}
I0507 22:41:17.803615  668555 containerd.go:567] couldn't find preloaded image for "docker.io/minikube-local-cache-test:functional-20210507215728-391940". assuming images are not preloaded.
I0507 22:41:17.803642  668555 cache_images.go:78] LoadImages start: [minikube-local-cache-test:functional-20210507215728-391940]
I0507 22:41:17.803698  668555 image.go:320] retrieving image: minikube-local-cache-test:functional-20210507215728-391940
I0507 22:41:17.803719  668555 image.go:326] checking repository: index.docker.io/library/minikube-local-cache-test
I0507 22:41:17.805339  668555 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0507 22:41:17.835804  668555 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
W0507 22:41:18.046229  668555 image.go:333] remote: HEAD https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: unexpected status code 401 Unauthorized (HEAD responses have no body, use GET for details)
I0507 22:41:18.046283  668555 image.go:334] short name: minikube-local-cache-test:functional-20210507215728-391940
I0507 22:41:18.047249  668555 image.go:362] daemon lookup for minikube-local-cache-test:functional-20210507215728-391940: Error response from daemon: reference does not exist
I0507 22:41:18.181662  668555 out.go:170] * Enabled addons: storage-provisioner, default-storageclass
I0507 22:41:18.181686  668555 addons.go:330] enableAddons completed in 604.943ms
W0507 22:41:18.200715  668555 image.go:372] authn lookup for minikube-local-cache-test:functional-20210507215728-391940 (trying anon): GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
I0507 22:41:18.346656  668555 image.go:376] remote lookup for minikube-local-cache-test:functional-20210507215728-391940: GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
I0507 22:41:18.346695  668555 image.go:98] error retrieve Image minikube-local-cache-test:functional-20210507215728-391940 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
I0507 22:41:18.346721  668555 cache_images.go:106] "minikube-local-cache-test:functional-20210507215728-391940" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210507215728-391940
I0507 22:41:18.346753  668555 cache_images.go:271] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940
I0507 22:41:18.346827  668555 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940
I0507 22:41:18.350194  668555 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940': No such file or directory
I0507 22:41:18.350225  668555 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940 (5120 bytes)
I0507 22:41:18.367355  668555 containerd.go:267] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940
I0507 22:41:18.367400  668555 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940
I0507 22:41:18.477367  668555 cache_images.go:293] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 from cache
I0507 22:41:18.477395  668555 cache_images.go:113] Successfully loaded all cached images
I0507 22:41:18.477403  668555 cache_images.go:82] LoadImages completed in 673.752742ms
I0507 22:41:18.477413  668555 cache_images.go:252] succeeded pushing to: bridge-20210507224024-391940
I0507 22:41:18.477417  668555 cache_images.go:253] failed pushing to:
I0507 22:41:17.719681  666230 pod_ready.go:102] pod "coredns-74ff55c5b-z2xcz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-07 22:41:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime: InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
I0507 22:41:19.721425  666230 pod_ready.go:102] pod "coredns-74ff55c5b-z2xcz" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:19.658657  668555 pod_ready.go:102] pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:21.659073  668555 pod_ready.go:102] pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:23.659297  668555 pod_ready.go:102] pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:22.220967  666230 pod_ready.go:102] pod "coredns-74ff55c5b-z2xcz" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:24.221731  666230 pod_ready.go:102] pod "coredns-74ff55c5b-z2xcz" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:26.720571  666230 pod_ready.go:102] pod "coredns-74ff55c5b-z2xcz" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:25.659482  668555 pod_ready.go:102] pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace has status "Ready":"False"
I0507 22:41:27.659817  668555 pod_ready.go:102] pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace has status "Ready":"False"
W0507 22:41:29.502582  672811 out.go:424] no arguments passed for " - Generating certificates and keys ..." - returning raw string
W0507 22:41:29.502611  672811 out.go:424] no arguments passed for " - Generating certificates and keys ..." - returning raw string
I0507 22:41:29.504051  672811 out.go:197]   - Generating certificates and keys ...
W0507 22:41:29.505275 672811 out.go:424] no arguments passed for " - Booting up control plane ..." - returning raw string W0507 22:41:29.505298 672811 out.go:424] no arguments passed for " - Booting up control plane ..." - returning raw string I0507 22:41:29.506842 672811 out.go:197] - Booting up control plane ... W0507 22:41:29.507828 672811 out.go:424] no arguments passed for " - Configuring RBAC rules ..." - returning raw string W0507 22:41:29.507851 672811 out.go:424] no arguments passed for " - Configuring RBAC rules ..." - returning raw string I0507 22:41:28.721492 666230 pod_ready.go:102] pod "coredns-74ff55c5b-z2xcz" in "kube-system" namespace has status "Ready":"False" I0507 22:41:29.720830 666230 pod_ready.go:92] pod "coredns-74ff55c5b-z2xcz" in "kube-system" namespace has status "Ready":"True" I0507 22:41:29.720855 666230 pod_ready.go:81] duration metric: took 14.016352375s waiting for pod "coredns-74ff55c5b-z2xcz" in "kube-system" namespace to be "Ready" ... I0507 22:41:29.720864 666230 pod_ready.go:78] waiting up to 5m0s for pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace to be "Ready" ... I0507 22:41:29.509381 672811 out.go:197] - Configuring RBAC rules ... I0507 22:41:29.511102 672811 cni.go:89] network plugin configured as "kubenet", returning disabled I0507 22:41:29.511144 672811 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0507 22:41:29.511202 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:29.511202 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.20.0 minikube.k8s.io/commit=c31bd57f93d45726e4bd30607374f8c720e70e95 minikube.k8s.io/name=kubenet-20210507224052-391940 minikube.k8s.io/updated_at=2021_05_07T22_41_29_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:33.297935 666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False" I0507 22:41:36.412887 666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False" I0507 22:41:36.452032 672811 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (6.940764477s) I0507 22:41:36.452084 672811 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.20.0 minikube.k8s.io/commit=c31bd57f93d45726e4bd30607374f8c720e70e95 minikube.k8s.io/name=kubenet-20210507224052-391940 minikube.k8s.io/updated_at=2021_05_07T22_41_29_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (6.940777019s) I0507 22:41:36.452120 672811 ssh_runner.go:189] Completed: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": (6.9409632s) I0507 22:41:36.452130 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:36.452135 672811 ops.go:34] apiserver oom_adj: -16 I0507 22:41:37.133448 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:37.634119 672811 ssh_runner.go:149] Run: sudo 
/var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:36.415857 668555 pod_ready.go:102] pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace has status "Ready":"False" I0507 22:41:38.659853 668555 pod_ready.go:102] pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace has status "Ready":"False" I0507 22:41:38.730877 666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False" I0507 22:41:41.230841 666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False" I0507 22:41:38.134120 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:38.633786 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:39.133311 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:39.633524 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:40.134249 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:40.633580 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:41.133642 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:41.633685 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:42.133984 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:42.633334 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:41.158513 668555 pod_ready.go:102] pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace has status "Ready":"False" I0507 22:41:43.158956 668555 pod_ready.go:102] pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace has status "Ready":"False" I0507 22:41:45.895251 634245 system_pods.go:86] 7 kube-system pods found I0507 22:41:45.895286 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:41:45.895294 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:41:45.895300 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:41:45.895306 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:41:45.895313 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:41:45.895319 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:41:45.895324 634245 system_pods.go:89] 
"storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:41:45.895351 634245 retry.go:31] will retry after 37.970670965s: missing components: kube-dns I0507 22:41:43.730266 666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False" I0507 22:41:45.730693 666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False" I0507 22:41:43.133263 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:43.634078 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:44.133696 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:44.633466 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:45.133959 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:45.633643 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:46.133797 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:46.634042 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:47.133888 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:47.634155 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:45.159699 668555 pod_ready.go:102] pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace has status "Ready":"False" I0507 22:41:47.658653 668555 pod_ready.go:102] pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace has status "Ready":"False" I0507 22:41:48.133838 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:48.633584 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:49.134019 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:49.633305 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:50.133859 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:50.634269 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:51.133941 672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig I0507 22:41:51.198474 672811 kubeadm.go:977] duration metric: took 21.687320394s to wait for elevateKubeSystemPrivileges. 
I0507 22:41:51.198504 672811 kubeadm.go:383] StartCluster complete in 38.445622759s I0507 22:41:51.198526 672811 settings.go:142] acquiring lock: {Name:mkbc12d45ea1a96167acb2e3885011008220fc1e Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:41:51.198634 672811 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig I0507 22:41:51.201538 672811 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig: {Name:mk53c460e0a047a0806c95f27e36717b9bf9f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0507 22:41:51.718321 672811 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubenet-20210507224052-391940" rescaled to 1 I0507 22:41:51.718369 672811 start.go:201] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true} W0507 22:41:51.718401 672811 out.go:424] no arguments passed for "* Verifying Kubernetes components...\n" - returning raw string W0507 22:41:51.718425 672811 out.go:424] no arguments passed for "* Verifying Kubernetes components...\n" - returning raw string I0507 22:41:51.720457 672811 out.go:170] * Verifying Kubernetes components... I0507 22:41:51.720524 672811 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet I0507 22:41:51.718471 672811 addons.go:328] enableAddons start: toEnable=map[], additional=[] I0507 22:41:51.720595 672811 addons.go:55] Setting storage-provisioner=true in profile "kubenet-20210507224052-391940" I0507 22:41:51.718753 672811 cache.go:108] acquiring lock: {Name:mk66f3ed174a0fda2e3a4fd9a235ceef9553bc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0507 22:41:51.720621 672811 addons.go:55] Setting default-storageclass=true in profile "kubenet-20210507224052-391940" I0507 22:41:51.720638 672811 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubenet-20210507224052-391940" I0507 22:41:51.720676 672811 addons.go:131] Setting addon storage-provisioner=true in "kubenet-20210507224052-391940" W0507 22:41:51.720694 672811 addons.go:140] addon storage-provisioner should already be in state true I0507 22:41:51.720700 672811 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 exists I0507 22:41:51.720716 672811 host.go:66] Checking if "kubenet-20210507224052-391940" exists ... I0507 22:41:51.720721 672811 cache.go:97] cache image "minikube-local-cache-test:functional-20210507215728-391940" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940" took 1.979419ms I0507 22:41:51.720737 672811 cache.go:81] save to tar file minikube-local-cache-test:functional-20210507215728-391940 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 succeeded I0507 22:41:51.720751 672811 cache.go:88] Successfully saved all images to host disk. 
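The "acquiring lock: {... Delay:500ms Timeout:1m0s ...}" entries show that the kubeconfig rewrite is serialized behind a named lock so the four parallel test profiles do not clobber one another. A toy equivalent with the same delay/timeout semantics, using an O_EXCL lock file (the real lock implementation differs; this only illustrates the retry shape):

package main

import (
	"fmt"
	"os"
	"time"
)

// acquireLock retries every `delay` until `timeout` elapses, mirroring
// the Delay/Timeout fields printed in the log.
func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireLock("/tmp/kubeconfig.lock", 500*time.Millisecond, time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	// Safe to rewrite the shared kubeconfig while the lock is held.
	_ = os.WriteFile("/tmp/kubeconfig", []byte("apiVersion: v1\nkind: Config\n"), 0o600)
}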
I0507 22:41:51.721038 672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Status}} I0507 22:41:51.721675 672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Status}} I0507 22:41:51.721703 672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Status}} I0507 22:41:51.740835 672811 node_ready.go:35] waiting up to 5m0s for node "kubenet-20210507224052-391940" to be "Ready" ... I0507 22:41:51.745077 672811 node_ready.go:49] node "kubenet-20210507224052-391940" has status "Ready":"True" I0507 22:41:51.745099 672811 node_ready.go:38] duration metric: took 4.233416ms waiting for node "kubenet-20210507224052-391940" to be "Ready" ... I0507 22:41:51.745110 672811 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0507 22:41:51.756875 672811 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace to be "Ready" ... I0507 22:41:51.783594 672811 addons.go:131] Setting addon default-storageclass=true in "kubenet-20210507224052-391940" W0507 22:41:51.783619 672811 addons.go:140] addon default-storageclass should already be in state true I0507 22:41:51.783637 672811 host.go:66] Checking if "kubenet-20210507224052-391940" exists ... I0507 22:41:51.784146 672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Status}} I0507 22:41:51.788959 672811 ssh_runner.go:149] Run: sudo crictl images --output json I0507 22:41:51.789007 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940 I0507 22:41:48.229607 666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False" I0507 22:41:50.230260 666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False" I0507 22:41:51.792078 672811 out.go:170] - Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0507 22:41:51.792203 672811 addons.go:261] installing /etc/kubernetes/addons/storage-provisioner.yaml I0507 22:41:51.792220 672811 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0507 22:41:51.792278 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940 I0507 22:41:51.832922 672811 addons.go:261] installing /etc/kubernetes/addons/storageclass.yaml I0507 22:41:51.832950 672811 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0507 22:41:51.833006 672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940 I0507 22:41:51.843625 672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker} I0507 22:41:51.848427 672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 
SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker} I0507 22:41:51.881629 672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker} I0507 22:41:51.945739 672811 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0507 22:41:51.956581 672811 containerd.go:567] couldn't find preloaded image for "docker.io/minikube-local-cache-test:functional-20210507215728-391940". assuming images are not preloaded. I0507 22:41:51.956604 672811 cache_images.go:78] LoadImages start: [minikube-local-cache-test:functional-20210507215728-391940] I0507 22:41:51.956650 672811 image.go:320] retrieving image: minikube-local-cache-test:functional-20210507215728-391940 I0507 22:41:51.956698 672811 image.go:326] checking repository: index.docker.io/library/minikube-local-cache-test I0507 22:41:51.972691 672811 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml W0507 22:41:52.183545 672811 image.go:333] remote: HEAD https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: unexpected status code 401 Unauthorized (HEAD responses have no body, use GET for details) I0507 22:41:52.183604 672811 image.go:334] short name: minikube-local-cache-test:functional-20210507215728-391940 I0507 22:41:52.184655 672811 image.go:362] daemon lookup for minikube-local-cache-test:functional-20210507215728-391940: Error response from daemon: reference does not exist W0507 22:41:52.330654 672811 image.go:372] authn lookup for minikube-local-cache-test:functional-20210507215728-391940 (trying anon): GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] I0507 22:41:52.347904 672811 out.go:170] * Enabled addons: storage-provisioner, default-storageclass I0507 22:41:52.347937 672811 addons.go:330] enableAddons completed in 629.490714ms I0507 22:41:52.481399 672811 image.go:376] remote lookup for minikube-local-cache-test:functional-20210507215728-391940: GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] I0507 22:41:52.481439 672811 image.go:98] error retrieve Image minikube-local-cache-test:functional-20210507215728-391940 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] I0507 22:41:52.481470 672811 cache_images.go:106] "minikube-local-cache-test:functional-20210507215728-391940" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210507215728-391940 I0507 22:41:52.481491 672811 cache_images.go:271] Loading image from: 
/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 I0507 22:41:52.481574 672811 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940 I0507 22:41:52.485013 672811 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940: Process exited with status 1 stdout: stderr: stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940': No such file or directory I0507 22:41:52.485041 672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940 (5120 bytes) I0507 22:41:52.502314 672811 containerd.go:267] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940 I0507 22:41:52.502378 672811 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940 I0507 22:41:52.612982 672811 cache_images.go:293] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 from cache I0507 22:41:52.613030 672811 cache_images.go:113] Successfully loaded all cached images I0507 22:41:52.613038 672811 cache_images.go:82] LoadImages completed in 656.425091ms I0507 22:41:52.613050 672811 cache_images.go:252] succeeded pushing to: kubenet-20210507224052-391940 I0507 22:41:52.613059 672811 cache_images.go:253] failed pushing to: I0507 22:41:49.658707 668555 pod_ready.go:102] pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace has status "Ready":"False" I0507 22:41:51.659079 668555 pod_ready.go:102] pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace has status "Ready":"False" I0507 22:41:53.659071 668555 pod_ready.go:92] pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace has status "Ready":"True" I0507 22:41:53.659096 668555 pod_ready.go:81] duration metric: took 36.018090678s waiting for pod "coredns-74ff55c5b-kn5r7" in "kube-system" namespace to be "Ready" ... I0507 22:41:53.659109 668555 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-wdngz" in "kube-system" namespace to be "Ready" ... 
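The cache_images sequence above follows a three-step shape: stat the tarball on the node, scp it over if the existence check fails, then import it into containerd's k8s.io namespace with "ctr -n=k8s.io images import". A rough local sketch (the paths are shortened placeholders, and the real flow runs every command through the SSH runner rather than locally):

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage checks for the image tarball, copies it into place if
// absent, and imports it into containerd's k8s.io namespace.
func loadCachedImage(src, dst string) error {
	if err := exec.Command("stat", dst).Run(); err != nil {
		// Existence check failed, so transfer the tarball (cp stands in for scp).
		if err := exec.Command("cp", src, dst).Run(); err != nil {
			return fmt.Errorf("transfer: %w", err)
		}
	}
	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", dst).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ctr import: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Placeholder paths; the log uses the full cache paths under .minikube.
	fmt.Println(loadCachedImage("/tmp/cache/example-image.tar",
		"/var/lib/minikube/images/example-image.tar"))
}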
I0507 22:41:52.231130 666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False" I0507 22:41:54.730057 666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False" I0507 22:41:56.730962 666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False" I0507 22:41:53.769362 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:41:55.770429 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:41:55.669145 668555 pod_ready.go:102] pod "coredns-74ff55c5b-wdngz" in "kube-system" namespace has status "Ready":"False" I0507 22:41:57.669440 668555 pod_ready.go:102] pod "coredns-74ff55c5b-wdngz" in "kube-system" namespace has status "Ready":"False" I0507 22:41:59.230483 666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False" I0507 22:42:01.732105 666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False" I0507 22:41:58.269524 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:00.269630 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:02.269711 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:00.168541 668555 pod_ready.go:102] pod "coredns-74ff55c5b-wdngz" in "kube-system" namespace has status "Ready":"False" I0507 22:42:02.169036 668555 pod_ready.go:102] pod "coredns-74ff55c5b-wdngz" in "kube-system" namespace has status "Ready":"False" I0507 22:42:04.230071 666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False" I0507 22:42:06.731034 666230 pod_ready.go:102] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"False" I0507 22:42:04.774753 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:07.270739 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:04.169120 668555 pod_ready.go:102] pod "coredns-74ff55c5b-wdngz" in "kube-system" namespace has status "Ready":"False" I0507 22:42:06.668156 668555 pod_ready.go:102] pod "coredns-74ff55c5b-wdngz" in "kube-system" namespace has status "Ready":"False" I0507 22:42:08.668465 668555 pod_ready.go:102] pod "coredns-74ff55c5b-wdngz" in "kube-system" namespace has status "Ready":"False" I0507 22:42:08.229893 666230 pod_ready.go:92] pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"True" I0507 22:42:08.229920 666230 pod_ready.go:81] duration metric: took 38.509048055s waiting for pod "etcd-kindnet-20210507224017-391940" in "kube-system" namespace to be "Ready" ... I0507 22:42:08.229938 666230 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-kindnet-20210507224017-391940" in "kube-system" namespace to be "Ready" ... 
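Each pod_ready.go line above is one round of the same check: fetch the pod and inspect its Ready condition. Expressed with client-go (the kubeconfig path is taken from the log; the podReady helper is illustrative, not the harness's actual code):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ready, err := podReady(context.Background(), client, "kube-system", "coredns-74ff55c5b-g7c7z")
	fmt.Println(ready, err)
}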
I0507 22:42:08.234246 666230 pod_ready.go:92] pod "kube-apiserver-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"True" I0507 22:42:08.234268 666230 pod_ready.go:81] duration metric: took 4.3205ms waiting for pod "kube-apiserver-kindnet-20210507224017-391940" in "kube-system" namespace to be "Ready" ... I0507 22:42:08.234279 666230 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-kindnet-20210507224017-391940" in "kube-system" namespace to be "Ready" ... I0507 22:42:08.237969 666230 pod_ready.go:92] pod "kube-controller-manager-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"True" I0507 22:42:08.237985 666230 pod_ready.go:81] duration metric: took 3.697005ms waiting for pod "kube-controller-manager-kindnet-20210507224017-391940" in "kube-system" namespace to be "Ready" ... I0507 22:42:08.237994 666230 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-gdfcx" in "kube-system" namespace to be "Ready" ... I0507 22:42:08.241518 666230 pod_ready.go:92] pod "kube-proxy-gdfcx" in "kube-system" namespace has status "Ready":"True" I0507 22:42:08.241532 666230 pod_ready.go:81] duration metric: took 3.532307ms waiting for pod "kube-proxy-gdfcx" in "kube-system" namespace to be "Ready" ... I0507 22:42:08.241539 666230 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-kindnet-20210507224017-391940" in "kube-system" namespace to be "Ready" ... I0507 22:42:10.249511 666230 pod_ready.go:92] pod "kube-scheduler-kindnet-20210507224017-391940" in "kube-system" namespace has status "Ready":"True" I0507 22:42:10.249539 666230 pod_ready.go:81] duration metric: took 2.007992228s waiting for pod "kube-scheduler-kindnet-20210507224017-391940" in "kube-system" namespace to be "Ready" ... I0507 22:42:10.249553 666230 pod_ready.go:38] duration metric: took 54.554803875s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0507 22:42:10.249590 666230 api_server.go:50] waiting for apiserver process to appear ... 
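"waiting for apiserver process to appear" is answered below by "sudo pgrep -xnf kube-apiserver.*minikube.*": -f matches against the full command line, -x requires an exact match, and -n picks the newest matching process. A small sketch of that probe as a poll:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// apiserverPID polls pgrep until a kube-apiserver process shows up,
// returning the newest matching PID.
func apiserverPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		time.Sleep(time.Second)
	}
	return "", fmt.Errorf("kube-apiserver did not appear within %v", timeout)
}

func main() {
	pid, err := apiserverPID(30 * time.Second)
	fmt.Println(pid, err)
}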
I0507 22:42:10.249619 666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]} I0507 22:42:10.249671 666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver I0507 22:42:10.274262 666230 cri.go:76] found id: "b45772b9a6fadede8c01a34c8caf8013fd6c303c5be1a66647b115bf220cff8f" I0507 22:42:10.274292 666230 cri.go:76] found id: "" I0507 22:42:10.274299 666230 logs.go:270] 1 containers: [b45772b9a6fadede8c01a34c8caf8013fd6c303c5be1a66647b115bf220cff8f] I0507 22:42:10.274342 666230 ssh_runner.go:149] Run: which crictl I0507 22:42:10.277437 666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]} I0507 22:42:10.277503 666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd I0507 22:42:10.298859 666230 cri.go:76] found id: "28685b137bd0ac8b838f8bb968a2916ea470d86585b6758f1957a1bf1a4a6460" I0507 22:42:10.298880 666230 cri.go:76] found id: "" I0507 22:42:10.298888 666230 logs.go:270] 1 containers: [28685b137bd0ac8b838f8bb968a2916ea470d86585b6758f1957a1bf1a4a6460] I0507 22:42:10.298941 666230 ssh_runner.go:149] Run: which crictl I0507 22:42:10.301705 666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]} I0507 22:42:10.301780 666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns I0507 22:42:10.322564 666230 cri.go:76] found id: "e6ca7efce382fb7e29ea2ca646e99d72bcf5f597be9c8548c36cacf707017618" I0507 22:42:10.322584 666230 cri.go:76] found id: "" I0507 22:42:10.322592 666230 logs.go:270] 1 containers: [e6ca7efce382fb7e29ea2ca646e99d72bcf5f597be9c8548c36cacf707017618] I0507 22:42:10.322631 666230 ssh_runner.go:149] Run: which crictl I0507 22:42:10.325329 666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]} I0507 22:42:10.325371 666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler I0507 22:42:10.345651 666230 cri.go:76] found id: "2afafd7cebc98f4052923c5de24cbb19b630d6720460c8528aa2b62dcbf90d39" I0507 22:42:10.345673 666230 cri.go:76] found id: "" I0507 22:42:10.345680 666230 logs.go:270] 1 containers: [2afafd7cebc98f4052923c5de24cbb19b630d6720460c8528aa2b62dcbf90d39] I0507 22:42:10.345712 666230 ssh_runner.go:149] Run: which crictl I0507 22:42:10.348402 666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]} I0507 22:42:10.348458 666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy I0507 22:42:10.368647 666230 cri.go:76] found id: "aafd3d52e8a99a65b993f44a0f2e0bee169d568a5b1e64c7788d6f3cca79a9f3" I0507 22:42:10.368666 666230 cri.go:76] found id: "" I0507 22:42:10.368671 666230 logs.go:270] 1 containers: [aafd3d52e8a99a65b993f44a0f2e0bee169d568a5b1e64c7788d6f3cca79a9f3] I0507 22:42:10.368702 666230 ssh_runner.go:149] Run: which crictl I0507 22:42:10.371259 666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]} I0507 22:42:10.371312 666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard I0507 22:42:10.391161 666230 cri.go:76] found id: "" I0507 22:42:10.391182 666230 logs.go:270] 0 containers: [] W0507 22:42:10.391192 666230 logs.go:272] No container was found matching "kubernetes-dashboard" I0507 22:42:10.391199 666230 cri.go:41] listing CRI containers in root 
/run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]} I0507 22:42:10.391241 666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner I0507 22:42:10.412101 666230 cri.go:76] found id: "840a1fdad075b9db0cd8c37a67300746e67108f3c525713ad6b60997af3577e9" I0507 22:42:10.412122 666230 cri.go:76] found id: "" I0507 22:42:10.412128 666230 logs.go:270] 1 containers: [840a1fdad075b9db0cd8c37a67300746e67108f3c525713ad6b60997af3577e9] I0507 22:42:10.412163 666230 ssh_runner.go:149] Run: which crictl I0507 22:42:10.414725 666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]} I0507 22:42:10.414791 666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager I0507 22:42:10.434644 666230 cri.go:76] found id: "953367e4f9711500cf9c55a2e3a88297fe1ea571b90eb9f6878ea3dcf8e47be5" I0507 22:42:10.434663 666230 cri.go:76] found id: "" I0507 22:42:10.434668 666230 logs.go:270] 1 containers: [953367e4f9711500cf9c55a2e3a88297fe1ea571b90eb9f6878ea3dcf8e47be5] I0507 22:42:10.434700 666230 ssh_runner.go:149] Run: which crictl I0507 22:42:10.437410 666230 logs.go:123] Gathering logs for coredns [e6ca7efce382fb7e29ea2ca646e99d72bcf5f597be9c8548c36cacf707017618] ... I0507 22:42:10.437431 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6ca7efce382fb7e29ea2ca646e99d72bcf5f597be9c8548c36cacf707017618" I0507 22:42:10.458525 666230 logs.go:123] Gathering logs for kube-scheduler [2afafd7cebc98f4052923c5de24cbb19b630d6720460c8528aa2b62dcbf90d39] ... I0507 22:42:10.458559 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2afafd7cebc98f4052923c5de24cbb19b630d6720460c8528aa2b62dcbf90d39" I0507 22:42:10.481718 666230 logs.go:123] Gathering logs for storage-provisioner [840a1fdad075b9db0cd8c37a67300746e67108f3c525713ad6b60997af3577e9] ... I0507 22:42:10.481741 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 840a1fdad075b9db0cd8c37a67300746e67108f3c525713ad6b60997af3577e9" I0507 22:42:10.502806 666230 logs.go:123] Gathering logs for kube-controller-manager [953367e4f9711500cf9c55a2e3a88297fe1ea571b90eb9f6878ea3dcf8e47be5] ... I0507 22:42:10.502827 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 953367e4f9711500cf9c55a2e3a88297fe1ea571b90eb9f6878ea3dcf8e47be5" I0507 22:42:10.544428 666230 logs.go:123] Gathering logs for kubelet ... I0507 22:42:10.544453 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0507 22:42:10.597686 666230 logs.go:123] Gathering logs for dmesg ... I0507 22:42:10.597719 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0507 22:42:10.621630 666230 logs.go:123] Gathering logs for describe nodes ... I0507 22:42:10.621655 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0507 22:42:10.715373 666230 logs.go:123] Gathering logs for etcd [28685b137bd0ac8b838f8bb968a2916ea470d86585b6758f1957a1bf1a4a6460] ... I0507 22:42:10.715412 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28685b137bd0ac8b838f8bb968a2916ea470d86585b6758f1957a1bf1a4a6460" I0507 22:42:10.744332 666230 logs.go:123] Gathering logs for containerd ... 
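The repeating "listing CRI containers ... found id ... Gathering logs for ..." pattern (which continues below) is a two-step fan-out per component: "crictl ps -a --quiet --name=NAME" to resolve container IDs, then "crictl logs --tail 400 ID" for each match. Roughly:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// componentLogs resolves the container IDs for one component and tails
// each container's last 400 log lines, as the harness does above.
func componentLogs(name string) (map[string]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	logs := map[string]string{}
	for _, id := range strings.Fields(string(out)) {
		tail, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return nil, fmt.Errorf("logs for %s: %w", id, err)
		}
		logs[id] = string(tail)
	}
	return logs, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		if l, err := componentLogs(c); err == nil {
			fmt.Printf("%s: %d container(s)\n", c, len(l))
		}
	}
}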
I0507 22:42:10.744360 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400" I0507 22:42:10.782615 666230 logs.go:123] Gathering logs for container status ... I0507 22:42:10.782646 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0507 22:42:10.808422 666230 logs.go:123] Gathering logs for kube-apiserver [b45772b9a6fadede8c01a34c8caf8013fd6c303c5be1a66647b115bf220cff8f] ... I0507 22:42:10.808450 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b45772b9a6fadede8c01a34c8caf8013fd6c303c5be1a66647b115bf220cff8f" I0507 22:42:10.842939 666230 logs.go:123] Gathering logs for kube-proxy [aafd3d52e8a99a65b993f44a0f2e0bee169d568a5b1e64c7788d6f3cca79a9f3] ... I0507 22:42:10.842968 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aafd3d52e8a99a65b993f44a0f2e0bee169d568a5b1e64c7788d6f3cca79a9f3" I0507 22:42:09.770268 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:12.270101 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:10.668771 668555 pod_ready.go:102] pod "coredns-74ff55c5b-wdngz" in "kube-system" namespace has status "Ready":"False" I0507 22:42:13.169561 668555 pod_ready.go:102] pod "coredns-74ff55c5b-wdngz" in "kube-system" namespace has status "Ready":"False" I0507 22:42:13.366885 666230 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0507 22:42:13.388981 666230 api_server.go:70] duration metric: took 1m7.219905852s to wait for apiserver process to appear ... I0507 22:42:13.389006 666230 api_server.go:86] waiting for apiserver healthz status ... 
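The healthz wait announced here resolves a little further down ("https://192.168.76.2:8443/healthz returned 200: ok"). A sketch of such a probe; TLS verification is skipped on the assumption that the probe hits the node IP directly with a cluster-signed certificate not in the system trust store:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver's /healthz endpoint until it returns 200.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: the apiserver cert is self-signed for the cluster.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "returned 200: ok"
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("%s not healthy within %v", url, timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.76.2:8443/healthz", time.Minute))
}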
I0507 22:42:13.389035 666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]} I0507 22:42:13.389087 666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver I0507 22:42:13.411483 666230 cri.go:76] found id: "b45772b9a6fadede8c01a34c8caf8013fd6c303c5be1a66647b115bf220cff8f" I0507 22:42:13.411514 666230 cri.go:76] found id: "" I0507 22:42:13.411526 666230 logs.go:270] 1 containers: [b45772b9a6fadede8c01a34c8caf8013fd6c303c5be1a66647b115bf220cff8f] I0507 22:42:13.411571 666230 ssh_runner.go:149] Run: which crictl I0507 22:42:13.414370 666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]} I0507 22:42:13.414418 666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd I0507 22:42:13.435282 666230 cri.go:76] found id: "28685b137bd0ac8b838f8bb968a2916ea470d86585b6758f1957a1bf1a4a6460" I0507 22:42:13.435303 666230 cri.go:76] found id: "" I0507 22:42:13.435310 666230 logs.go:270] 1 containers: [28685b137bd0ac8b838f8bb968a2916ea470d86585b6758f1957a1bf1a4a6460] I0507 22:42:13.435357 666230 ssh_runner.go:149] Run: which crictl I0507 22:42:13.438094 666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]} I0507 22:42:13.438144 666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns I0507 22:42:13.459295 666230 cri.go:76] found id: "e6ca7efce382fb7e29ea2ca646e99d72bcf5f597be9c8548c36cacf707017618" I0507 22:42:13.459312 666230 cri.go:76] found id: "" I0507 22:42:13.459318 666230 logs.go:270] 1 containers: [e6ca7efce382fb7e29ea2ca646e99d72bcf5f597be9c8548c36cacf707017618] I0507 22:42:13.459351 666230 ssh_runner.go:149] Run: which crictl I0507 22:42:13.462157 666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]} I0507 22:42:13.462204 666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler I0507 22:42:13.482519 666230 cri.go:76] found id: "2afafd7cebc98f4052923c5de24cbb19b630d6720460c8528aa2b62dcbf90d39" I0507 22:42:13.482541 666230 cri.go:76] found id: "" I0507 22:42:13.482548 666230 logs.go:270] 1 containers: [2afafd7cebc98f4052923c5de24cbb19b630d6720460c8528aa2b62dcbf90d39] I0507 22:42:13.482588 666230 ssh_runner.go:149] Run: which crictl I0507 22:42:13.485169 666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]} I0507 22:42:13.485219 666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy I0507 22:42:13.504984 666230 cri.go:76] found id: "aafd3d52e8a99a65b993f44a0f2e0bee169d568a5b1e64c7788d6f3cca79a9f3" I0507 22:42:13.505005 666230 cri.go:76] found id: "" I0507 22:42:13.505013 666230 logs.go:270] 1 containers: [aafd3d52e8a99a65b993f44a0f2e0bee169d568a5b1e64c7788d6f3cca79a9f3] I0507 22:42:13.505051 666230 ssh_runner.go:149] Run: which crictl I0507 22:42:13.507814 666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]} I0507 22:42:13.507868 666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard I0507 22:42:13.528188 666230 cri.go:76] found id: "" I0507 22:42:13.528205 666230 logs.go:270] 0 containers: [] W0507 22:42:13.528211 666230 logs.go:272] No container was found matching "kubernetes-dashboard" I0507 22:42:13.528218 666230 cri.go:41] listing CRI containers in root 
/run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]} I0507 22:42:13.528269 666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner I0507 22:42:13.548898 666230 cri.go:76] found id: "840a1fdad075b9db0cd8c37a67300746e67108f3c525713ad6b60997af3577e9" I0507 22:42:13.548939 666230 cri.go:76] found id: "" I0507 22:42:13.548946 666230 logs.go:270] 1 containers: [840a1fdad075b9db0cd8c37a67300746e67108f3c525713ad6b60997af3577e9] I0507 22:42:13.548982 666230 ssh_runner.go:149] Run: which crictl I0507 22:42:13.551706 666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]} I0507 22:42:13.551780 666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager I0507 22:42:13.572473 666230 cri.go:76] found id: "953367e4f9711500cf9c55a2e3a88297fe1ea571b90eb9f6878ea3dcf8e47be5" I0507 22:42:13.572493 666230 cri.go:76] found id: "" I0507 22:42:13.572503 666230 logs.go:270] 1 containers: [953367e4f9711500cf9c55a2e3a88297fe1ea571b90eb9f6878ea3dcf8e47be5] I0507 22:42:13.572538 666230 ssh_runner.go:149] Run: which crictl I0507 22:42:13.575210 666230 logs.go:123] Gathering logs for kube-scheduler [2afafd7cebc98f4052923c5de24cbb19b630d6720460c8528aa2b62dcbf90d39] ... I0507 22:42:13.575230 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2afafd7cebc98f4052923c5de24cbb19b630d6720460c8528aa2b62dcbf90d39" I0507 22:42:13.599799 666230 logs.go:123] Gathering logs for describe nodes ... I0507 22:42:13.599821 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0507 22:42:13.686130 666230 logs.go:123] Gathering logs for kube-apiserver [b45772b9a6fadede8c01a34c8caf8013fd6c303c5be1a66647b115bf220cff8f] ... I0507 22:42:13.686156 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b45772b9a6fadede8c01a34c8caf8013fd6c303c5be1a66647b115bf220cff8f" I0507 22:42:13.723776 666230 logs.go:123] Gathering logs for coredns [e6ca7efce382fb7e29ea2ca646e99d72bcf5f597be9c8548c36cacf707017618] ... I0507 22:42:13.723804 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6ca7efce382fb7e29ea2ca646e99d72bcf5f597be9c8548c36cacf707017618" I0507 22:42:13.746725 666230 logs.go:123] Gathering logs for kube-proxy [aafd3d52e8a99a65b993f44a0f2e0bee169d568a5b1e64c7788d6f3cca79a9f3] ... I0507 22:42:13.746749 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aafd3d52e8a99a65b993f44a0f2e0bee169d568a5b1e64c7788d6f3cca79a9f3" I0507 22:42:13.772353 666230 logs.go:123] Gathering logs for storage-provisioner [840a1fdad075b9db0cd8c37a67300746e67108f3c525713ad6b60997af3577e9] ... I0507 22:42:13.772379 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 840a1fdad075b9db0cd8c37a67300746e67108f3c525713ad6b60997af3577e9" I0507 22:42:13.795023 666230 logs.go:123] Gathering logs for kube-controller-manager [953367e4f9711500cf9c55a2e3a88297fe1ea571b90eb9f6878ea3dcf8e47be5] ... I0507 22:42:13.795049 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 953367e4f9711500cf9c55a2e3a88297fe1ea571b90eb9f6878ea3dcf8e47be5" I0507 22:42:13.836297 666230 logs.go:123] Gathering logs for containerd ... 
I0507 22:42:13.836322 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400" I0507 22:42:13.869424 666230 logs.go:123] Gathering logs for container status ... I0507 22:42:13.869455 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0507 22:42:13.895634 666230 logs.go:123] Gathering logs for kubelet ... I0507 22:42:13.895659 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0507 22:42:13.949046 666230 logs.go:123] Gathering logs for dmesg ... I0507 22:42:13.949069 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0507 22:42:13.970628 666230 logs.go:123] Gathering logs for etcd [28685b137bd0ac8b838f8bb968a2916ea470d86585b6758f1957a1bf1a4a6460] ... I0507 22:42:13.970651 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28685b137bd0ac8b838f8bb968a2916ea470d86585b6758f1957a1bf1a4a6460" I0507 22:42:16.499411 666230 api_server.go:223] Checking apiserver healthz at https://192.168.76.2:8443/healthz ... I0507 22:42:16.504744 666230 api_server.go:249] https://192.168.76.2:8443/healthz returned 200: ok I0507 22:42:16.505727 666230 api_server.go:139] control plane version: v1.20.2 I0507 22:42:16.505755 666230 api_server.go:129] duration metric: took 3.116741389s to wait for apiserver health ... I0507 22:42:16.505765 666230 system_pods.go:43] waiting for kube-system pods to appear ... I0507 22:42:16.505792 666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]} I0507 22:42:16.505848 666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver I0507 22:42:16.529238 666230 cri.go:76] found id: "b45772b9a6fadede8c01a34c8caf8013fd6c303c5be1a66647b115bf220cff8f" I0507 22:42:16.529260 666230 cri.go:76] found id: "" I0507 22:42:16.529267 666230 logs.go:270] 1 containers: [b45772b9a6fadede8c01a34c8caf8013fd6c303c5be1a66647b115bf220cff8f] I0507 22:42:16.529316 666230 ssh_runner.go:149] Run: which crictl I0507 22:42:16.532427 666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]} I0507 22:42:16.532482 666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd I0507 22:42:16.553627 666230 cri.go:76] found id: "28685b137bd0ac8b838f8bb968a2916ea470d86585b6758f1957a1bf1a4a6460" I0507 22:42:16.553647 666230 cri.go:76] found id: "" I0507 22:42:16.553653 666230 logs.go:270] 1 containers: [28685b137bd0ac8b838f8bb968a2916ea470d86585b6758f1957a1bf1a4a6460] I0507 22:42:16.553704 666230 ssh_runner.go:149] Run: which crictl I0507 22:42:16.556501 666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]} I0507 22:42:16.556558 666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns I0507 22:42:16.577745 666230 cri.go:76] found id: "e6ca7efce382fb7e29ea2ca646e99d72bcf5f597be9c8548c36cacf707017618" I0507 22:42:16.577767 666230 cri.go:76] found id: "" I0507 22:42:16.577774 666230 logs.go:270] 1 containers: [e6ca7efce382fb7e29ea2ca646e99d72bcf5f597be9c8548c36cacf707017618] I0507 22:42:16.577811 666230 ssh_runner.go:149] Run: which crictl I0507 22:42:16.580607 666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]} I0507 22:42:16.580664 666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler I0507 
22:42:16.601257 666230 cri.go:76] found id: "2afafd7cebc98f4052923c5de24cbb19b630d6720460c8528aa2b62dcbf90d39" I0507 22:42:16.601276 666230 cri.go:76] found id: "" I0507 22:42:16.601283 666230 logs.go:270] 1 containers: [2afafd7cebc98f4052923c5de24cbb19b630d6720460c8528aa2b62dcbf90d39] I0507 22:42:16.601322 666230 ssh_runner.go:149] Run: which crictl I0507 22:42:16.604118 666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]} I0507 22:42:16.604179 666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy I0507 22:42:16.625270 666230 cri.go:76] found id: "aafd3d52e8a99a65b993f44a0f2e0bee169d568a5b1e64c7788d6f3cca79a9f3" I0507 22:42:16.625287 666230 cri.go:76] found id: "" I0507 22:42:16.625295 666230 logs.go:270] 1 containers: [aafd3d52e8a99a65b993f44a0f2e0bee169d568a5b1e64c7788d6f3cca79a9f3] I0507 22:42:16.625335 666230 ssh_runner.go:149] Run: which crictl I0507 22:42:16.628041 666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]} I0507 22:42:16.628106 666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard I0507 22:42:16.649884 666230 cri.go:76] found id: "" I0507 22:42:16.649905 666230 logs.go:270] 0 containers: [] W0507 22:42:16.649913 666230 logs.go:272] No container was found matching "kubernetes-dashboard" I0507 22:42:16.649930 666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]} I0507 22:42:16.649977 666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner I0507 22:42:16.674957 666230 cri.go:76] found id: "840a1fdad075b9db0cd8c37a67300746e67108f3c525713ad6b60997af3577e9" I0507 22:42:16.674976 666230 cri.go:76] found id: "" I0507 22:42:16.674983 666230 logs.go:270] 1 containers: [840a1fdad075b9db0cd8c37a67300746e67108f3c525713ad6b60997af3577e9] I0507 22:42:16.675021 666230 ssh_runner.go:149] Run: which crictl I0507 22:42:16.678054 666230 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]} I0507 22:42:16.678109 666230 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager I0507 22:42:16.699657 666230 cri.go:76] found id: "953367e4f9711500cf9c55a2e3a88297fe1ea571b90eb9f6878ea3dcf8e47be5" I0507 22:42:16.699673 666230 cri.go:76] found id: "" I0507 22:42:16.699679 666230 logs.go:270] 1 containers: [953367e4f9711500cf9c55a2e3a88297fe1ea571b90eb9f6878ea3dcf8e47be5] I0507 22:42:16.699723 666230 ssh_runner.go:149] Run: which crictl I0507 22:42:16.702335 666230 logs.go:123] Gathering logs for etcd [28685b137bd0ac8b838f8bb968a2916ea470d86585b6758f1957a1bf1a4a6460] ... I0507 22:42:16.702360 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28685b137bd0ac8b838f8bb968a2916ea470d86585b6758f1957a1bf1a4a6460" I0507 22:42:16.730493 666230 logs.go:123] Gathering logs for kube-proxy [aafd3d52e8a99a65b993f44a0f2e0bee169d568a5b1e64c7788d6f3cca79a9f3] ... I0507 22:42:16.730528 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aafd3d52e8a99a65b993f44a0f2e0bee169d568a5b1e64c7788d6f3cca79a9f3" I0507 22:42:16.758194 666230 logs.go:123] Gathering logs for kube-controller-manager [953367e4f9711500cf9c55a2e3a88297fe1ea571b90eb9f6878ea3dcf8e47be5] ... 
I0507 22:42:16.758218 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 953367e4f9711500cf9c55a2e3a88297fe1ea571b90eb9f6878ea3dcf8e47be5" I0507 22:42:16.804178 666230 logs.go:123] Gathering logs for container status ... I0507 22:42:16.804206 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0507 22:42:16.829372 666230 logs.go:123] Gathering logs for dmesg ... I0507 22:42:16.829402 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0507 22:42:16.851983 666230 logs.go:123] Gathering logs for describe nodes ... I0507 22:42:16.852012 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0507 22:42:16.951387 666230 logs.go:123] Gathering logs for kube-apiserver [b45772b9a6fadede8c01a34c8caf8013fd6c303c5be1a66647b115bf220cff8f] ... I0507 22:42:16.951415 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b45772b9a6fadede8c01a34c8caf8013fd6c303c5be1a66647b115bf220cff8f" I0507 22:42:16.990994 666230 logs.go:123] Gathering logs for coredns [e6ca7efce382fb7e29ea2ca646e99d72bcf5f597be9c8548c36cacf707017618] ... I0507 22:42:16.991027 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6ca7efce382fb7e29ea2ca646e99d72bcf5f597be9c8548c36cacf707017618" I0507 22:42:17.013386 666230 logs.go:123] Gathering logs for kube-scheduler [2afafd7cebc98f4052923c5de24cbb19b630d6720460c8528aa2b62dcbf90d39] ... I0507 22:42:17.013418 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2afafd7cebc98f4052923c5de24cbb19b630d6720460c8528aa2b62dcbf90d39" I0507 22:42:17.038584 666230 logs.go:123] Gathering logs for storage-provisioner [840a1fdad075b9db0cd8c37a67300746e67108f3c525713ad6b60997af3577e9] ... I0507 22:42:17.038612 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 840a1fdad075b9db0cd8c37a67300746e67108f3c525713ad6b60997af3577e9" I0507 22:42:17.060650 666230 logs.go:123] Gathering logs for containerd ... I0507 22:42:17.060673 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400" I0507 22:42:17.093614 666230 logs.go:123] Gathering logs for kubelet ... I0507 22:42:17.093640 666230 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0507 22:42:14.769932 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:17.269374 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:15.669185 668555 pod_ready.go:102] pod "coredns-74ff55c5b-wdngz" in "kube-system" namespace has status "Ready":"False" I0507 22:42:17.166937 668555 pod_ready.go:97] error getting pod "coredns-74ff55c5b-wdngz" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-wdngz" not found I0507 22:42:17.166970 668555 pod_ready.go:81] duration metric: took 23.507854097s waiting for pod "coredns-74ff55c5b-wdngz" in "kube-system" namespace to be "Ready" ... E0507 22:42:17.166982 668555 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-74ff55c5b-wdngz" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-wdngz" not found I0507 22:42:17.166991 668555 pod_ready.go:78] waiting up to 5m0s for pod "etcd-bridge-20210507224024-391940" in "kube-system" namespace to be "Ready" ... 
I0507 22:42:19.658416 666230 system_pods.go:59] 8 kube-system pods found I0507 22:42:19.658462 666230 system_pods.go:61] "coredns-74ff55c5b-z2xcz" [32f270e2-76b8-461c-b5ad-4a27412fdfc0] Running I0507 22:42:19.658468 666230 system_pods.go:61] "etcd-kindnet-20210507224017-391940" [8aa1fb09-6fc9-49e5-bde4-381bd5c8b572] Running I0507 22:42:19.658473 666230 system_pods.go:61] "kindnet-q67jp" [fa4108a6-8fc0-4ba5-ba81-ea32d753a85a] Running I0507 22:42:19.658478 666230 system_pods.go:61] "kube-apiserver-kindnet-20210507224017-391940" [40d5124a-6495-4040-9c07-a81af5d89ccb] Running I0507 22:42:19.658493 666230 system_pods.go:61] "kube-controller-manager-kindnet-20210507224017-391940" [9525d6cc-6900-471f-bd5b-7d5bc17f7ddc] Running I0507 22:42:19.658501 666230 system_pods.go:61] "kube-proxy-gdfcx" [8a5c1984-a141-4ab0-ae51-fd74fda2c5db] Running I0507 22:42:19.658506 666230 system_pods.go:61] "kube-scheduler-kindnet-20210507224017-391940" [daffa333-07f9-4c17-9430-fb63e656f748] Running I0507 22:42:19.658512 666230 system_pods.go:61] "storage-provisioner" [efd5252a-5fd4-481b-9795-a34a2030d342] Running I0507 22:42:19.658517 666230 system_pods.go:74] duration metric: took 3.152746713s to wait for pod list to return data ... I0507 22:42:19.658527 666230 default_sa.go:34] waiting for default service account to be created ... I0507 22:42:19.660861 666230 default_sa.go:45] found service account: "default" I0507 22:42:19.660881 666230 default_sa.go:55] duration metric: took 2.3459ms for default service account to be created ... I0507 22:42:19.660890 666230 system_pods.go:116] waiting for k8s-apps to be running ... I0507 22:42:19.665119 666230 system_pods.go:86] 8 kube-system pods found I0507 22:42:19.665143 666230 system_pods.go:89] "coredns-74ff55c5b-z2xcz" [32f270e2-76b8-461c-b5ad-4a27412fdfc0] Running I0507 22:42:19.665149 666230 system_pods.go:89] "etcd-kindnet-20210507224017-391940" [8aa1fb09-6fc9-49e5-bde4-381bd5c8b572] Running I0507 22:42:19.665155 666230 system_pods.go:89] "kindnet-q67jp" [fa4108a6-8fc0-4ba5-ba81-ea32d753a85a] Running I0507 22:42:19.665162 666230 system_pods.go:89] "kube-apiserver-kindnet-20210507224017-391940" [40d5124a-6495-4040-9c07-a81af5d89ccb] Running I0507 22:42:19.665169 666230 system_pods.go:89] "kube-controller-manager-kindnet-20210507224017-391940" [9525d6cc-6900-471f-bd5b-7d5bc17f7ddc] Running I0507 22:42:19.665174 666230 system_pods.go:89] "kube-proxy-gdfcx" [8a5c1984-a141-4ab0-ae51-fd74fda2c5db] Running I0507 22:42:19.665181 666230 system_pods.go:89] "kube-scheduler-kindnet-20210507224017-391940" [daffa333-07f9-4c17-9430-fb63e656f748] Running I0507 22:42:19.665191 666230 system_pods.go:89] "storage-provisioner" [efd5252a-5fd4-481b-9795-a34a2030d342] Running I0507 22:42:19.665197 666230 system_pods.go:126] duration metric: took 4.302486ms to wait for k8s-apps to be running ... I0507 22:42:19.665207 666230 system_svc.go:44] waiting for kubelet service to be running .... I0507 22:42:19.665246 666230 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet I0507 22:42:19.675849 666230 system_svc.go:56] duration metric: took 10.634792ms WaitForService to wait for kubelet. I0507 22:42:19.675872 666230 kubeadm.go:538] duration metric: took 1m13.506802153s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ... I0507 22:42:19.675900 666230 node_conditions.go:102] verifying NodePressure condition ... 
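The kubelet probe above ("systemctl is-active --quiet ...") is a pure exit-status check: is-active exits 0 only when the unit is active, so no output parsing is needed. A sketch using the standard single-unit form (the log's invocation carries an extra "service" token, dropped here):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// unitActive reports whether a systemd unit is active; --quiet suppresses
// output so the exit status is the whole answer.
func unitActive(unit string) bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	start := time.Now()
	fmt.Printf("kubelet active: %v (took %s)\n", unitActive("kubelet"), time.Since(start))
}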
I0507 22:42:19.679056 666230 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki I0507 22:42:19.679087 666230 node_conditions.go:123] node cpu capacity is 8 I0507 22:42:19.679106 666230 node_conditions.go:105] duration metric: took 3.19959ms to run NodePressure ... I0507 22:42:19.679119 666230 start.go:206] waiting for startup goroutines ... I0507 22:42:19.723166 666230 start.go:460] kubectl: 1.20.5, cluster: 1.20.2 (minor skew: 0) I0507 22:42:19.725682 666230 out.go:170] * Done! kubectl is now configured to use "kindnet-20210507224017-391940" cluster and "default" namespace by default I0507 22:42:19.770111 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:22.269528 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:19.176854 668555 pod_ready.go:102] pod "etcd-bridge-20210507224024-391940" in "kube-system" namespace has status "Ready":"False" I0507 22:42:21.177400 668555 pod_ready.go:102] pod "etcd-bridge-20210507224024-391940" in "kube-system" namespace has status "Ready":"False" I0507 22:42:23.676907 668555 pod_ready.go:102] pod "etcd-bridge-20210507224024-391940" in "kube-system" namespace has status "Ready":"False" I0507 22:42:23.871426 634245 system_pods.go:86] 7 kube-system pods found I0507 22:42:23.871460 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:42:23.871466 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:42:23.871472 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:42:23.871476 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:42:23.871481 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:42:23.871485 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:42:23.871489 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:42:23.871513 634245 retry.go:31] will retry after 47.568379235s: missing components: kube-dns I0507 22:42:24.769876 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:27.269737 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:25.677317 668555 pod_ready.go:102] pod "etcd-bridge-20210507224024-391940" in "kube-system" namespace has status "Ready":"False" I0507 22:42:28.176609 668555 pod_ready.go:92] pod "etcd-bridge-20210507224024-391940" in "kube-system" namespace has status "Ready":"True" I0507 22:42:28.176637 668555 pod_ready.go:81] duration metric: took 11.009627545s waiting for pod "etcd-bridge-20210507224024-391940" in "kube-system" namespace to be "Ready" ... I0507 22:42:28.176650 668555 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-bridge-20210507224024-391940" in "kube-system" namespace to be "Ready" ... 
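The NodePressure verification and the capacity figures at the top of this block come from the node object: condition types like NodeMemoryPressure and NodeDiskPressure must not be True, and the ephemeral-storage and CPU figures are read from the node's capacity. In client-go terms (illustrative, same kubeconfig assumption as the earlier sketch):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// checkNodePressure fails if any node reports memory or disk pressure and
// echoes the capacity figures the log prints (ephemeral storage, CPU).
func checkNodePressure(ctx context.Context, c kubernetes.Interface) error {
	nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		for _, cond := range n.Status.Conditions {
			if (cond.Type == corev1.NodeMemoryPressure || cond.Type == corev1.NodeDiskPressure) &&
				cond.Status == corev1.ConditionTrue {
				return fmt.Errorf("node %s under pressure: %s", n.Name, cond.Type)
			}
		}
		fmt.Printf("node %s: ephemeral %s, cpu %s\n", n.Name,
			n.Status.Capacity.StorageEphemeral().String(), n.Status.Capacity.Cpu().String())
	}
	return nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	fmt.Println(checkNodePressure(context.Background(), kubernetes.NewForConfigOrDie(cfg)))
}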
I0507 22:42:28.180369 668555 pod_ready.go:92] pod "kube-apiserver-bridge-20210507224024-391940" in "kube-system" namespace has status "Ready":"True" I0507 22:42:28.180384 668555 pod_ready.go:81] duration metric: took 3.725861ms waiting for pod "kube-apiserver-bridge-20210507224024-391940" in "kube-system" namespace to be "Ready" ... I0507 22:42:28.180393 668555 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-bridge-20210507224024-391940" in "kube-system" namespace to be "Ready" ... I0507 22:42:29.772420 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:32.269539 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:30.189339 668555 pod_ready.go:102] pod "kube-controller-manager-bridge-20210507224024-391940" in "kube-system" namespace has status "Ready":"False" I0507 22:42:32.189604 668555 pod_ready.go:102] pod "kube-controller-manager-bridge-20210507224024-391940" in "kube-system" namespace has status "Ready":"False" I0507 22:42:34.269995 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:36.769591 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:34.189670 668555 pod_ready.go:102] pod "kube-controller-manager-bridge-20210507224024-391940" in "kube-system" namespace has status "Ready":"False" I0507 22:42:35.190239 668555 pod_ready.go:92] pod "kube-controller-manager-bridge-20210507224024-391940" in "kube-system" namespace has status "Ready":"True" I0507 22:42:35.190267 668555 pod_ready.go:81] duration metric: took 7.009866995s waiting for pod "kube-controller-manager-bridge-20210507224024-391940" in "kube-system" namespace to be "Ready" ... I0507 22:42:35.190278 668555 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-ws99c" in "kube-system" namespace to be "Ready" ... I0507 22:42:35.194565 668555 pod_ready.go:92] pod "kube-proxy-ws99c" in "kube-system" namespace has status "Ready":"True" I0507 22:42:35.194581 668555 pod_ready.go:81] duration metric: took 4.296698ms waiting for pod "kube-proxy-ws99c" in "kube-system" namespace to be "Ready" ... I0507 22:42:35.194590 668555 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-bridge-20210507224024-391940" in "kube-system" namespace to be "Ready" ... I0507 22:42:35.198344 668555 pod_ready.go:92] pod "kube-scheduler-bridge-20210507224024-391940" in "kube-system" namespace has status "Ready":"True" I0507 22:42:35.198364 668555 pod_ready.go:81] duration metric: took 3.766536ms waiting for pod "kube-scheduler-bridge-20210507224024-391940" in "kube-system" namespace to be "Ready" ... I0507 22:42:35.198378 668555 pod_ready.go:38] duration metric: took 1m17.589933355s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0507 22:42:35.198402 668555 api_server.go:50] waiting for apiserver process to appear ... 
I0507 22:42:35.198426 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]} I0507 22:42:35.198477 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver I0507 22:42:35.226005 668555 cri.go:76] found id: "6dc0b22be38d452d256a52eaf64a7bb1f2aa8c15966348c52e33bbb791a678ef" I0507 22:42:35.226042 668555 cri.go:76] found id: "" I0507 22:42:35.226050 668555 logs.go:270] 1 containers: [6dc0b22be38d452d256a52eaf64a7bb1f2aa8c15966348c52e33bbb791a678ef] I0507 22:42:35.226104 668555 ssh_runner.go:149] Run: which crictl I0507 22:42:35.229189 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]} I0507 22:42:35.229258 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd I0507 22:42:35.254495 668555 cri.go:76] found id: "e79370f8582f567393596235a3a9a3791af6f0485f2c319761dc453c92370bf7" I0507 22:42:35.254531 668555 cri.go:76] found id: "" I0507 22:42:35.254540 668555 logs.go:270] 1 containers: [e79370f8582f567393596235a3a9a3791af6f0485f2c319761dc453c92370bf7] I0507 22:42:35.254607 668555 ssh_runner.go:149] Run: which crictl I0507 22:42:35.257794 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]} I0507 22:42:35.257871 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns I0507 22:42:35.281886 668555 cri.go:76] found id: "2ef6e0e8ca031e7958d9949d1e04e1b34e1cc5ba9176088c82ae0b31fd5aeead" I0507 22:42:35.281909 668555 cri.go:76] found id: "" I0507 22:42:35.281916 668555 logs.go:270] 1 containers: [2ef6e0e8ca031e7958d9949d1e04e1b34e1cc5ba9176088c82ae0b31fd5aeead] I0507 22:42:35.281955 668555 ssh_runner.go:149] Run: which crictl I0507 22:42:35.284915 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]} I0507 22:42:35.284966 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler I0507 22:42:35.313845 668555 cri.go:76] found id: "41e1f340170fd28d18e59bc04eaddd5ca3510f0ecfe696e0d75e6a06cd0ad39a" I0507 22:42:35.313930 668555 cri.go:76] found id: "" I0507 22:42:35.313937 668555 logs.go:270] 1 containers: [41e1f340170fd28d18e59bc04eaddd5ca3510f0ecfe696e0d75e6a06cd0ad39a] I0507 22:42:35.313999 668555 ssh_runner.go:149] Run: which crictl I0507 22:42:35.318156 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]} I0507 22:42:35.318222 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy I0507 22:42:35.342057 668555 cri.go:76] found id: "bfda804d42b627d6e9da3cc58516747c9eefc23e219ebcf3a374cda58a012163" I0507 22:42:35.342091 668555 cri.go:76] found id: "" I0507 22:42:35.342099 668555 logs.go:270] 1 containers: [bfda804d42b627d6e9da3cc58516747c9eefc23e219ebcf3a374cda58a012163] I0507 22:42:35.342146 668555 ssh_runner.go:149] Run: which crictl I0507 22:42:35.345153 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]} I0507 22:42:35.345219 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard I0507 22:42:35.368430 668555 cri.go:76] found id: "" I0507 22:42:35.368448 668555 logs.go:270] 0 containers: [] W0507 22:42:35.368454 668555 logs.go:272] No container was found matching "kubernetes-dashboard" I0507 22:42:35.368460 668555 cri.go:41] listing CRI containers in root 
/run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]} I0507 22:42:35.368508 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner I0507 22:42:35.389241 668555 cri.go:76] found id: "ccb30cde8c456efd2e7cae759b30baa92719b34f67532486ef8a172631f2c2ac" I0507 22:42:35.389258 668555 cri.go:76] found id: "" I0507 22:42:35.389266 668555 logs.go:270] 1 containers: [ccb30cde8c456efd2e7cae759b30baa92719b34f67532486ef8a172631f2c2ac] I0507 22:42:35.389314 668555 ssh_runner.go:149] Run: which crictl I0507 22:42:35.392216 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]} I0507 22:42:35.392265 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager I0507 22:42:35.413738 668555 cri.go:76] found id: "61a86ee3b485d1806f72b893d255c832f4df1fb362c8e4366b69b05b1f597dbf" I0507 22:42:35.413759 668555 cri.go:76] found id: "" I0507 22:42:35.413765 668555 logs.go:270] 1 containers: [61a86ee3b485d1806f72b893d255c832f4df1fb362c8e4366b69b05b1f597dbf] I0507 22:42:35.413808 668555 ssh_runner.go:149] Run: which crictl I0507 22:42:35.416697 668555 logs.go:123] Gathering logs for kube-apiserver [6dc0b22be38d452d256a52eaf64a7bb1f2aa8c15966348c52e33bbb791a678ef] ... I0507 22:42:35.416715 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dc0b22be38d452d256a52eaf64a7bb1f2aa8c15966348c52e33bbb791a678ef" I0507 22:42:35.450753 668555 logs.go:123] Gathering logs for etcd [e79370f8582f567393596235a3a9a3791af6f0485f2c319761dc453c92370bf7] ... I0507 22:42:35.450782 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e79370f8582f567393596235a3a9a3791af6f0485f2c319761dc453c92370bf7" I0507 22:42:35.476276 668555 logs.go:123] Gathering logs for kube-proxy [bfda804d42b627d6e9da3cc58516747c9eefc23e219ebcf3a374cda58a012163] ... I0507 22:42:35.476299 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfda804d42b627d6e9da3cc58516747c9eefc23e219ebcf3a374cda58a012163" I0507 22:42:35.498825 668555 logs.go:123] Gathering logs for storage-provisioner [ccb30cde8c456efd2e7cae759b30baa92719b34f67532486ef8a172631f2c2ac] ... I0507 22:42:35.498853 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccb30cde8c456efd2e7cae759b30baa92719b34f67532486ef8a172631f2c2ac" I0507 22:42:35.521010 668555 logs.go:123] Gathering logs for kube-controller-manager [61a86ee3b485d1806f72b893d255c832f4df1fb362c8e4366b69b05b1f597dbf] ... I0507 22:42:35.521032 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61a86ee3b485d1806f72b893d255c832f4df1fb362c8e4366b69b05b1f597dbf" I0507 22:42:35.552384 668555 logs.go:123] Gathering logs for containerd ... I0507 22:42:35.552410 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400" I0507 22:42:35.585101 668555 logs.go:123] Gathering logs for container status ... I0507 22:42:35.585126 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0507 22:42:35.608966 668555 logs.go:123] Gathering logs for dmesg ... I0507 22:42:35.608989 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0507 22:42:35.629842 668555 logs.go:123] Gathering logs for describe nodes ... 
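The cri.go/ssh_runner.go pairs above enumerate containers per component by running `sudo crictl ps -a --quiet --name=<component>` inside the node and collecting the returned IDs; zero IDs, as for kubernetes-dashboard, simply means that addon is not deployed. A sketch of that lookup, run locally rather than over SSH as minikube does (containerIDs is an illustrative name):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs the same command the log shows and returns the matching
// container IDs, one per output line (an empty slice means no such container).
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "kubernetes-dashboard"} {
		ids, err := containerIDs(c)
		fmt.Printf("%s: %d containers: %v (err=%v)\n", c, len(ids), ids, err)
	}
}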
I0507 22:42:35.629862 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0507 22:42:35.716415 668555 logs.go:123] Gathering logs for coredns [2ef6e0e8ca031e7958d9949d1e04e1b34e1cc5ba9176088c82ae0b31fd5aeead] ... I0507 22:42:35.716445 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ef6e0e8ca031e7958d9949d1e04e1b34e1cc5ba9176088c82ae0b31fd5aeead" I0507 22:42:35.741527 668555 logs.go:123] Gathering logs for kube-scheduler [41e1f340170fd28d18e59bc04eaddd5ca3510f0ecfe696e0d75e6a06cd0ad39a] ... I0507 22:42:35.741555 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41e1f340170fd28d18e59bc04eaddd5ca3510f0ecfe696e0d75e6a06cd0ad39a" I0507 22:42:35.770205 668555 logs.go:123] Gathering logs for kubelet ... I0507 22:42:35.770236 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" W0507 22:42:35.822573 668555 logs.go:138] Found kubelet problem: May 07 22:41:17 bridge-20210507224024-391940 kubelet[1180]: E0507 22:41:17.449069 1180 pod_workers.go:191] Error syncing pod fdcac5df-b94e-462a-87d5-98af36032464 ("coredns-74ff55c5b-wdngz_kube-system(fdcac5df-b94e-462a-87d5-98af36032464)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "cannot find volume \"config-volume\" to mount into container \"coredns\"" I0507 22:42:35.823008 668555 out.go:304] Setting ErrFile to fd 2... I0507 22:42:35.823023 668555 out.go:338] TERM=,COLORTERM=, which probably does not support color W0507 22:42:35.823131 668555 out.go:235] X Problems detected in kubelet: W0507 22:42:35.823143 668555 out.go:424] no arguments passed for " May 07 22:41:17 bridge-20210507224024-391940 kubelet[1180]: E0507 22:41:17.449069 1180 pod_workers.go:191] Error syncing pod fdcac5df-b94e-462a-87d5-98af36032464 (\"coredns-74ff55c5b-wdngz_kube-system(fdcac5df-b94e-462a-87d5-98af36032464)\"), skipping: failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"cannot find volume \\\"config-volume\\\" to mount into container \\\"coredns\\\"\"\n" - returning raw string W0507 22:42:35.823158 668555 out.go:235] May 07 22:41:17 bridge-20210507224024-391940 kubelet[1180]: E0507 22:41:17.449069 1180 pod_workers.go:191] Error syncing pod fdcac5df-b94e-462a-87d5-98af36032464 ("coredns-74ff55c5b-wdngz_kube-system(fdcac5df-b94e-462a-87d5-98af36032464)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "cannot find volume \"config-volume\" to mount into container \"coredns\"" I0507 22:42:35.823166 668555 out.go:304] Setting ErrFile to fd 2... I0507 22:42:35.823171 668555 out.go:338] TERM=,COLORTERM=, which probably does not support color I0507 22:42:38.770070 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:40.771262 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:43.269652 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:45.769197 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:45.824690 668555 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0507 22:42:45.846628 668555 api_server.go:70] duration metric: took 1m28.269918689s to wait for apiserver process to appear ... 
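"Waiting for apiserver process to appear" is driven by the `sudo pgrep -xnf kube-apiserver.*minikube.*` probe shown above: pgrep exits non-zero while nothing matches, so the probe is simply retried until it succeeds. A local sketch (waitForProcess is a hypothetical name; minikube runs the command over SSH):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess retries pgrep until some process matches the pattern.
// pgrep -x matches the full name, -n picks the newest match, and -f matches
// against the whole command line -- the same flags as in the log.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
			return nil // pgrep exits 0 once a process matches
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("process matching %q never appeared", pattern)
}

func main() {
	fmt.Println(waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute))
}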
I0507 22:42:45.846659 668555 api_server.go:86] waiting for apiserver healthz status ... I0507 22:42:45.846689 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]} I0507 22:42:45.846772 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver I0507 22:42:45.869042 668555 cri.go:76] found id: "6dc0b22be38d452d256a52eaf64a7bb1f2aa8c15966348c52e33bbb791a678ef" I0507 22:42:45.869067 668555 cri.go:76] found id: "" I0507 22:42:45.869075 668555 logs.go:270] 1 containers: [6dc0b22be38d452d256a52eaf64a7bb1f2aa8c15966348c52e33bbb791a678ef] I0507 22:42:45.869120 668555 ssh_runner.go:149] Run: which crictl I0507 22:42:45.872040 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]} I0507 22:42:45.872102 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd I0507 22:42:45.893298 668555 cri.go:76] found id: "e79370f8582f567393596235a3a9a3791af6f0485f2c319761dc453c92370bf7" I0507 22:42:45.893316 668555 cri.go:76] found id: "" I0507 22:42:45.893322 668555 logs.go:270] 1 containers: [e79370f8582f567393596235a3a9a3791af6f0485f2c319761dc453c92370bf7] I0507 22:42:45.893356 668555 ssh_runner.go:149] Run: which crictl I0507 22:42:45.896044 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]} I0507 22:42:45.896103 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns I0507 22:42:45.916750 668555 cri.go:76] found id: "2ef6e0e8ca031e7958d9949d1e04e1b34e1cc5ba9176088c82ae0b31fd5aeead" I0507 22:42:45.916770 668555 cri.go:76] found id: "" I0507 22:42:45.916775 668555 logs.go:270] 1 containers: [2ef6e0e8ca031e7958d9949d1e04e1b34e1cc5ba9176088c82ae0b31fd5aeead] I0507 22:42:45.916812 668555 ssh_runner.go:149] Run: which crictl I0507 22:42:45.919401 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]} I0507 22:42:45.919452 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler I0507 22:42:45.939392 668555 cri.go:76] found id: "41e1f340170fd28d18e59bc04eaddd5ca3510f0ecfe696e0d75e6a06cd0ad39a" I0507 22:42:45.939409 668555 cri.go:76] found id: "" I0507 22:42:45.939415 668555 logs.go:270] 1 containers: [41e1f340170fd28d18e59bc04eaddd5ca3510f0ecfe696e0d75e6a06cd0ad39a] I0507 22:42:45.939454 668555 ssh_runner.go:149] Run: which crictl I0507 22:42:45.942192 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]} I0507 22:42:45.942246 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy I0507 22:42:45.962229 668555 cri.go:76] found id: "bfda804d42b627d6e9da3cc58516747c9eefc23e219ebcf3a374cda58a012163" I0507 22:42:45.962248 668555 cri.go:76] found id: "" I0507 22:42:45.962254 668555 logs.go:270] 1 containers: [bfda804d42b627d6e9da3cc58516747c9eefc23e219ebcf3a374cda58a012163] I0507 22:42:45.962284 668555 ssh_runner.go:149] Run: which crictl I0507 22:42:45.964904 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]} I0507 22:42:45.964949 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard I0507 22:42:45.987488 668555 cri.go:76] found id: "" I0507 22:42:45.987539 668555 logs.go:270] 0 containers: [] W0507 22:42:45.987547 668555 logs.go:272] No container was found matching "kubernetes-dashboard" I0507 
22:42:45.987555 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]} I0507 22:42:45.987600 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner I0507 22:42:46.007636 668555 cri.go:76] found id: "ccb30cde8c456efd2e7cae759b30baa92719b34f67532486ef8a172631f2c2ac" I0507 22:42:46.007652 668555 cri.go:76] found id: "" I0507 22:42:46.007658 668555 logs.go:270] 1 containers: [ccb30cde8c456efd2e7cae759b30baa92719b34f67532486ef8a172631f2c2ac] I0507 22:42:46.007691 668555 ssh_runner.go:149] Run: which crictl I0507 22:42:46.010278 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]} I0507 22:42:46.010322 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager I0507 22:42:46.031247 668555 cri.go:76] found id: "61a86ee3b485d1806f72b893d255c832f4df1fb362c8e4366b69b05b1f597dbf" I0507 22:42:46.031268 668555 cri.go:76] found id: "" I0507 22:42:46.031274 668555 logs.go:270] 1 containers: [61a86ee3b485d1806f72b893d255c832f4df1fb362c8e4366b69b05b1f597dbf] I0507 22:42:46.031346 668555 ssh_runner.go:149] Run: which crictl I0507 22:42:46.034072 668555 logs.go:123] Gathering logs for coredns [2ef6e0e8ca031e7958d9949d1e04e1b34e1cc5ba9176088c82ae0b31fd5aeead] ... I0507 22:42:46.034107 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ef6e0e8ca031e7958d9949d1e04e1b34e1cc5ba9176088c82ae0b31fd5aeead" I0507 22:42:46.055825 668555 logs.go:123] Gathering logs for kube-proxy [bfda804d42b627d6e9da3cc58516747c9eefc23e219ebcf3a374cda58a012163] ... I0507 22:42:46.055847 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfda804d42b627d6e9da3cc58516747c9eefc23e219ebcf3a374cda58a012163" I0507 22:42:46.077653 668555 logs.go:123] Gathering logs for storage-provisioner [ccb30cde8c456efd2e7cae759b30baa92719b34f67532486ef8a172631f2c2ac] ... I0507 22:42:46.077677 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccb30cde8c456efd2e7cae759b30baa92719b34f67532486ef8a172631f2c2ac" I0507 22:42:46.099254 668555 logs.go:123] Gathering logs for kube-controller-manager [61a86ee3b485d1806f72b893d255c832f4df1fb362c8e4366b69b05b1f597dbf] ... I0507 22:42:46.099276 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61a86ee3b485d1806f72b893d255c832f4df1fb362c8e4366b69b05b1f597dbf" I0507 22:42:46.131389 668555 logs.go:123] Gathering logs for container status ... I0507 22:42:46.131414 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0507 22:42:46.155297 668555 logs.go:123] Gathering logs for kubelet ... I0507 22:42:46.155319 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" W0507 22:42:46.210050 668555 logs.go:138] Found kubelet problem: May 07 22:41:17 bridge-20210507224024-391940 kubelet[1180]: E0507 22:41:17.449069 1180 pod_workers.go:191] Error syncing pod fdcac5df-b94e-462a-87d5-98af36032464 ("coredns-74ff55c5b-wdngz_kube-system(fdcac5df-b94e-462a-87d5-98af36032464)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "cannot find volume \"config-volume\" to mount into container \"coredns\"" I0507 22:42:46.210693 668555 logs.go:123] Gathering logs for describe nodes ... 
I0507 22:42:46.210710 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0507 22:42:46.297248 668555 logs.go:123] Gathering logs for kube-apiserver [6dc0b22be38d452d256a52eaf64a7bb1f2aa8c15966348c52e33bbb791a678ef] ... I0507 22:42:46.297281 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dc0b22be38d452d256a52eaf64a7bb1f2aa8c15966348c52e33bbb791a678ef" I0507 22:42:46.333028 668555 logs.go:123] Gathering logs for containerd ... I0507 22:42:46.333055 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400" I0507 22:42:46.364655 668555 logs.go:123] Gathering logs for dmesg ... I0507 22:42:46.364684 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0507 22:42:46.385640 668555 logs.go:123] Gathering logs for etcd [e79370f8582f567393596235a3a9a3791af6f0485f2c319761dc453c92370bf7] ... I0507 22:42:46.385664 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e79370f8582f567393596235a3a9a3791af6f0485f2c319761dc453c92370bf7" I0507 22:42:46.411628 668555 logs.go:123] Gathering logs for kube-scheduler [41e1f340170fd28d18e59bc04eaddd5ca3510f0ecfe696e0d75e6a06cd0ad39a] ... I0507 22:42:46.411652 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41e1f340170fd28d18e59bc04eaddd5ca3510f0ecfe696e0d75e6a06cd0ad39a" I0507 22:42:46.438625 668555 out.go:304] Setting ErrFile to fd 2... I0507 22:42:46.438647 668555 out.go:338] TERM=,COLORTERM=, which probably does not support color W0507 22:42:46.438765 668555 out.go:235] X Problems detected in kubelet: W0507 22:42:46.438780 668555 out.go:424] no arguments passed for " May 07 22:41:17 bridge-20210507224024-391940 kubelet[1180]: E0507 22:41:17.449069 1180 pod_workers.go:191] Error syncing pod fdcac5df-b94e-462a-87d5-98af36032464 (\"coredns-74ff55c5b-wdngz_kube-system(fdcac5df-b94e-462a-87d5-98af36032464)\"), skipping: failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"cannot find volume \\\"config-volume\\\" to mount into container \\\"coredns\\\"\"\n" - returning raw string W0507 22:42:46.438798 668555 out.go:235] May 07 22:41:17 bridge-20210507224024-391940 kubelet[1180]: E0507 22:41:17.449069 1180 pod_workers.go:191] Error syncing pod fdcac5df-b94e-462a-87d5-98af36032464 ("coredns-74ff55c5b-wdngz_kube-system(fdcac5df-b94e-462a-87d5-98af36032464)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "cannot find volume \"config-volume\" to mount into container \"coredns\"" I0507 22:42:46.438810 668555 out.go:304] Setting ErrFile to fd 2... 
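The logs.go:138 "Found kubelet problem" entries come from scanning the kubelet journal for known failure signatures; here the hit is a CreateContainerConfigError for coredns-74ff55c5b-wdngz ("cannot find volume \"config-volume\""), which minikube surfaces under "X Problems detected in kubelet". A sketch of such a scan (the problemMarkers list is illustrative, not minikube's actual table):

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// problemMarkers is an illustrative list of failure signatures to flag.
var problemMarkers = []string{"CreateContainerConfigError", "ImagePullBackOff", "Error syncing pod"}

// kubeletProblems pulls the last 400 kubelet journal lines (as in the log)
// and returns those matching a known failure marker.
func kubeletProblems() ([]string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
	if err != nil {
		return nil, err
	}
	var found []string
	sc := bufio.NewScanner(bytes.NewReader(out))
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		for _, m := range problemMarkers {
			if strings.Contains(sc.Text(), m) {
				found = append(found, sc.Text())
				break
			}
		}
	}
	return found, sc.Err()
}

func main() {
	problems, err := kubeletProblems()
	fmt.Printf("found %d kubelet problems (err=%v)\n", len(problems), err)
}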
I0507 22:42:46.438819 668555 out.go:338] TERM=,COLORTERM=, which probably does not support color I0507 22:42:48.270916 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:50.769708 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:53.270043 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:55.769030 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:57.769122 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:42:56.440049 668555 api_server.go:223] Checking apiserver healthz at https://192.168.94.2:8443/healthz ... I0507 22:42:56.445620 668555 api_server.go:249] https://192.168.94.2:8443/healthz returned 200: ok I0507 22:42:56.446505 668555 api_server.go:139] control plane version: v1.20.2 I0507 22:42:56.446528 668555 api_server.go:129] duration metric: took 10.599861577s to wait for apiserver health ... I0507 22:42:56.446537 668555 system_pods.go:43] waiting for kube-system pods to appear ... I0507 22:42:56.446560 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]} I0507 22:42:56.446607 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver I0507 22:42:56.470123 668555 cri.go:76] found id: "6dc0b22be38d452d256a52eaf64a7bb1f2aa8c15966348c52e33bbb791a678ef" I0507 22:42:56.470146 668555 cri.go:76] found id: "" I0507 22:42:56.470154 668555 logs.go:270] 1 containers: [6dc0b22be38d452d256a52eaf64a7bb1f2aa8c15966348c52e33bbb791a678ef] I0507 22:42:56.470204 668555 ssh_runner.go:149] Run: which crictl I0507 22:42:56.473177 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]} I0507 22:42:56.473233 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd I0507 22:42:56.494263 668555 cri.go:76] found id: "e79370f8582f567393596235a3a9a3791af6f0485f2c319761dc453c92370bf7" I0507 22:42:56.494283 668555 cri.go:76] found id: "" I0507 22:42:56.494289 668555 logs.go:270] 1 containers: [e79370f8582f567393596235a3a9a3791af6f0485f2c319761dc453c92370bf7] I0507 22:42:56.494326 668555 ssh_runner.go:149] Run: which crictl I0507 22:42:56.497102 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]} I0507 22:42:56.497152 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns I0507 22:42:56.519079 668555 cri.go:76] found id: "2ef6e0e8ca031e7958d9949d1e04e1b34e1cc5ba9176088c82ae0b31fd5aeead" I0507 22:42:56.519095 668555 cri.go:76] found id: "" I0507 22:42:56.519100 668555 logs.go:270] 1 containers: [2ef6e0e8ca031e7958d9949d1e04e1b34e1cc5ba9176088c82ae0b31fd5aeead] I0507 22:42:56.519133 668555 ssh_runner.go:149] Run: which crictl I0507 22:42:56.521800 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]} I0507 22:42:56.521860 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler I0507 22:42:56.542895 668555 cri.go:76] found id: "41e1f340170fd28d18e59bc04eaddd5ca3510f0ecfe696e0d75e6a06cd0ad39a" I0507 22:42:56.542918 668555 cri.go:76] found id: "" I0507 22:42:56.542925 668555 logs.go:270] 1 containers: 
[41e1f340170fd28d18e59bc04eaddd5ca3510f0ecfe696e0d75e6a06cd0ad39a] I0507 22:42:56.542967 668555 ssh_runner.go:149] Run: which crictl I0507 22:42:56.545669 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]} I0507 22:42:56.545725 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy I0507 22:42:56.566786 668555 cri.go:76] found id: "bfda804d42b627d6e9da3cc58516747c9eefc23e219ebcf3a374cda58a012163" I0507 22:42:56.566804 668555 cri.go:76] found id: "" I0507 22:42:56.566811 668555 logs.go:270] 1 containers: [bfda804d42b627d6e9da3cc58516747c9eefc23e219ebcf3a374cda58a012163] I0507 22:42:56.566852 668555 ssh_runner.go:149] Run: which crictl I0507 22:42:56.569557 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]} I0507 22:42:56.569605 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard I0507 22:42:56.590459 668555 cri.go:76] found id: "" I0507 22:42:56.590476 668555 logs.go:270] 0 containers: [] W0507 22:42:56.590481 668555 logs.go:272] No container was found matching "kubernetes-dashboard" I0507 22:42:56.590486 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]} I0507 22:42:56.590530 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner I0507 22:42:56.613112 668555 cri.go:76] found id: "ccb30cde8c456efd2e7cae759b30baa92719b34f67532486ef8a172631f2c2ac" I0507 22:42:56.613133 668555 cri.go:76] found id: "" I0507 22:42:56.613141 668555 logs.go:270] 1 containers: [ccb30cde8c456efd2e7cae759b30baa92719b34f67532486ef8a172631f2c2ac] I0507 22:42:56.613189 668555 ssh_runner.go:149] Run: which crictl I0507 22:42:56.615906 668555 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]} I0507 22:42:56.615966 668555 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager I0507 22:42:56.637316 668555 cri.go:76] found id: "61a86ee3b485d1806f72b893d255c832f4df1fb362c8e4366b69b05b1f597dbf" I0507 22:42:56.637364 668555 cri.go:76] found id: "" I0507 22:42:56.637379 668555 logs.go:270] 1 containers: [61a86ee3b485d1806f72b893d255c832f4df1fb362c8e4366b69b05b1f597dbf] I0507 22:42:56.637445 668555 ssh_runner.go:149] Run: which crictl I0507 22:42:56.640583 668555 logs.go:123] Gathering logs for kubelet ... I0507 22:42:56.640605 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" W0507 22:42:56.693338 668555 logs.go:138] Found kubelet problem: May 07 22:41:17 bridge-20210507224024-391940 kubelet[1180]: E0507 22:41:17.449069 1180 pod_workers.go:191] Error syncing pod fdcac5df-b94e-462a-87d5-98af36032464 ("coredns-74ff55c5b-wdngz_kube-system(fdcac5df-b94e-462a-87d5-98af36032464)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "cannot find volume \"config-volume\" to mount into container \"coredns\"" I0507 22:42:56.693786 668555 logs.go:123] Gathering logs for describe nodes ... I0507 22:42:56.693806 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0507 22:42:56.785105 668555 logs.go:123] Gathering logs for etcd [e79370f8582f567393596235a3a9a3791af6f0485f2c319761dc453c92370bf7] ... 
I0507 22:42:56.785137 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e79370f8582f567393596235a3a9a3791af6f0485f2c319761dc453c92370bf7" I0507 22:42:56.812625 668555 logs.go:123] Gathering logs for kube-controller-manager [61a86ee3b485d1806f72b893d255c832f4df1fb362c8e4366b69b05b1f597dbf] ... I0507 22:42:56.812654 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61a86ee3b485d1806f72b893d255c832f4df1fb362c8e4366b69b05b1f597dbf" I0507 22:42:56.846239 668555 logs.go:123] Gathering logs for container status ... I0507 22:42:56.846273 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0507 22:42:56.872696 668555 logs.go:123] Gathering logs for dmesg ... I0507 22:42:56.872729 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0507 22:42:56.896306 668555 logs.go:123] Gathering logs for kube-apiserver [6dc0b22be38d452d256a52eaf64a7bb1f2aa8c15966348c52e33bbb791a678ef] ... I0507 22:42:56.896334 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dc0b22be38d452d256a52eaf64a7bb1f2aa8c15966348c52e33bbb791a678ef" I0507 22:42:56.932309 668555 logs.go:123] Gathering logs for coredns [2ef6e0e8ca031e7958d9949d1e04e1b34e1cc5ba9176088c82ae0b31fd5aeead] ... I0507 22:42:56.932339 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ef6e0e8ca031e7958d9949d1e04e1b34e1cc5ba9176088c82ae0b31fd5aeead" I0507 22:42:56.954736 668555 logs.go:123] Gathering logs for kube-scheduler [41e1f340170fd28d18e59bc04eaddd5ca3510f0ecfe696e0d75e6a06cd0ad39a] ... I0507 22:42:56.954763 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41e1f340170fd28d18e59bc04eaddd5ca3510f0ecfe696e0d75e6a06cd0ad39a" I0507 22:42:56.980162 668555 logs.go:123] Gathering logs for kube-proxy [bfda804d42b627d6e9da3cc58516747c9eefc23e219ebcf3a374cda58a012163] ... I0507 22:42:56.980188 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfda804d42b627d6e9da3cc58516747c9eefc23e219ebcf3a374cda58a012163" I0507 22:42:57.001913 668555 logs.go:123] Gathering logs for storage-provisioner [ccb30cde8c456efd2e7cae759b30baa92719b34f67532486ef8a172631f2c2ac] ... I0507 22:42:57.001936 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccb30cde8c456efd2e7cae759b30baa92719b34f67532486ef8a172631f2c2ac" I0507 22:42:57.024379 668555 logs.go:123] Gathering logs for containerd ... I0507 22:42:57.024411 668555 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400" I0507 22:42:57.054923 668555 out.go:304] Setting ErrFile to fd 2... 
I0507 22:42:57.054944 668555 out.go:338] TERM=,COLORTERM=, which probably does not support color W0507 22:42:57.055040 668555 out.go:235] X Problems detected in kubelet: W0507 22:42:57.055053 668555 out.go:424] no arguments passed for " May 07 22:41:17 bridge-20210507224024-391940 kubelet[1180]: E0507 22:41:17.449069 1180 pod_workers.go:191] Error syncing pod fdcac5df-b94e-462a-87d5-98af36032464 (\"coredns-74ff55c5b-wdngz_kube-system(fdcac5df-b94e-462a-87d5-98af36032464)\"), skipping: failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"cannot find volume \\\"config-volume\\\" to mount into container \\\"coredns\\\"\"\n" - returning raw string W0507 22:42:57.055069 668555 out.go:235] May 07 22:41:17 bridge-20210507224024-391940 kubelet[1180]: E0507 22:41:17.449069 1180 pod_workers.go:191] Error syncing pod fdcac5df-b94e-462a-87d5-98af36032464 ("coredns-74ff55c5b-wdngz_kube-system(fdcac5df-b94e-462a-87d5-98af36032464)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "cannot find volume \"config-volume\" to mount into container \"coredns\"" I0507 22:42:57.055074 668555 out.go:304] Setting ErrFile to fd 2... I0507 22:42:57.055078 668555 out.go:338] TERM=,COLORTERM=, which probably does not support color I0507 22:42:59.769527 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:02.269666 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:07.059818 668555 system_pods.go:59] 7 kube-system pods found I0507 22:43:07.059851 668555 system_pods.go:61] "coredns-74ff55c5b-kn5r7" [9ddaf16f-4215-42a6-9c1e-6e41c9849ed7] Running I0507 22:43:07.059857 668555 system_pods.go:61] "etcd-bridge-20210507224024-391940" [3c78015b-db5c-4fe5-99b1-0109a5427769] Running I0507 22:43:07.059861 668555 system_pods.go:61] "kube-apiserver-bridge-20210507224024-391940" [5ae80380-0e21-4a97-be4c-5525da123dc4] Running I0507 22:43:07.059865 668555 system_pods.go:61] "kube-controller-manager-bridge-20210507224024-391940" [b564a595-393e-4968-a05d-54f07b816bcc] Running I0507 22:43:07.059869 668555 system_pods.go:61] "kube-proxy-ws99c" [d3170feb-4f47-4975-9f18-54a7340c425c] Running I0507 22:43:07.059873 668555 system_pods.go:61] "kube-scheduler-bridge-20210507224024-391940" [5c9a05aa-1efb-4844-9dec-d9729b234f6e] Running I0507 22:43:07.059876 668555 system_pods.go:61] "storage-provisioner" [2c84fe99-a93a-4e7b-879f-88e8f8fba4ca] Running I0507 22:43:07.059881 668555 system_pods.go:74] duration metric: took 10.613338832s to wait for pod list to return data ... I0507 22:43:07.059893 668555 default_sa.go:34] waiting for default service account to be created ... I0507 22:43:07.062004 668555 default_sa.go:45] found service account: "default" I0507 22:43:07.062023 668555 default_sa.go:55] duration metric: took 2.120934ms for default service account to be created ... I0507 22:43:07.062033 668555 system_pods.go:116] waiting for k8s-apps to be running ... 
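Once the apiserver process exists, minikube polls the /healthz endpoint until it returns 200 (the api_server.go:223/249 pair above), then re-checks the system pods and default service account, and finally confirms the kubelet systemd unit is active (the `sudo systemctl is-active --quiet service kubelet` probe that follows). A sketch of the health probe; InsecureSkipVerify is a shortcut for this sketch, standing in for whatever cluster-CA trust minikube itself configures:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

// apiserverHealthy mirrors the healthz probe in the log: healthy means
// HTTP 200 with body "ok".
func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", err
}

func main() {
	fmt.Println(apiserverHealthy("https://192.168.94.2:8443/healthz"))
}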
I0507 22:43:07.065589 668555 system_pods.go:86] 7 kube-system pods found I0507 22:43:07.065610 668555 system_pods.go:89] "coredns-74ff55c5b-kn5r7" [9ddaf16f-4215-42a6-9c1e-6e41c9849ed7] Running I0507 22:43:07.065616 668555 system_pods.go:89] "etcd-bridge-20210507224024-391940" [3c78015b-db5c-4fe5-99b1-0109a5427769] Running I0507 22:43:07.065621 668555 system_pods.go:89] "kube-apiserver-bridge-20210507224024-391940" [5ae80380-0e21-4a97-be4c-5525da123dc4] Running I0507 22:43:07.065625 668555 system_pods.go:89] "kube-controller-manager-bridge-20210507224024-391940" [b564a595-393e-4968-a05d-54f07b816bcc] Running I0507 22:43:07.065629 668555 system_pods.go:89] "kube-proxy-ws99c" [d3170feb-4f47-4975-9f18-54a7340c425c] Running I0507 22:43:07.065633 668555 system_pods.go:89] "kube-scheduler-bridge-20210507224024-391940" [5c9a05aa-1efb-4844-9dec-d9729b234f6e] Running I0507 22:43:07.065637 668555 system_pods.go:89] "storage-provisioner" [2c84fe99-a93a-4e7b-879f-88e8f8fba4ca] Running I0507 22:43:07.065643 668555 system_pods.go:126] duration metric: took 3.604652ms to wait for k8s-apps to be running ... I0507 22:43:07.065649 668555 system_svc.go:44] waiting for kubelet service to be running .... I0507 22:43:07.065691 668555 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet I0507 22:43:07.075415 668555 system_svc.go:56] duration metric: took 9.760919ms WaitForService to wait for kubelet. I0507 22:43:07.075436 668555 kubeadm.go:538] duration metric: took 1m49.498734907s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ... I0507 22:43:07.075454 668555 node_conditions.go:102] verifying NodePressure condition ... I0507 22:43:07.078087 668555 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki I0507 22:43:07.078111 668555 node_conditions.go:123] node cpu capacity is 8 I0507 22:43:07.078125 668555 node_conditions.go:105] duration metric: took 2.66501ms to run NodePressure ... I0507 22:43:07.078136 668555 start.go:206] waiting for startup goroutines ... I0507 22:43:07.122445 668555 start.go:460] kubectl: 1.20.5, cluster: 1.20.2 (minor skew: 0) I0507 22:43:07.124854 668555 out.go:170] * Done! 
kubectl is now configured to use "bridge-20210507224024-391940" cluster and "default" namespace by default I0507 22:43:04.769578 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:06.769758 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:11.445319 634245 system_pods.go:86] 7 kube-system pods found I0507 22:43:11.445357 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:43:11.445365 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:43:11.445371 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:43:11.445376 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:43:11.445380 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:43:11.445384 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:43:11.445388 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:43:11.445411 634245 retry.go:31] will retry after 1m7.577191067s: missing components: kube-dns I0507 22:43:08.770354 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:10.770498 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:13.271448 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:15.770804 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:18.269214 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:20.269718 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:22.769151 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:24.771659 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:27.269262 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:29.269802 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:31.769488 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:33.769541 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:36.268974 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:38.269261 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:40.270280 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has 
status "Ready":"False" I0507 22:43:42.771006 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:45.269345 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:47.768594 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:49.769670 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:52.269433 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:54.769190 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:56.769657 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:43:59.269644 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:01.269772 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:03.769233 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:05.769576 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:08.269493 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:10.769584 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:12.770143 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:15.269008 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:17.269047 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:19.027342 634245 system_pods.go:86] 7 kube-system pods found I0507 22:44:19.027380 634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns]) I0507 22:44:19.027389 634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running I0507 22:44:19.027395 634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running I0507 22:44:19.027400 634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running I0507 22:44:19.027404 634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running I0507 22:44:19.027408 634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running I0507 22:44:19.027412 634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running I0507 22:44:19.030342 634245 out.go:170] W0507 22:44:19.030464 634245 out.go:235] X Exiting due to GUEST_START: wait 5m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns W0507 22:44:19.030480 634245 out.go:424] no arguments passed for "* \n" - returning raw 
string
W0507 22:44:19.030488 634245 out.go:235] *
W0507 22:44:19.030504 634245 out.go:424] no arguments passed for "* If the above advice does not help, please let us know:\n" - returning raw string
W0507 22:44:19.030511 634245 out.go:424] no arguments passed for " https://github.com/kubernetes/minikube/issues/new/choose\n\n" - returning raw string
W0507 22:44:19.030516 634245 out.go:424] no arguments passed for "* Please attach the following file to the GitHub issue:\n" - returning raw string
W0507 22:44:19.030577 634245 out.go:424] no arguments passed for "* If the above advice does not help, please let us know:\n https://github.com/kubernetes/minikube/issues/new/choose\n\n* Please attach the following file to the GitHub issue:\n* - /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/logs/lastStart.txt\n\n" - returning raw string
W0507 22:44:19.032358 634245 out.go:235] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
W0507 22:44:19.032373 634245 out.go:235] │ │
W0507 22:44:19.032378 634245 out.go:235] │ * If the above advice does not help, please let us know: │
W0507 22:44:19.032383 634245 out.go:235] │ https://github.com/kubernetes/minikube/issues/new/choose │
W0507 22:44:19.032389 634245 out.go:235] │ │
W0507 22:44:19.032394 634245 out.go:235] │ * Please attach the following file to the GitHub issue: │
W0507 22:44:19.032399 634245 out.go:235] │ * - /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/logs/lastStart.txt │
W0507 22:44:19.032408 634245 out.go:235] │ │
W0507 22:44:19.032412 634245 out.go:235] ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
W0507 22:44:19.032420 634245 out.go:235]
I0507 22:44:19.270021 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:44:21.270385 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:44:23.768995 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:44:25.770177 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:44:28.268810 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:44:30.269545 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:44:32.769848 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:44:35.269721 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:44:37.768834 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:44:39.769004 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:44:41.769947 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
I0507 22:44:44.269742 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace
has status "Ready":"False" I0507 22:44:46.769632 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:49.269169 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:51.270949 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:53.769304 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:55.769541 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:44:58.269162 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:00.269690 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:02.769677 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:04.769826 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:06.774522 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:09.269620 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:11.269885 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:13.770049 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:15.772664 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:18.269043 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:20.769936 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:22.770233 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:25.269440 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:27.288248 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:29.770145 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:32.268998 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:34.269736 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:36.769998 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:39.269073 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:41.269867 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:43.769776 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:46.269226 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status 
"Ready":"False" I0507 22:45:48.769817 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:51.269597 672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False" I0507 22:45:51.773488 672811 pod_ready.go:81] duration metric: took 4m0.016579269s waiting for pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace to be "Ready" ... E0507 22:45:51.773523 672811 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition I0507 22:45:51.773536 672811 pod_ready.go:78] waiting up to 5m0s for pod "etcd-kubenet-20210507224052-391940" in "kube-system" namespace to be "Ready" ... I0507 22:45:51.777341 672811 pod_ready.go:92] pod "etcd-kubenet-20210507224052-391940" in "kube-system" namespace has status "Ready":"True" I0507 22:45:51.777357 672811 pod_ready.go:81] duration metric: took 3.813085ms waiting for pod "etcd-kubenet-20210507224052-391940" in "kube-system" namespace to be "Ready" ... I0507 22:45:51.777371 672811 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-kubenet-20210507224052-391940" in "kube-system" namespace to be "Ready" ... I0507 22:45:51.780967 672811 pod_ready.go:92] pod "kube-apiserver-kubenet-20210507224052-391940" in "kube-system" namespace has status "Ready":"True" I0507 22:45:51.780982 672811 pod_ready.go:81] duration metric: took 3.604125ms waiting for pod "kube-apiserver-kubenet-20210507224052-391940" in "kube-system" namespace to be "Ready" ... I0507 22:45:51.780991 672811 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-kubenet-20210507224052-391940" in "kube-system" namespace to be "Ready" ... I0507 22:45:51.784544 672811 pod_ready.go:92] pod "kube-controller-manager-kubenet-20210507224052-391940" in "kube-system" namespace has status "Ready":"True" I0507 22:45:51.784564 672811 pod_ready.go:81] duration metric: took 3.566966ms waiting for pod "kube-controller-manager-kubenet-20210507224052-391940" in "kube-system" namespace to be "Ready" ... I0507 22:45:51.784576 672811 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-52sqc" in "kube-system" namespace to be "Ready" ... I0507 22:45:52.168404 672811 pod_ready.go:92] pod "kube-proxy-52sqc" in "kube-system" namespace has status "Ready":"True" I0507 22:45:52.168426 672811 pod_ready.go:81] duration metric: took 383.841925ms waiting for pod "kube-proxy-52sqc" in "kube-system" namespace to be "Ready" ... I0507 22:45:52.168441 672811 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-kubenet-20210507224052-391940" in "kube-system" namespace to be "Ready" ... I0507 22:45:52.567262 672811 pod_ready.go:92] pod "kube-scheduler-kubenet-20210507224052-391940" in "kube-system" namespace has status "Ready":"True" I0507 22:45:52.567285 672811 pod_ready.go:81] duration metric: took 398.834268ms waiting for pod "kube-scheduler-kubenet-20210507224052-391940" in "kube-system" namespace to be "Ready" ... I0507 22:45:52.567296 672811 pod_ready.go:38] duration metric: took 4m0.822169579s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ... I0507 22:45:52.567360 672811 api_server.go:50] waiting for apiserver process to appear ... 
I0507 22:45:52.567436 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]} I0507 22:45:52.567610 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver I0507 22:45:52.591474 672811 cri.go:76] found id: "b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56" I0507 22:45:52.591498 672811 cri.go:76] found id: "" I0507 22:45:52.591530 672811 logs.go:270] 1 containers: [b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56] I0507 22:45:52.591595 672811 ssh_runner.go:149] Run: which crictl I0507 22:45:52.594489 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]} I0507 22:45:52.594543 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd I0507 22:45:52.615397 672811 cri.go:76] found id: "9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259" I0507 22:45:52.615416 672811 cri.go:76] found id: "" I0507 22:45:52.615422 672811 logs.go:270] 1 containers: [9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259] I0507 22:45:52.615459 672811 ssh_runner.go:149] Run: which crictl I0507 22:45:52.618174 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]} I0507 22:45:52.618232 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns I0507 22:45:52.638869 672811 cri.go:76] found id: "" I0507 22:45:52.638889 672811 logs.go:270] 0 containers: [] W0507 22:45:52.638895 672811 logs.go:272] No container was found matching "coredns" I0507 22:45:52.638901 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]} I0507 22:45:52.638934 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler I0507 22:45:52.658993 672811 cri.go:76] found id: "148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac" I0507 22:45:52.659013 672811 cri.go:76] found id: "" I0507 22:45:52.659020 672811 logs.go:270] 1 containers: [148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac] I0507 22:45:52.659065 672811 ssh_runner.go:149] Run: which crictl I0507 22:45:52.661726 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]} I0507 22:45:52.661787 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy I0507 22:45:52.682559 672811 cri.go:76] found id: "75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c" I0507 22:45:52.682577 672811 cri.go:76] found id: "" I0507 22:45:52.682582 672811 logs.go:270] 1 containers: [75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c] I0507 22:45:52.682614 672811 ssh_runner.go:149] Run: which crictl I0507 22:45:52.685304 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]} I0507 22:45:52.685349 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard I0507 22:45:52.705409 672811 cri.go:76] found id: "" I0507 22:45:52.705430 672811 logs.go:270] 0 containers: [] W0507 22:45:52.705437 672811 logs.go:272] No container was found matching "kubernetes-dashboard" I0507 22:45:52.705444 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]} I0507 22:45:52.705490 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner 
I0507 22:45:52.725521 672811 cri.go:76] found id: "b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4" I0507 22:45:52.725549 672811 cri.go:76] found id: "" I0507 22:45:52.725557 672811 logs.go:270] 1 containers: [b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4] I0507 22:45:52.725594 672811 ssh_runner.go:149] Run: which crictl I0507 22:45:52.728131 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]} I0507 22:45:52.728184 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager I0507 22:45:52.748039 672811 cri.go:76] found id: "fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9" I0507 22:45:52.748057 672811 cri.go:76] found id: "" I0507 22:45:52.748062 672811 logs.go:270] 1 containers: [fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9] I0507 22:45:52.748097 672811 ssh_runner.go:149] Run: which crictl I0507 22:45:52.750638 672811 logs.go:123] Gathering logs for containerd ... I0507 22:45:52.750654 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400" I0507 22:45:52.786622 672811 logs.go:123] Gathering logs for kube-apiserver [b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56] ... I0507 22:45:52.786647 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56" I0507 22:45:52.824066 672811 logs.go:123] Gathering logs for etcd [9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259] ... I0507 22:45:52.824090 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259" I0507 22:45:52.848548 672811 logs.go:123] Gathering logs for storage-provisioner [b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4] ... I0507 22:45:52.848571 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4" I0507 22:45:52.869909 672811 logs.go:123] Gathering logs for kube-scheduler [148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac] ... I0507 22:45:52.869930 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac" I0507 22:45:52.894365 672811 logs.go:123] Gathering logs for kube-proxy [75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c] ... I0507 22:45:52.894389 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c" I0507 22:45:52.915404 672811 logs.go:123] Gathering logs for kube-controller-manager [fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9] ... I0507 22:45:52.915425 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9" I0507 22:45:52.952429 672811 logs.go:123] Gathering logs for container status ... I0507 22:45:52.952458 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0507 22:45:52.976316 672811 logs.go:123] Gathering logs for kubelet ... I0507 22:45:52.976343 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0507 22:45:53.036653 672811 logs.go:123] Gathering logs for dmesg ... 
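While it waits (and especially when something looks wrong, as with the kubenet profile here, where no coredns container exists at all), minikube repeatedly gathers diagnostics from a fixed set of sources: the kubelet and containerd journals, dmesg, per-container crictl logs, container status, and a kubectl describe nodes. A sketch of that fan-out using the exact shell commands from this log (the orchestration itself is a simplification):

package main

import (
	"fmt"
	"os/exec"
)

// Diagnostic sources and their shell commands, taken verbatim from this log.
var sources = map[string]string{
	"kubelet":          "sudo journalctl -u kubelet -n 400",
	"containerd":       "sudo journalctl -u containerd -n 400",
	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

func main() {
	for name, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("==> %s: %d bytes (err=%v)\n", name, len(out), err)
	}
}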
I0507 22:45:53.036690 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0507 22:45:53.057944 672811 logs.go:123] Gathering logs for describe nodes ...
I0507 22:45:53.057967 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0507 22:45:55.641264 672811 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0507 22:45:55.660478 672811 api_server.go:70] duration metric: took 4m3.942074808s to wait for apiserver process to appear ...
I0507 22:45:55.660507 672811 api_server.go:86] waiting for apiserver healthz status ...
I0507 22:45:55.660536 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0507 22:45:55.660583 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0507 22:45:55.681645 672811 cri.go:76] found id: "b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56"
I0507 22:45:55.681673 672811 cri.go:76] found id: ""
I0507 22:45:55.681680 672811 logs.go:270] 1 containers: [b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56]
I0507 22:45:55.681720 672811 ssh_runner.go:149] Run: which crictl
I0507 22:45:55.684913 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0507 22:45:55.684970 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
I0507 22:45:55.705493 672811 cri.go:76] found id: "9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259"
I0507 22:45:55.705512 672811 cri.go:76] found id: ""
I0507 22:45:55.705520 672811 logs.go:270] 1 containers: [9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259]
I0507 22:45:55.705566 672811 ssh_runner.go:149] Run: which crictl
I0507 22:45:55.708189 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0507 22:45:55.708243 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
I0507 22:45:55.728489 672811 cri.go:76] found id: ""
I0507 22:45:55.728507 672811 logs.go:270] 0 containers: []
W0507 22:45:55.728513 672811 logs.go:272] No container was found matching "coredns"
I0507 22:45:55.728520 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0507 22:45:55.728577 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0507 22:45:55.748870 672811 cri.go:76] found id: "148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac"
I0507 22:45:55.748891 672811 cri.go:76] found id: ""
I0507 22:45:55.748897 672811 logs.go:270] 1 containers: [148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac]
I0507 22:45:55.748931 672811 ssh_runner.go:149] Run: which crictl
I0507 22:45:55.751528 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0507 22:45:55.751588 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0507 22:45:55.771423 672811 cri.go:76] found id: "75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c"
I0507 22:45:55.771447 672811 cri.go:76] found id: ""
I0507 22:45:55.771454 672811 logs.go:270] 1 containers: [75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c]
I0507 22:45:55.771493 672811 ssh_runner.go:149] Run: which crictl
I0507 22:45:55.774059 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0507 22:45:55.774100 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0507 22:45:55.793936 672811 cri.go:76] found id: ""
I0507 22:45:55.793955 672811 logs.go:270] 0 containers: []
W0507 22:45:55.793962 672811 logs.go:272] No container was found matching "kubernetes-dashboard"
I0507 22:45:55.793968 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0507 22:45:55.794010 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0507 22:45:55.814066 672811 cri.go:76] found id: "b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4"
I0507 22:45:55.814087 672811 cri.go:76] found id: ""
I0507 22:45:55.814094 672811 logs.go:270] 1 containers: [b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4]
I0507 22:45:55.814132 672811 ssh_runner.go:149] Run: which crictl
I0507 22:45:55.816677 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0507 22:45:55.816729 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0507 22:45:55.836707 672811 cri.go:76] found id: "fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9"
I0507 22:45:55.836735 672811 cri.go:76] found id: ""
I0507 22:45:55.836743 672811 logs.go:270] 1 containers: [fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9]
I0507 22:45:55.836785 672811 ssh_runner.go:149] Run: which crictl
I0507 22:45:55.839333 672811 logs.go:123] Gathering logs for describe nodes ...
I0507 22:45:55.839356 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0507 22:45:55.924686 672811 logs.go:123] Gathering logs for kube-apiserver [b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56] ...
I0507 22:45:55.924720 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56"
I0507 22:45:55.962145 672811 logs.go:123] Gathering logs for containerd ...
I0507 22:45:55.962173 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0507 22:45:56.001877 672811 logs.go:123] Gathering logs for kubelet ...
I0507 22:45:56.001906 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0507 22:45:56.066223 672811 logs.go:123] Gathering logs for dmesg ...
I0507 22:45:56.066252 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0507 22:45:56.087631 672811 logs.go:123] Gathering logs for etcd [9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259] ...
I0507 22:45:56.087656 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259"
I0507 22:45:56.113575 672811 logs.go:123] Gathering logs for kube-scheduler [148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac] ...
I0507 22:45:56.113600 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac"
I0507 22:45:56.139177 672811 logs.go:123] Gathering logs for kube-proxy [75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c] ...
I0507 22:45:56.139205 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c"
I0507 22:45:56.160446 672811 logs.go:123] Gathering logs for storage-provisioner [b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4] ...
I0507 22:45:56.160467 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4"
I0507 22:45:56.181281 672811 logs.go:123] Gathering logs for kube-controller-manager [fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9] ...
I0507 22:45:56.181304 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9"
I0507 22:45:56.215392 672811 logs.go:123] Gathering logs for container status ...
I0507 22:45:56.215415 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0507 22:45:58.739121 672811 api_server.go:223] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
I0507 22:45:58.747973 672811 api_server.go:249] https://192.168.58.2:8443/healthz returned 200: ok
I0507 22:45:58.748915 672811 api_server.go:139] control plane version: v1.20.2
I0507 22:45:58.748937 672811 api_server.go:129] duration metric: took 3.088423463s to wait for apiserver health ...
I0507 22:45:58.748946 672811 system_pods.go:43] waiting for kube-system pods to appear ...
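Every diagnostic in the gathering rounds above is an ordinary shell command executed over SSH inside the node. A minimal sketch of reproducing the same pass by hand, using the exact commands from the entries above (the long ID is the kube-apiserver container found in this run; getting a shell on the node first, e.g. via `minikube ssh -p <profile>`, is assumed and not shown in this log):

  sudo crictl ps -a --quiet --name=kube-apiserver    # resolve a component's container ID
  sudo /usr/bin/crictl logs --tail 400 b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56
  sudo journalctl -u containerd -n 400               # container runtime logs
  sudo journalctl -u kubelet -n 400                  # kubelet logs
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
  sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig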
I0507 22:45:58.748968 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0507 22:45:58.749014 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0507 22:45:58.772016 672811 cri.go:76] found id: "b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56"
I0507 22:45:58.772034 672811 cri.go:76] found id: ""
I0507 22:45:58.772041 672811 logs.go:270] 1 containers: [b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56]
I0507 22:45:58.772081 672811 ssh_runner.go:149] Run: which crictl
I0507 22:45:58.774963 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0507 22:45:58.775021 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
I0507 22:45:58.796011 672811 cri.go:76] found id: "9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259"
I0507 22:45:58.796030 672811 cri.go:76] found id: ""
I0507 22:45:58.796038 672811 logs.go:270] 1 containers: [9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259]
I0507 22:45:58.796077 672811 ssh_runner.go:149] Run: which crictl
I0507 22:45:58.798611 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0507 22:45:58.798654 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
I0507 22:45:58.819119 672811 cri.go:76] found id: ""
I0507 22:45:58.819141 672811 logs.go:270] 0 containers: []
W0507 22:45:58.819148 672811 logs.go:272] No container was found matching "coredns"
I0507 22:45:58.819155 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0507 22:45:58.819199 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0507 22:45:58.838941 672811 cri.go:76] found id: "148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac"
I0507 22:45:58.838959 672811 cri.go:76] found id: ""
I0507 22:45:58.838964 672811 logs.go:270] 1 containers: [148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac]
I0507 22:45:58.839011 672811 ssh_runner.go:149] Run: which crictl
I0507 22:45:58.841577 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0507 22:45:58.841630 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0507 22:45:58.862008 672811 cri.go:76] found id: "75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c"
I0507 22:45:58.862038 672811 cri.go:76] found id: ""
I0507 22:45:58.862046 672811 logs.go:270] 1 containers: [75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c]
I0507 22:45:58.862086 672811 ssh_runner.go:149] Run: which crictl
I0507 22:45:58.864678 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0507 22:45:58.864729 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0507 22:45:58.884659 672811 cri.go:76] found id: ""
I0507 22:45:58.884673 672811 logs.go:270] 0 containers: []
W0507 22:45:58.884678 672811 logs.go:272] No container was found matching "kubernetes-dashboard"
I0507 22:45:58.884685 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0507 22:45:58.884728 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0507 22:45:58.904618 672811 cri.go:76] found id: "b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4"
I0507 22:45:58.904641 672811 cri.go:76] found id: ""
I0507 22:45:58.904648 672811 logs.go:270] 1 containers: [b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4]
I0507 22:45:58.904679 672811 ssh_runner.go:149] Run: which crictl
I0507 22:45:58.907292 672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0507 22:45:58.907336 672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0507 22:45:58.927242 672811 cri.go:76] found id: "fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9"
I0507 22:45:58.927257 672811 cri.go:76] found id: ""
I0507 22:45:58.927262 672811 logs.go:270] 1 containers: [fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9]
I0507 22:45:58.927292 672811 ssh_runner.go:149] Run: which crictl
I0507 22:45:58.929833 672811 logs.go:123] Gathering logs for kube-scheduler [148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac] ...
I0507 22:45:58.929851 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac"
I0507 22:45:58.952999 672811 logs.go:123] Gathering logs for kube-proxy [75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c] ...
I0507 22:45:58.953020 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c"
I0507 22:45:58.974611 672811 logs.go:123] Gathering logs for container status ...
I0507 22:45:58.974637 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0507 22:45:58.997315 672811 logs.go:123] Gathering logs for kubelet ...
I0507 22:45:58.997340 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0507 22:45:59.057902 672811 logs.go:123] Gathering logs for dmesg ...
I0507 22:45:59.057927 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0507 22:45:59.079247 672811 logs.go:123] Gathering logs for describe nodes ...
I0507 22:45:59.079269 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0507 22:45:59.161719 672811 logs.go:123] Gathering logs for kube-apiserver [b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56] ...
I0507 22:45:59.161753 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56"
I0507 22:45:59.199262 672811 logs.go:123] Gathering logs for etcd [9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259] ...
I0507 22:45:59.199288 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259"
I0507 22:45:59.224956 672811 logs.go:123] Gathering logs for storage-provisioner [b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4] ...
I0507 22:45:59.224982 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4"
I0507 22:45:59.246819 672811 logs.go:123] Gathering logs for kube-controller-manager [fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9] ...
I0507 22:45:59.246842 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9"
I0507 22:45:59.283861 672811 logs.go:123] Gathering logs for containerd ...
I0507 22:45:59.283890 672811 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0507 22:46:01.824624 672811 system_pods.go:59] 7 kube-system pods found
I0507 22:46:01.824672 672811 system_pods.go:61] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:01.824678 672811 system_pods.go:61] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:01.824684 672811 system_pods.go:61] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:01.824689 672811 system_pods.go:61] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:01.824695 672811 system_pods.go:61] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:01.824699 672811 system_pods.go:61] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:01.824704 672811 system_pods.go:61] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:01.824709 672811 system_pods.go:74] duration metric: took 3.075758667s to wait for pod list to return data ...
I0507 22:46:01.824722 672811 default_sa.go:34] waiting for default service account to be created ...
I0507 22:46:01.826962 672811 default_sa.go:45] found service account: "default"
I0507 22:46:01.826987 672811 default_sa.go:55] duration metric: took 2.259407ms for default service account to be created ...
I0507 22:46:01.826995 672811 system_pods.go:116] waiting for k8s-apps to be running ...
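The readiness gate being polled here has two parts: an HTTP 200 from the apiserver's /healthz endpoint (already satisfied above) and every expected kube-system component reporting Running and Ready. A sketch of checking both by hand; the URL and kubectl path are taken from this log, while the curl invocation is an assumption (the endpoint may require client certificates, which minikube's own check supplies):

  curl -k https://192.168.58.2:8443/healthz    # expect: ok
  sudo /var/lib/minikube/binaries/v1.20.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pods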
I0507 22:46:01.830985 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:01.831020 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:01.831030 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:01.831039 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:01.831047 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:01.831074 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:01.831081 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:01.831086 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:01.831099 672811 retry.go:31] will retry after 305.063636ms: missing components: kube-dns
I0507 22:46:02.140549 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:02.140579 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:02.140585 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:02.140593 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:02.140600 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:02.140608 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:02.140614 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:02.140621 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:02.140634 672811 retry.go:31] will retry after 338.212508ms: missing components: kube-dns
I0507 22:46:02.483304 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:02.483338 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:02.483345 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:02.483351 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:02.483355 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:02.483359 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:02.483364 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:02.483367 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:02.483378 672811 retry.go:31] will retry after 378.459802ms: missing components: kube-dns
I0507 22:46:02.867187 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:02.867218 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:02.867226 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:02.867234 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:02.867241 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:02.867250 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:02.867258 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:02.867264 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:02.867277 672811 retry.go:31] will retry after 469.882201ms: missing components: kube-dns
I0507 22:46:03.341758 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:03.341789 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:03.341795 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:03.341801 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:03.341806 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:03.341810 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:03.341814 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:03.341817 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:03.341828 672811 retry.go:31] will retry after 667.365439ms: missing components: kube-dns
I0507 22:46:04.013373 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:04.013405 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:04.013411 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:04.013417 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:04.013422 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:04.013425 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:04.013430 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:04.013433 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:04.013443 672811 retry.go:31] will retry after 597.243124ms: missing components: kube-dns
I0507 22:46:04.615326 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:04.615358 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:04.615366 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:04.615375 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:04.615386 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:04.615398 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:04.615403 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:04.615410 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:04.615422 672811 retry.go:31] will retry after 789.889932ms: missing components: kube-dns
I0507 22:46:05.411070 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:05.411103 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:05.411109 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:05.411115 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:05.411120 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:05.411124 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:05.411128 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:05.411134 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:05.411145 672811 retry.go:31] will retry after 951.868007ms: missing components: kube-dns
I0507 22:46:06.367954 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:06.367985 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:06.367994 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:06.368003 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:06.368008 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:06.368012 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:06.368016 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:06.368022 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:06.368033 672811 retry.go:31] will retry after 1.341783893s: missing components: kube-dns
I0507 22:46:07.715243 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:07.715278 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:07.715284 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:07.715290 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:07.715294 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:07.715299 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:07.715303 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:07.715307 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:07.715318 672811 retry.go:31] will retry after 1.876813009s: missing components: kube-dns
I0507 22:46:09.596846 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:09.596877 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:09.596883 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:09.596889 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:09.596894 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:09.596898 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:09.596902 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:09.596908 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:09.596919 672811 retry.go:31] will retry after 2.6934314s: missing components: kube-dns
I0507 22:46:12.295432 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:12.295467 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:12.295473 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:12.295479 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:12.295484 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:12.295488 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:12.295492 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:12.295496 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:12.295535 672811 retry.go:31] will retry after 2.494582248s: missing components: kube-dns
I0507 22:46:14.802279 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:14.802312 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:14.802319 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:14.802328 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:14.802332 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:14.802338 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:14.802347 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:14.802351 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:14.802365 672811 retry.go:31] will retry after 3.420895489s: missing components: kube-dns
I0507 22:46:18.228571 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:18.228606 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:18.228614 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:18.228620 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:18.228625 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:18.228629 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:18.228634 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:18.228641 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:18.228690 672811 retry.go:31] will retry after 4.133785681s: missing components: kube-dns
I0507 22:46:22.368039 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:22.368077 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:22.368083 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:22.368090 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:22.368094 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:22.368099 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:22.368104 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:22.368110 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:22.368123 672811 retry.go:31] will retry after 5.595921491s: missing components: kube-dns
I0507 22:46:27.968419 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:27.968457 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:27.968468 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:27.968478 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:27.968485 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:27.968491 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:27.968500 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:27.968506 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:27.968522 672811 retry.go:31] will retry after 6.3346098s: missing components: kube-dns
I0507 22:46:34.308467 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:34.308500 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:34.308506 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:34.308513 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:34.308517 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:34.308521 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:34.308525 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:34.308529 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:34.308550 672811 retry.go:31] will retry after 7.962971847s: missing components: kube-dns
I0507 22:46:42.276615 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:42.276650 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:42.276658 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:42.276674 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:42.276682 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:42.276692 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:42.276702 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:42.276711 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:42.276728 672811 retry.go:31] will retry after 12.096349863s: missing components: kube-dns
I0507 22:46:54.377899 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:46:54.377933 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:46:54.377939 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:46:54.377945 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:46:54.377950 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:46:54.377954 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:46:54.377959 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:46:54.377962 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:46:54.377976 672811 retry.go:31] will retry after 11.924857264s: missing components: kube-dns
I0507 22:47:06.308089 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:47:06.308137 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:47:06.308147 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:47:06.308156 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:47:06.308169 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:47:06.308181 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:47:06.308189 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:47:06.308195 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:47:06.308215 672811 retry.go:31] will retry after 14.772791249s: missing components: kube-dns
I0507 22:47:21.085968 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:47:21.086010 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:47:21.086021 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:47:21.086030 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:47:21.086040 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:47:21.086054 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:47:21.086061 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:47:21.086068 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:47:21.086093 672811 retry.go:31] will retry after 20.175608267s: missing components: kube-dns
I0507 22:47:41.266530 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:47:41.266567 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:47:41.266575 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:47:41.266583 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:47:41.266587 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:47:41.266592 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:47:41.266596 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:47:41.266600 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:47:41.266611 672811 retry.go:31] will retry after 28.062855718s: missing components: kube-dns
I0507 22:48:09.334307 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:48:09.334345 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:48:09.334354 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:48:09.334362 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:48:09.334369 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:48:09.334378 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:48:09.334385 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:48:09.334392 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:48:09.334407 672811 retry.go:31] will retry after 40.022161579s: missing components: kube-dns
I0507 22:48:49.361787 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:48:49.361828 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:48:49.361835 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:48:49.361841 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:48:49.361846 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:48:49.361849 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:48:49.361856 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:48:49.361860 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:48:49.361874 672811 retry.go:31] will retry after 37.970670965s: missing components: kube-dns
I0507 22:49:27.337225 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:49:27.337262 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:49:27.337269 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:49:27.337276 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:49:27.337280 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:49:27.337284 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:49:27.337289 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:49:27.337292 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:49:27.337304 672811 retry.go:31] will retry after 47.568379235s: missing components: kube-dns
I0507 22:50:14.911358 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:50:14.911396 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:50:14.911404 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:50:14.911411 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:50:14.911415 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:50:14.911419 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:50:14.911423 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:50:14.911428 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:50:14.911439 672811 retry.go:31] will retry after 1m7.577191067s: missing components: kube-dns
I0507 22:51:22.494081 672811 system_pods.go:86] 7 kube-system pods found
I0507 22:51:22.494122 672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0507 22:51:22.494130 672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
I0507 22:51:22.494136 672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
I0507 22:51:22.494141 672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
I0507 22:51:22.494144 672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
I0507 22:51:22.494148 672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
I0507 22:51:22.494153 672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
I0507 22:51:22.496731 672811 out.go:170] 
W0507 22:51:22.496964 672811 out.go:235] X Exiting due to GUEST_START: wait 5m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
W0507 22:51:22.496978 672811 out.go:424] no arguments passed for "* \n" - returning raw string
W0507 22:51:22.496984 672811 out.go:235] * 
W0507 22:51:22.496995 672811 out.go:424] no arguments passed for "* If the above advice does not help, please let us know:\n" - returning raw string
W0507 22:51:22.497001 672811 out.go:424] no arguments passed for "  https://github.com/kubernetes/minikube/issues/new/choose\n\n" - returning raw string
W0507 22:51:22.497005 672811 out.go:424] no arguments passed for "* Please attach the following file to the GitHub issue:\n" - returning raw string
W0507 22:51:22.497050 672811 out.go:424] no arguments passed for "* If the above advice does not help, please let us know:\n  https://github.com/kubernetes/minikube/issues/new/choose\n\n* Please attach the following file to the GitHub issue:\n* - /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/logs/lastStart.txt\n\n" - returning raw string
W0507 22:51:22.498864 672811 out.go:235] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
W0507 22:51:22.498879 672811 out.go:235] │ │
W0507 22:51:22.498885 672811 out.go:235] │ * If the above advice does not help, please let us know: │
W0507 22:51:22.498891 672811 out.go:235] │ https://github.com/kubernetes/minikube/issues/new/choose │
W0507 22:51:22.498898 672811 out.go:235] │ │
W0507 22:51:22.498906 672811 out.go:235] │ * Please attach the following file to the GitHub issue: │
W0507 22:51:22.498917 672811 out.go:235] │ * - /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/logs/lastStart.txt │
W0507 22:51:22.498930 672811 out.go:235] │ │
W0507 22:51:22.498941 672811 out.go:235] ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
W0507 22:51:22.498954 672811 out.go:235] * 
* 
* ==> container status <==
* 
CONTAINER       IMAGE           CREATED          STATE     NAME                      ATTEMPT   POD ID
b31ea1c27ce69   6e38f40d628db   9 minutes ago    Running   storage-provisioner       0         21f631401195f
75176ef021882   43154ddb57a83   9 minutes ago    Running   kube-proxy                0         955faae48addb
b0c96757d2c1a   a8c2fdb8bf76e   10 minutes ago   Running   kube-apiserver            0         7d8dc874178ba
9c34570c0c050   0369cf4303ffd   10 minutes ago   Running   etcd                      0         bd1aed6e39609
148633a394b37   ed2c44fbdd78b   10 minutes ago   Running   kube-scheduler            0         3e8abe0cb8016
fd80079cc01a8   a27166429d98e   10 minutes ago   Running   kube-controller-manager   0         b8deadf30c448
* 
* ==> containerd <==
* 
-- Logs begin at Fri 2021-05-07 22:40:55 UTC, end at Fri 2021-05-07 22:51:23 UTC. --
May 07 22:48:14 kubenet-20210507224052-391940 containerd[456]: time="2021-05-07T22:48:14.807614699Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-g7c7z,Uid:600662d5-0810-4cc4-9c1c-948a98a998f7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"919ad719002edc3413265e5b416c92811cc8cc4de561397d1411fc90a6a77b15\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
May 07 22:48:28 kubenet-20210507224052-391940 containerd[456]: time="2021-05-07T22:48:28.600614866Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-g7c7z,Uid:600662d5-0810-4cc4-9c1c-948a98a998f7,Namespace:kube-system,Attempt:0,}"
May 07 22:48:38 kubenet-20210507224052-391940 containerd[456]: time="2021-05-07T22:48:38.786897997Z" level=error msg="Failed to destroy network for sandbox \"38ef0b8bda1ab3770996e176040219b1bbbb4130ba82c14862d1b7d63ef3ba41\"" error="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.19 -j CNI-b21b048cecdf0280b481caec -m comment --comment name: \"crio\" id: \"38ef0b8bda1ab3770996e176040219b1bbbb4130ba82c14862d1b7d63ef3ba41\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-b21b048cecdf0280b481caec':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n"
May 07 22:48:38 kubenet-20210507224052-391940 containerd[456]: time="2021-05-07T22:48:38.807586644Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-g7c7z,Uid:600662d5-0810-4cc4-9c1c-948a98a998f7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"38ef0b8bda1ab3770996e176040219b1bbbb4130ba82c14862d1b7d63ef3ba41\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
May 07 22:48:49 kubenet-20210507224052-391940 containerd[456]: time="2021-05-07T22:48:49.600501922Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-g7c7z,Uid:600662d5-0810-4cc4-9c1c-948a98a998f7,Namespace:kube-system,Attempt:0,}"
May 07 22:48:59 kubenet-20210507224052-391940 containerd[456]: time="2021-05-07T22:48:59.774801432Z" level=error msg="Failed to destroy network for sandbox \"af405d726c9e0d35fb2b000f1aaec5622c388c34676e221d5e97b7189291ebcf\"" error="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.20 -j CNI-228f5d83d875458632177d12 -m comment --comment name: \"crio\" id: \"af405d726c9e0d35fb2b000f1aaec5622c388c34676e221d5e97b7189291ebcf\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-228f5d83d875458632177d12':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n"
May 07 22:48:59 kubenet-20210507224052-391940 containerd[456]: time="2021-05-07T22:48:59.791606813Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-g7c7z,Uid:600662d5-0810-4cc4-9c1c-948a98a998f7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"af405d726c9e0d35fb2b000f1aaec5622c388c34676e221d5e97b7189291ebcf\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
May 07 22:49:13 kubenet-20210507224052-391940 containerd[456]: time="2021-05-07T22:49:13.600656267Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-g7c7z,Uid:600662d5-0810-4cc4-9c1c-948a98a998f7,Namespace:kube-system,Attempt:0,}"
May 07 22:49:23 kubenet-20210507224052-391940 containerd[456]: time="2021-05-07T22:49:23.794606087Z" level=error msg="Failed to destroy network for sandbox \"6810a2c0f0471bd72f9663395d1c014097bbeb2351c9b94c1f6b6eaf7d38baa5\"" error="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.21 -j CNI-4f1136bbf8efc350ca90e1a8 -m comment --comment name: \"crio\" id: \"6810a2c0f0471bd72f9663395d1c014097bbeb2351c9b94c1f6b6eaf7d38baa5\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-4f1136bbf8efc350ca90e1a8':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n"
May 07 22:49:23 kubenet-20210507224052-391940 containerd[456]: time="2021-05-07T22:49:23.811597886Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-g7c7z,Uid:600662d5-0810-4cc4-9c1c-948a98a998f7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6810a2c0f0471bd72f9663395d1c014097bbeb2351c9b94c1f6b6eaf7d38baa5\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
May 07 22:49:37 kubenet-20210507224052-391940 containerd[456]: time="2021-05-07T22:49:37.600638651Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-g7c7z,Uid:600662d5-0810-4cc4-9c1c-948a98a998f7,Namespace:kube-system,Attempt:0,}"
May 07 22:49:47 kubenet-20210507224052-391940 containerd[456]: time="2021-05-07T22:49:47.786572124Z" level=error msg="Failed to destroy network for sandbox \"9c6487a6ec72bf0980abc97b800237ae8322e4fd76e2c1fc66582ec5e98f5940\"" error="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.22 -j CNI-240a42e96b2e251de0ba0d2f -m comment --comment name: \"crio\" id: \"9c6487a6ec72bf0980abc97b800237ae8322e4fd76e2c1fc66582ec5e98f5940\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-240a42e96b2e251de0ba0d2f':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n"
May 07 22:49:47 kubenet-20210507224052-391940 containerd[456]: time="2021-05-07T22:49:47.811617528Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-g7c7z,Uid:600662d5-0810-4cc4-9c1c-948a98a998f7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9c6487a6ec72bf0980abc97b800237ae8322e4fd76e2c1fc66582ec5e98f5940\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
May 07 22:50:00 kubenet-20210507224052-391940 containerd[456]: time="2021-05-07T22:50:00.600604566Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-g7c7z,Uid:600662d5-0810-4cc4-9c1c-948a98a998f7,Namespace:kube-system,Attempt:0,}"
May 07 22:50:10 kubenet-20210507224052-391940 containerd[456]: time="2021-05-07T22:50:10.790676686Z" level=error msg="Failed to destroy network for sandbox \"dbd9f6bc3b3b0d86982392f4ff861c1ad03ff1c9a1e0b8a8128d18a7be782bd3\"" error="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.23 -j CNI-e630eb41ace6c753a65c7c68 -m comment --comment name: \"crio\" id: \"dbd9f6bc3b3b0d86982392f4ff861c1ad03ff1c9a1e0b8a8128d18a7be782bd3\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-e630eb41ace6c753a65c7c68':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n"
May 07 22:50:10 kubenet-20210507224052-391940 containerd[456]: time="2021-05-07T22:50:10.807597474Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-g7c7z,Uid:600662d5-0810-4cc4-9c1c-948a98a998f7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dbd9f6bc3b3b0d86982392f4ff861c1ad03ff1c9a1e0b8a8128d18a7be782bd3\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
May 07 22:50:22 kubenet-20210507224052-391940 containerd[456]: time="2021-05-07T22:50:22.600903266Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-g7c7z,Uid:600662d5-0810-4cc4-9c1c-948a98a998f7,Namespace:kube-system,Attempt:0,}"
May 07 22:50:32 kubenet-20210507224052-391940 containerd[456]: time="2021-05-07T22:50:32.770712575Z" level=error msg="Failed to destroy network for sandbox \"570f921f9af1846f6fab5b097c730a2a4dcb02edfdab2c9028a3a1012564ce37\"" error="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.24 -j CNI-13441bc2ed867167e49ea410 -m comment --comment name: \"crio\" id: \"570f921f9af1846f6fab5b097c730a2a4dcb02edfdab2c9028a3a1012564ce37\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-13441bc2ed867167e49ea410':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n"
May 07 22:50:32 kubenet-20210507224052-391940 containerd[456]: time="2021-05-07T22:50:32.803599750Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-g7c7z,Uid:600662d5-0810-4cc4-9c1c-948a98a998f7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"570f921f9af1846f6fab5b097c730a2a4dcb02edfdab2c9028a3a1012564ce37\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
May 07 22:50:43 kubenet-20210507224052-391940 containerd[456]: time="2021-05-07T22:50:43.600580981Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-g7c7z,Uid:600662d5-0810-4cc4-9c1c-948a98a998f7,Namespace:kube-system,Attempt:0,}"
May 07 22:50:53 kubenet-20210507224052-391940 containerd[456]: time="2021-05-07T22:50:53.786458699Z" level=error msg="Failed to destroy network for sandbox \"e2b8fbd8caf3a62df4af9cf0be123f3930a4b899e63a02799a4a6343ee338a09\"" error="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.25 -j CNI-748c82f2cc254fee7309f06f -m comment --comment name: \"crio\" id: \"e2b8fbd8caf3a62df4af9cf0be123f3930a4b899e63a02799a4a6343ee338a09\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-748c82f2cc254fee7309f06f':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n"
May 07 22:50:53 kubenet-20210507224052-391940 containerd[456]: time="2021-05-07T22:50:53.811589493Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-g7c7z,Uid:600662d5-0810-4cc4-9c1c-948a98a998f7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e2b8fbd8caf3a62df4af9cf0be123f3930a4b899e63a02799a4a6343ee338a09\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
May 07 22:51:07 kubenet-20210507224052-391940 containerd[456]: time="2021-05-07T22:51:07.600536581Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-g7c7z,Uid:600662d5-0810-4cc4-9c1c-948a98a998f7,Namespace:kube-system,Attempt:0,}"
May 07 22:51:17 kubenet-20210507224052-391940 containerd[456]: time="2021-05-07T22:51:17.774907944Z" level=error msg="Failed to destroy network for sandbox \"69d67907d1012aa7c758c9e7d57c2bfb2ebc8aa5f335d40e74024fdb511511f7\"" error="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.26 -j CNI-ce1c868849a1c7e27a802a8d -m comment --comment name: \"crio\" id: \"69d67907d1012aa7c758c9e7d57c2bfb2ebc8aa5f335d40e74024fdb511511f7\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-ce1c868849a1c7e27a802a8d':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n"
May 07 22:51:17 kubenet-20210507224052-391940 containerd[456]: time="2021-05-07T22:51:17.795610786Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-74ff55c5b-g7c7z,Uid:600662d5-0810-4cc4-9c1c-948a98a998f7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"69d67907d1012aa7c758c9e7d57c2bfb2ebc8aa5f335d40e74024fdb511511f7\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
* 
* ==> describe nodes <==
* 
Name:               kubenet-20210507224052-391940
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=kubenet-20210507224052-391940
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=c31bd57f93d45726e4bd30607374f8c720e70e95
                    minikube.k8s.io/name=kubenet-20210507224052-391940
                    minikube.k8s.io/updated_at=2021_05_07T22_41_29_0700
                    minikube.k8s.io/version=v1.20.0
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 07 May 2021 22:41:26 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  kubenet-20210507224052-391940
  AcquireTime:     <unset>
  RenewTime:       Fri, 07 May 2021 22:51:18 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Fri, 07 May 2021 22:47:07 +0000   Fri, 07 May 2021 22:41:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 07 May 2021 22:47:07 +0000   Fri, 07 May 2021 22:41:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 07 May 2021 22:47:07 +0000   Fri, 07 May 2021 22:41:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Fri, 07 May 2021 22:47:07 +0000   Fri, 07 May 2021 22:41:36 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.58.2
  Hostname:
kubenet-20210507224052-391940 Capacity: cpu: 8 ephemeral-storage: 309568300Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32951376Ki pods: 110 Allocatable: cpu: 8 ephemeral-storage: 309568300Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32951376Ki pods: 110 System Info: Machine ID: 822f5ed6656e44929f6c2cc5d6881453 System UUID: e59955e2-4225-4a08-9919-1014fc5bc2f9 Boot ID: a4d5e757-68dd-498f-8a27-b6d8b368f45c Kernel Version: 4.9.0-15-amd64 OS Image: Ubuntu 20.04.2 LTS Operating System: linux Architecture: amd64 Container Runtime Version: containerd://1.4.4 Kubelet Version: v1.20.2 Kube-Proxy Version: v1.20.2 PodCIDR: 10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (7 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE --------- ---- ------------ ---------- --------------- ------------- --- kube-system coredns-74ff55c5b-g7c7z 100m (1%!)(MISSING) 0 (0%!)(MISSING) 70Mi (0%!)(MISSING) 170Mi (0%!)(MISSING) 9m33s kube-system etcd-kubenet-20210507224052-391940 100m (1%!)(MISSING) 0 (0%!)(MISSING) 100Mi (0%!)(MISSING) 0 (0%!)(MISSING) 9m47s kube-system kube-apiserver-kubenet-20210507224052-391940 250m (3%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 9m47s kube-system kube-controller-manager-kubenet-20210507224052-391940 200m (2%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 9m47s kube-system kube-proxy-52sqc 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 9m32s kube-system kube-scheduler-kubenet-20210507224052-391940 100m (1%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 9m47s kube-system storage-provisioner 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 9m31s Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 750m (9%!)(MISSING) 0 (0%!)(MISSING) memory 170Mi (0%!)(MISSING) 170Mi (0%!)(MISSING) ephemeral-storage 100Mi (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-1Gi 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeHasSufficientMemory 10m (x5 over 10m) kubelet Node kubenet-20210507224052-391940 status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 10m (x4 over 10m) kubelet Node kubenet-20210507224052-391940 status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 10m (x4 over 10m) kubelet Node kubenet-20210507224052-391940 status is now: NodeHasSufficientPID Normal Starting 9m49s kubelet Starting kubelet. Normal NodeHasSufficientMemory 9m49s kubelet Node kubenet-20210507224052-391940 status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 9m49s kubelet Node kubenet-20210507224052-391940 status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 9m49s kubelet Node kubenet-20210507224052-391940 status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 9m49s kubelet Updated Node Allocatable limit across pods Normal NodeReady 9m47s kubelet Node kubenet-20210507224052-391940 status is now: NodeReady Normal Starting 9m32s kube-proxy Starting kube-proxy. * * ==> dmesg <== * [ +0.000002] ll header: 00000000: ff ff ff ff ff ff 36 4e d5 4a 8f 5e 08 06 ......6N.J.^.. [May 7 22:47] IPv4: martian source 10.85.0.15 from 10.85.0.15, on dev eth0 [ +0.000002] ll header: 00000000: ff ff ff ff ff ff b6 57 00 89 11 95 08 06 .......W...... 
[ +22.003488] IPv4: martian source 10.85.0.16 from 10.85.0.16, on dev eth0 [ +0.000002] ll header: 00000000: ff ff ff ff ff ff 0a be be 2d d8 79 08 06 .........-.y.. [ +25.000314] IPv4: martian source 10.85.0.17 from 10.85.0.17, on dev eth0 [ +0.000002] ll header: 00000000: ff ff ff ff ff ff e2 23 4e 73 88 53 08 06 .......#Ns.S.. [May 7 22:48] IPv4: martian source 10.85.0.18 from 10.85.0.18, on dev eth0 [ +0.000002] ll header: 00000000: ff ff ff ff ff ff 72 77 d1 80 e8 ee 08 06 ......rw...... [ +23.994732] IPv4: martian source 10.85.0.19 from 10.85.0.19, on dev eth0 [ +0.000002] ll header: 00000000: ff ff ff ff ff ff 4e 50 e3 9d 51 b8 08 06 ......NP..Q... [ +21.002955] IPv4: martian source 10.85.0.20 from 10.85.0.20, on dev eth0 [ +0.000002] ll header: 00000000: ff ff ff ff ff ff ea 40 50 83 4d aa 08 06 .......@P.M... [May 7 22:49] IPv4: martian source 10.85.0.21 from 10.85.0.21, on dev eth0 [ +0.000002] ll header: 00000000: ff ff ff ff ff ff 9e 4c 8e 97 64 48 08 06 .......L..dH.. [ +24.007412] IPv4: martian source 10.85.0.22 from 10.85.0.22, on dev eth0 [ +0.000003] ll header: 00000000: ff ff ff ff ff ff 52 59 a7 e4 4d 81 08 06 ......RY..M... [May 7 22:50] IPv4: martian source 10.85.0.23 from 10.85.0.23, on dev eth0 [ +0.000002] ll header: 00000000: ff ff ff ff ff ff 8a 7c 5f ea 79 8b 08 06 .......|_.y... [ +21.970759] IPv4: martian source 10.85.0.24 from 10.85.0.24, on dev eth0 [ +0.000002] ll header: 00000000: ff ff ff ff ff ff 3a ef 41 a5 0c c8 08 06 ......:.A..... [ +20.998457] IPv4: martian source 10.85.0.25 from 10.85.0.25, on dev eth0 [ +0.000003] ll header: 00000000: ff ff ff ff ff ff 6e e0 1c b7 b5 62 08 06 ......n....b.. [May 7 22:51] IPv4: martian source 10.85.0.26 from 10.85.0.26, on dev eth0 [ +0.000002] ll header: 00000000: ff ff ff ff ff ff 32 18 53 75 5e 1d 08 06 ......2.Su^... 
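The kernel's "martian source" warnings above step through the same 10.85.0.15-10.85.0.26 range that containerd assigns to each failed CoreDNS sandbox, and the iptables comments in the containerd errors name the network "crio", i.e. a default bridge CNI config rather than the node's kubenet PodCIDR of 10.244.0.0/24 shown under "describe nodes". A hedged diagnostic sketch for inspecting that state inside the node; the profile name and the CNI-* chain name are copied from the log above, while the commands themselves are generic iptables/CNI inspection that this test run did not execute:

    minikube ssh -p kubenet-20210507224052-391940 -- sudo ip addr show cni0          # which address, if any, the bridge actually carries
    minikube ssh -p kubenet-20210507224052-391940 -- sudo iptables -t nat -S POSTROUTING   # whether any per-sandbox CNI-* jump rules survived
    minikube ssh -p kubenet-20210507224052-391940 -- sudo iptables -t nat -L CNI-ce1c868849a1c7e27a802a8d   # the chain the teardown step could not find
    minikube ssh -p kubenet-20210507224052-391940 -- ls /etc/cni/net.d               # which CNI configs containerd can pick up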
* 
* ==> etcd [9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259] <==
* 2021-05-07 22:47:40.405085 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-07 22:47:50.405134 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-07 22:48:00.405031 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-07 22:48:10.405081 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-07 22:48:20.405065 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-07 22:48:30.405050 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-07 22:48:40.405065 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-07 22:48:50.405102 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-07 22:49:00.405043 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-07 22:49:10.405053 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-07 22:49:20.405067 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-07 22:49:30.405135 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-07 22:49:40.405062 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-07 22:49:50.405118 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-07 22:50:00.405069 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-07 22:50:10.405109 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-07 22:50:20.405093 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-07 22:50:30.405109 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-07 22:50:40.405059 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-07 22:50:50.405090 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-07 22:51:00.405045 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-07 22:51:10.405064 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-07 22:51:20.405104 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-07 22:51:23.549539 I | mvcc: store.index: compact 655
2021-05-07 22:51:23.550379 I | mvcc: finished scheduled compaction at 655 (took 593.293µs)
* 
* ==> kernel <==
* 22:51:23 up 3:30, 0 users, load average: 0.20, 0.65, 1.47
Linux kubenet-20210507224052-391940 4.9.0-15-amd64 #1 SMP Debian 4.9.258-1 (2021-03-08) x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.2 LTS"
* 
* ==> kube-apiserver [b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56] <==
* I0507 22:45:57.692244 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0507 22:46:39.790786 1 client.go:360] parsed scheme: "passthrough"
I0507 22:46:39.790827 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0507 22:46:39.790834 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0507 22:47:19.071425 1 client.go:360] parsed scheme: "passthrough"
I0507 22:47:19.071466 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0507 22:47:19.071474 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0507 22:47:57.395344 1 client.go:360] parsed scheme: "passthrough"
I0507 22:47:57.395384 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0507 22:47:57.395392 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0507 22:48:29.761007 1 client.go:360] parsed scheme: "passthrough"
I0507 22:48:29.761049 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0507 22:48:29.761057 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0507 22:49:03.782982 1 client.go:360] parsed scheme: "passthrough"
I0507 22:49:03.783036 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0507 22:49:03.783046 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0507 22:49:41.218744 1 client.go:360] parsed scheme: "passthrough"
I0507 22:49:41.218786 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0507 22:49:41.218794 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0507 22:50:19.972743 1 client.go:360] parsed scheme: "passthrough"
I0507 22:50:19.972789 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0507 22:50:19.972799 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0507 22:51:02.212506 1 client.go:360] parsed scheme: "passthrough"
I0507 22:51:02.212547 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0507 22:51:02.212556 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* 
* ==> kube-controller-manager [fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9] <==
* I0507 22:41:50.933031 1 event.go:291] "Event occurred" object="kubenet-20210507224052-391940" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node kubenet-20210507224052-391940 event: Registered Node kubenet-20210507224052-391940 in Controller"
I0507 22:41:50.933632 1 shared_informer.go:247] Caches are synced for service account
I0507 22:41:50.940065 1 range_allocator.go:373] Set node kubenet-20210507224052-391940 PodCIDR to [10.244.0.0/24]
I0507 22:41:50.941149 1 shared_informer.go:247] Caches are synced for GC
E0507 22:41:50.942782 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I0507 22:41:50.960531 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
I0507 22:41:50.961422 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I0507 22:41:50.961462 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0507 22:41:50.961556 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
I0507 22:41:51.031646 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
I0507 22:41:51.077601 1 shared_informer.go:247] Caches are synced for disruption
I0507 22:41:51.077620 1 disruption.go:339] Sending events to api server.
I0507 22:41:51.107551 1 shared_informer.go:247] Caches are synced for daemon sets
I0507 22:41:51.110127 1 shared_informer.go:247] Caches are synced for stateful set
I0507 22:41:51.112618 1 shared_informer.go:247] Caches are synced for resource quota
I0507 22:41:51.115370 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-52sqc"
E0507 22:41:51.125690 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"06247d8c-1db3-48d2-acf7-5e82a2c92b1d", ResourceVersion:"263", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63756024089, loc:(*time.Location)(0x6f31360)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000ccdf80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000ccdfc0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000ccdfe0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc000b68280), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000b5c000), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000b5c020), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000b5c060)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001095d40), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0015c13a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000c6ea80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00011a5b0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0015c13f8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I0507 22:41:51.135465 1 shared_informer.go:247] Caches are synced for resource quota
I0507 22:41:51.171149 1 shared_informer.go:247] Caches are synced for HPA
I0507 22:41:51.220391 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I0507 22:41:51.228341 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-hcvcv"
I0507 22:41:51.286342 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0507 22:41:51.538810 1 shared_informer.go:247] Caches are synced for garbage collector
I0507 22:41:51.538833 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0507 22:41:51.586532 1 shared_informer.go:247] Caches are synced for garbage collector
* 
* ==> kube-proxy [75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c] <==
* I0507 22:41:51.838615 1 node.go:172] Successfully retrieved node IP: 192.168.58.2
I0507 22:41:51.838692 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.58.2), assume IPv4 operation
W0507 22:41:51.858827 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0507 22:41:51.858924 1 server_others.go:185] Using iptables Proxier.
I0507 22:41:51.859230 1 server.go:650] Version: v1.20.2
I0507 22:41:51.859758 1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0507 22:41:51.859835 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0507 22:41:51.860215 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0507 22:41:51.860417 1 config.go:315] Starting service config controller
I0507 22:41:51.860431 1 shared_informer.go:240] Waiting for caches to sync for service config
I0507 22:41:51.860473 1 config.go:224] Starting endpoint slice config controller
I0507 22:41:51.862530 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0507 22:41:51.960622 1 shared_informer.go:247] Caches are synced for service config
I0507 22:41:51.962723 1 shared_informer.go:247] Caches are synced for endpoint slice config
* 
* ==> kube-scheduler [148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac] <==
* W0507 22:41:26.340588 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0507 22:41:26.340630 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0507 22:41:26.340640 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0507 22:41:26.340648 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0507 22:41:26.437411 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0507 22:41:26.438179 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0507 22:41:26.438196 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0507 22:41:26.438213 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0507 22:41:26.439320 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0507 22:41:26.439782 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0507 22:41:26.439905 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0507 22:41:26.440319 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0507 22:41:26.440343 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0507 22:41:26.440510 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0507 22:41:26.440621 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0507 22:41:26.440797 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0507 22:41:26.440874 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0507 22:41:26.440915 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0507 22:41:26.440974 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0507 22:41:26.441255 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0507 22:41:27.292352 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0507 22:41:27.302445 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0507 22:41:27.533297 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0507 22:41:27.661209 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0507 22:41:30.338340 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* 
* ==> kubelet <==
* -- Logs begin at Fri 2021-05-07 22:40:55 UTC, end at Fri 2021-05-07 22:51:23 UTC. --
May 07 22:48:59 kubenet-20210507224052-391940 kubelet[1189]: E0507 22:48:59.791998 1189 pod_workers.go:191] Error syncing pod 600662d5-0810-4cc4-9c1c-948a98a998f7 ("coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"af405d726c9e0d35fb2b000f1aaec5622c388c34676e221d5e97b7189291ebcf\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
May 07 22:49:23 kubenet-20210507224052-391940 kubelet[1189]: E0507 22:49:23.811820 1189 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "6810a2c0f0471bd72f9663395d1c014097bbeb2351c9b94c1f6b6eaf7d38baa5": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:49:23 kubenet-20210507224052-391940 kubelet[1189]: E0507 22:49:23.811881 1189 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "6810a2c0f0471bd72f9663395d1c014097bbeb2351c9b94c1f6b6eaf7d38baa5": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:49:23 kubenet-20210507224052-391940 kubelet[1189]: E0507 22:49:23.811896 1189 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "6810a2c0f0471bd72f9663395d1c014097bbeb2351c9b94c1f6b6eaf7d38baa5": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:49:23 kubenet-20210507224052-391940 kubelet[1189]: E0507 22:49:23.811945 1189 pod_workers.go:191] Error syncing pod 600662d5-0810-4cc4-9c1c-948a98a998f7 ("coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"6810a2c0f0471bd72f9663395d1c014097bbeb2351c9b94c1f6b6eaf7d38baa5\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
May 07 22:49:47 kubenet-20210507224052-391940 kubelet[1189]: E0507 22:49:47.811847 1189 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "9c6487a6ec72bf0980abc97b800237ae8322e4fd76e2c1fc66582ec5e98f5940": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:49:47 kubenet-20210507224052-391940 kubelet[1189]: E0507 22:49:47.811934 1189 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "9c6487a6ec72bf0980abc97b800237ae8322e4fd76e2c1fc66582ec5e98f5940": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:49:47 kubenet-20210507224052-391940 kubelet[1189]: E0507 22:49:47.811954 1189 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "9c6487a6ec72bf0980abc97b800237ae8322e4fd76e2c1fc66582ec5e98f5940": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:49:47 kubenet-20210507224052-391940 kubelet[1189]: E0507 22:49:47.812015 1189 pod_workers.go:191] Error syncing pod 600662d5-0810-4cc4-9c1c-948a98a998f7 ("coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"9c6487a6ec72bf0980abc97b800237ae8322e4fd76e2c1fc66582ec5e98f5940\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
May 07 22:50:10 kubenet-20210507224052-391940 kubelet[1189]: E0507 22:50:10.807815 1189 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "dbd9f6bc3b3b0d86982392f4ff861c1ad03ff1c9a1e0b8a8128d18a7be782bd3": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:50:10 kubenet-20210507224052-391940 kubelet[1189]: E0507 22:50:10.807880 1189 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "dbd9f6bc3b3b0d86982392f4ff861c1ad03ff1c9a1e0b8a8128d18a7be782bd3": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:50:10 kubenet-20210507224052-391940 kubelet[1189]: E0507 22:50:10.807897 1189 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "dbd9f6bc3b3b0d86982392f4ff861c1ad03ff1c9a1e0b8a8128d18a7be782bd3": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:50:10 kubenet-20210507224052-391940 kubelet[1189]: E0507 22:50:10.807945 1189 pod_workers.go:191] Error syncing pod 600662d5-0810-4cc4-9c1c-948a98a998f7 ("coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"dbd9f6bc3b3b0d86982392f4ff861c1ad03ff1c9a1e0b8a8128d18a7be782bd3\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
May 07 22:50:32 kubenet-20210507224052-391940 kubelet[1189]: E0507 22:50:32.803847 1189 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "570f921f9af1846f6fab5b097c730a2a4dcb02edfdab2c9028a3a1012564ce37": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:50:32 kubenet-20210507224052-391940 kubelet[1189]: E0507 22:50:32.803913 1189 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "570f921f9af1846f6fab5b097c730a2a4dcb02edfdab2c9028a3a1012564ce37": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:50:32 kubenet-20210507224052-391940 kubelet[1189]: E0507 22:50:32.803926 1189 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "570f921f9af1846f6fab5b097c730a2a4dcb02edfdab2c9028a3a1012564ce37": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:50:32 kubenet-20210507224052-391940 kubelet[1189]: E0507 22:50:32.803978 1189 pod_workers.go:191] Error syncing pod 600662d5-0810-4cc4-9c1c-948a98a998f7 ("coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"570f921f9af1846f6fab5b097c730a2a4dcb02edfdab2c9028a3a1012564ce37\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
May 07 22:50:53 kubenet-20210507224052-391940 kubelet[1189]: E0507 22:50:53.811802 1189 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "e2b8fbd8caf3a62df4af9cf0be123f3930a4b899e63a02799a4a6343ee338a09": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:50:53 kubenet-20210507224052-391940 kubelet[1189]: E0507 22:50:53.811868 1189 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "e2b8fbd8caf3a62df4af9cf0be123f3930a4b899e63a02799a4a6343ee338a09": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:50:53 kubenet-20210507224052-391940 kubelet[1189]: E0507 22:50:53.811881 1189 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "e2b8fbd8caf3a62df4af9cf0be123f3930a4b899e63a02799a4a6343ee338a09": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:50:53 kubenet-20210507224052-391940 kubelet[1189]: E0507 22:50:53.811930 1189 pod_workers.go:191] Error syncing pod 600662d5-0810-4cc4-9c1c-948a98a998f7 ("coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"e2b8fbd8caf3a62df4af9cf0be123f3930a4b899e63a02799a4a6343ee338a09\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
May 07 22:51:17 kubenet-20210507224052-391940 kubelet[1189]: E0507 22:51:17.795855 1189 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "69d67907d1012aa7c758c9e7d57c2bfb2ebc8aa5f335d40e74024fdb511511f7": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:51:17 kubenet-20210507224052-391940 kubelet[1189]: E0507 22:51:17.795933 1189 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "69d67907d1012aa7c758c9e7d57c2bfb2ebc8aa5f335d40e74024fdb511511f7": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:51:17 kubenet-20210507224052-391940 kubelet[1189]: E0507 22:51:17.795949 1189 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "69d67907d1012aa7c758c9e7d57c2bfb2ebc8aa5f335d40e74024fdb511511f7": failed to set bridge addr: could not add IP address to "cni0": permission denied
May 07 22:51:17 kubenet-20210507224052-391940 kubelet[1189]: E0507 22:51:17.796005 1189 pod_workers.go:191] Error syncing pod 600662d5-0810-4cc4-9c1c-948a98a998f7 ("coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-g7c7z_kube-system(600662d5-0810-4cc4-9c1c-948a98a998f7)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"69d67907d1012aa7c758c9e7d57c2bfb2ebc8aa5f335d40e74024fdb511511f7\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
* 
* ==> storage-provisioner [b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4] <==
* I0507 22:41:53.019418 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0507 22:41:53.027300 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0507 22:41:53.027348 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0507 22:41:53.035014 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0507 22:41:53.035091 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"104cb0df-ad28-439e-81dd-84d658c5e949", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubenet-20210507224052-391940_41e761f3-6c89-4538-961d-588f50e9062b became leader
I0507 22:41:53.035146 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubenet-20210507224052-391940_41e761f3-6c89-4538-961d-588f50e9062b!
I0507 22:41:53.135702 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubenet-20210507224052-391940_41e761f3-6c89-4538-961d-588f50e9062b!
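Taken together, the dump above shows a single repeating failure mode: kubelet keeps asking containerd to recreate the coredns-74ff55c5b-g7c7z sandbox, the bridge plugin cannot add an address to cni0 ("permission denied"), and the subsequent teardown fails because the per-sandbox CNI-* NAT chain is already gone. One hedged cross-check, using only standard kubectl (the context and node name come from this run, and the jsonpath query is an illustration rather than something the harness ran), is to compare the node's PodCIDR with the addresses the failing sandboxes were actually handed:

    kubectl --context kubenet-20210507224052-391940 get node kubenet-20210507224052-391940 -o jsonpath='{.spec.podCIDR}'
    # prints 10.244.0.0/24 per "describe nodes" above, while the failed sandboxes were assigned 10.85.0.19-10.85.0.26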
-- /stdout --
helpers_test.go:250: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubenet-20210507224052-391940 -n kubenet-20210507224052-391940
helpers_test.go:257: (dbg) Run: kubectl --context kubenet-20210507224052-391940 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:263: non-running pods: coredns-74ff55c5b-g7c7z
helpers_test.go:265: ======> post-mortem[TestNetworkPlugins/group/kubenet]: describe non-running pods <======
helpers_test.go:268: (dbg) Run: kubectl --context kubenet-20210507224052-391940 describe pod coredns-74ff55c5b-g7c7z
helpers_test.go:268: (dbg) Non-zero exit: kubectl --context kubenet-20210507224052-391940 describe pod coredns-74ff55c5b-g7c7z: exit status 1 (61.342931ms)

** stderr **
Error from server (NotFound): pods "coredns-74ff55c5b-g7c7z" not found

** /stderr **
helpers_test.go:270: kubectl --context kubenet-20210507224052-391940 describe pod coredns-74ff55c5b-g7c7z: exit status 1
helpers_test.go:171: Cleaning up "kubenet-20210507224052-391940" profile ...
helpers_test.go:174: (dbg) Run: out/minikube-linux-amd64 delete -p kubenet-20210507224052-391940
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p kubenet-20210507224052-391940: (2.7416249s)
--- FAIL: TestNetworkPlugins (1853.05s)
    --- FAIL: TestNetworkPlugins/group (0.00s)
        --- SKIP: TestNetworkPlugins/group/flannel (0.00s)
        --- PASS: TestNetworkPlugins/group/cilium (158.04s)
            --- PASS: TestNetworkPlugins/group/cilium/Start (140.72s)
            --- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)
            --- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.29s)
            --- PASS: TestNetworkPlugins/group/cilium/NetCatPod (8.42s)
            --- PASS: TestNetworkPlugins/group/cilium/DNS (0.15s)
            --- PASS: TestNetworkPlugins/group/cilium/Localhost (0.16s)
            --- PASS: TestNetworkPlugins/group/cilium/HairPin (0.14s)
        --- FAIL: TestNetworkPlugins/group/auto (324.35s)
            --- PASS: TestNetworkPlugins/group/auto/Start (147.97s)
            --- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)
            --- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.24s)
            --- PASS: TestNetworkPlugins/group/auto/DNS (160.51s)
            --- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)
            --- FAIL: TestNetworkPlugins/group/auto/HairPin (0.17s)
        --- PASS: TestNetworkPlugins/group/calico (164.05s)
            --- PASS: TestNetworkPlugins/group/calico/Start (145.50s)
            --- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)
            --- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)
            --- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.27s)
            --- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)
            --- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)
            --- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)
        --- SKIP: TestNetworkPlugins/group/custom-weave (164.99s)
            --- PASS: TestNetworkPlugins/group/custom-weave/Start (152.84s)
            --- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.29s)
            --- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (8.43s)
        --- PASS: TestNetworkPlugins/group/enable-default-cni (158.31s)
            --- PASS: TestNetworkPlugins/group/enable-default-cni/Start (136.16s)
            --- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)
            --- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (18.27s)
            --- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)
            --- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)
            --- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)
        --- PASS: TestNetworkPlugins/group/kindnet (140.79s)
            --- PASS: TestNetworkPlugins/group/kindnet/Start (122.60s)
            --- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
            --- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)
            --- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)
            --- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)
            --- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)
            --- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)
        --- PASS: TestNetworkPlugins/group/bridge (175.01s)
            --- PASS: TestNetworkPlugins/group/bridge/Start (163.09s)
            --- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)
            --- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.25s)
            --- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)
            --- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)
            --- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
        --- FAIL: TestNetworkPlugins/group/false (641.72s)
            --- FAIL: TestNetworkPlugins/group/false/Start (637.46s)
        --- FAIL: TestNetworkPlugins/group/kubenet (634.28s)
            --- FAIL: TestNetworkPlugins/group/kubenet/Start (629.71s)
FAIL
Tests completed in 1h2m0.921491051s (result code 1)
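Of the network-plugin configurations in this run, only kubenet and false fail outright (their Start subtests run ~630s before giving up), and auto fails only its HairPin check; cilium, calico, kindnet, bridge, enable-default-cni, and custom-weave's executed subtests all pass. A hedged sketch for rerunning just the hard failures from a minikube checkout; the -run patterns come from the summary above, while the timeout value is an assumption and the prebuilt out/minikube-linux-amd64 binary (which the harness invokes throughout this log) must already exist:

    go test ./test/integration -v -timeout 70m -run 'TestNetworkPlugins/group/(kubenet|false)'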