=== RUN TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run: out/minikube-linux-amd64 start -p calico-210250 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker --container-runtime=docker
=== CONT TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-210250 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker --container-runtime=docker: exit status 80 (8m39.521500848s)
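Annotation: the harness simply shells out to the minikube binary and fails the test on any non-zero exit (here 80, minikube's own exit code, after 8m39s). A minimal Go sketch of that invocation pattern, illustrative only; the real runner in net_test.go adds its own logging and cleanup:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the command logged above; not the actual net_test.go helper.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "calico-210250",
		"--memory=2048", "--alsologtostderr", "--wait=true", "--wait-timeout=5m",
		"--cni=calico", "--driver=docker", "--container-runtime=docker")
	out, err := cmd.CombinedOutput()
	if ee, ok := err.(*exec.ExitError); ok {
		// Non-zero exit from minikube, e.g. the exit status 80 seen here.
		fmt.Printf("non-zero exit %d\n%s\n", ee.ExitCode(), out)
	}
}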
-- stdout --
* [calico-210250] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=15565
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting control plane node calico-210250 in cluster calico-210250
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2048MB) ...
* Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring Calico (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: default-storageclass, storage-provisioner
-- /stdout --
** stderr **
I0108 21:06:04.964327 284235 out.go:296] Setting OutFile to fd 1 ...
I0108 21:06:04.964464 284235 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 21:06:04.964482 284235 out.go:309] Setting ErrFile to fd 2...
I0108 21:06:04.964490 284235 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 21:06:04.964714 284235 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
I0108 21:06:04.965737 284235 out.go:303] Setting JSON to false
I0108 21:06:04.967253 284235 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2914,"bootTime":1673209051,"procs":695,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0108 21:06:04.967326 284235 start.go:135] virtualization: kvm guest
I0108 21:06:04.970285 284235 out.go:177] * [calico-210250] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
I0108 21:06:04.971963 284235 out.go:177] - MINIKUBE_LOCATION=15565
I0108 21:06:04.971909 284235 notify.go:220] Checking for updates...
I0108 21:06:04.973674 284235 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0108 21:06:04.975328 284235 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
I0108 21:06:04.976959 284235 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
I0108 21:06:04.978884 284235 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0108 21:06:04.980884 284235 config.go:180] Loaded profile config "cilium-210250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0108 21:06:04.980988 284235 config.go:180] Loaded profile config "kindnet-210249": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0108 21:06:04.981087 284235 config.go:180] Loaded profile config "kubernetes-upgrade-210149": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0108 21:06:04.981155 284235 driver.go:365] Setting default libvirt URI to qemu:///system
I0108 21:06:05.010745 284235 docker.go:137] docker version: linux-20.10.22
I0108 21:06:05.010868 284235 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0108 21:06:05.114121 284235 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2023-01-08 21:06:05.03270225 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0108 21:06:05.114270 284235 docker.go:254] overlay module found
I0108 21:06:05.116696 284235 out.go:177] * Using the docker driver based on user configuration
I0108 21:06:05.118459 284235 start.go:294] selected driver: docker
I0108 21:06:05.118485 284235 start.go:838] validating driver "docker" against <nil>
I0108 21:06:05.118507 284235 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0108 21:06:05.119516 284235 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0108 21:06:05.224993 284235 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2023-01-08 21:06:05.14191075 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0108 21:06:05.225133 284235 start_flags.go:303] no existing cluster config was found, will generate one from the flags
I0108 21:06:05.225334 284235 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0108 21:06:05.227816 284235 out.go:177] * Using Docker driver with root privileges
I0108 21:06:05.229570 284235 cni.go:95] Creating CNI manager for "calico"
I0108 21:06:05.229593 284235 start_flags.go:312] Found "Calico" CNI - setting NetworkPlugin=cni
I0108 21:06:05.229616 284235 start_flags.go:317] config:
{Name:calico-210250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-210250 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0108 21:06:05.231509 284235 out.go:177] * Starting control plane node calico-210250 in cluster calico-210250
I0108 21:06:05.233336 284235 cache.go:120] Beginning downloading kic base image for docker with docker
I0108 21:06:05.235256 284235 out.go:177] * Pulling base image ...
I0108 21:06:05.237019 284235 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I0108 21:06:05.237074 284235 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
I0108 21:06:05.237085 284235 cache.go:57] Caching tarball of preloaded images
I0108 21:06:05.237161 284235 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
I0108 21:06:05.237354 284235 preload.go:174] Found /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0108 21:06:05.237370 284235 cache.go:60] Finished verifying existence of preloaded tar for v1.25.3 on docker
I0108 21:06:05.237522 284235 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/config.json ...
I0108 21:06:05.237551 284235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/config.json: {Name:mk74bbdc944afd4086b40f6396812cc5bf2a8342 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 21:06:05.262623 284235 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
I0108 21:06:05.262668 284235 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
I0108 21:06:05.262695 284235 cache.go:193] Successfully downloaded all kic artifacts
I0108 21:06:05.262734 284235 start.go:364] acquiring machines lock for calico-210250: {Name:mk33a968711aebb5a4baeb72feae6942b04f9136 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0108 21:06:05.262931 284235 start.go:368] acquired machines lock for "calico-210250" in 176.045µs
I0108 21:06:05.262963 284235 start.go:93] Provisioning new machine with config: &{Name:calico-210250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-210250 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0108 21:06:05.263086 284235 start.go:125] createHost starting for "" (driver="docker")
I0108 21:06:05.267425 284235 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
I0108 21:06:05.267694 284235 start.go:159] libmachine.API.Create for "calico-210250" (driver="docker")
I0108 21:06:05.267726 284235 client.go:168] LocalClient.Create starting
I0108 21:06:05.267799 284235 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem
I0108 21:06:05.267835 284235 main.go:134] libmachine: Decoding PEM data...
I0108 21:06:05.267850 284235 main.go:134] libmachine: Parsing certificate...
I0108 21:06:05.267902 284235 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem
I0108 21:06:05.267921 284235 main.go:134] libmachine: Decoding PEM data...
I0108 21:06:05.267934 284235 main.go:134] libmachine: Parsing certificate...
I0108 21:06:05.268284 284235 cli_runner.go:164] Run: docker network inspect calico-210250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0108 21:06:05.292868 284235 cli_runner.go:211] docker network inspect calico-210250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0108 21:06:05.292938 284235 network_create.go:272] running [docker network inspect calico-210250] to gather additional debugging logs...
I0108 21:06:05.292956 284235 cli_runner.go:164] Run: docker network inspect calico-210250
W0108 21:06:05.316455 284235 cli_runner.go:211] docker network inspect calico-210250 returned with exit code 1
I0108 21:06:05.316489 284235 network_create.go:275] error running [docker network inspect calico-210250]: docker network inspect calico-210250: exit status 1
stdout:
[]
stderr:
Error: No such network: calico-210250
I0108 21:06:05.316507 284235 network_create.go:277] output of [docker network inspect calico-210250]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: calico-210250
** /stderr **
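Annotation: this failure is expected. An exit code of 1 with "No such network" on stderr is how minikube concludes the network does not exist yet before creating it. A Go sketch of that probe, using a hypothetical helper rather than minikube's actual network_create.go code:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// networkExists treats docker's exit code 1 plus "No such network" on
// stderr as a clean "not found", the way the log above does.
func networkExists(name string) (bool, error) {
	cmd := exec.Command("docker", "network", "inspect", name)
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		if strings.Contains(stderr.String(), "No such network") {
			return false, nil
		}
		return false, err // a real failure, not just "missing"
	}
	return true, nil
}

func main() {
	ok, err := networkExists("calico-210250")
	fmt.Println(ok, err)
}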
I0108 21:06:05.316583 284235 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0108 21:06:05.343013 284235 network.go:244] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-5119f095d2f2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:7e:84:74:d1}}
I0108 21:06:05.343811 284235 network.go:244] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-b7eaf529ac5a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:ae:8d:ad:e1}}
I0108 21:06:05.344877 284235 network.go:244] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-236ac17fc4da IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:d6:40:76:59}}
I0108 21:06:05.346523 284235 network.go:306] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.76.0:0xc000596088] misses:0}
I0108 21:06:05.346561 284235 network.go:239] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
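Annotation: the three "skipping subnet" lines and the reservation above show minikube's free-subnet scan: candidate 192.168.x.0/24 networks are probed in order and the first one that does not collide with an existing bridge is reserved. A simplified Go sketch of that scan, assuming a stand-in taken() probe and the +9 step the sequence 49, 58, 67, 76 suggests (the real logic lives in network.go):

package main

import (
	"fmt"
	"net"
)

// taken is a stand-in for minikube's real check, which inspects host
// interfaces and docker networks for an overlapping subnet.
func taken(cidr string, used []*net.IPNet) bool {
	_, candidate, _ := net.ParseCIDR(cidr)
	for _, u := range used {
		if u.Contains(candidate.IP) || candidate.Contains(u.IP) {
			return true
		}
	}
	return false
}

func main() {
	var used []*net.IPNet
	for _, s := range []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"} {
		_, n, _ := net.ParseCIDR(s)
		used = append(used, n)
	}
	for third := 49; third < 255; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken(cidr, used) {
			// Matches the log: 49, 58, 67 are taken, 76 is free; the
			// gateway becomes .1 and the node gets the static IP .2.
			fmt.Println("using free private subnet", cidr)
			break
		}
	}
}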
I0108 21:06:05.346575 284235 network_create.go:115] attempt to create docker network calico-210250 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I0108 21:06:05.346643 284235 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-210250 calico-210250
I0108 21:06:05.416666 284235 network_create.go:99] docker network calico-210250 192.168.76.0/24 created
I0108 21:06:05.416708 284235 kic.go:106] calculated static IP "192.168.76.2" for the "calico-210250" container
I0108 21:06:05.416765 284235 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0108 21:06:05.452872 284235 cli_runner.go:164] Run: docker volume create calico-210250 --label name.minikube.sigs.k8s.io=calico-210250 --label created_by.minikube.sigs.k8s.io=true
I0108 21:06:05.481775 284235 oci.go:103] Successfully created a docker volume calico-210250
I0108 21:06:05.481857 284235 cli_runner.go:164] Run: docker run --rm --name calico-210250-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-210250 --entrypoint /usr/bin/test -v calico-210250:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -d /var/lib
I0108 21:06:06.148468 284235 oci.go:107] Successfully prepared a docker volume calico-210250
I0108 21:06:06.148500 284235 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I0108 21:06:06.148527 284235 kic.go:179] Starting extracting preloaded images to volume ...
I0108 21:06:06.148601 284235 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-210250:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir
I0108 21:06:09.233369 284235 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-210250:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir: (3.084675909s)
I0108 21:06:09.233414 284235 kic.go:188] duration metric: took 3.084889 seconds to extract preloaded images to volume
W0108 21:06:09.233577 284235 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0108 21:06:09.233701 284235 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0108 21:06:09.366032 284235 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-210250 --name calico-210250 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-210250 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-210250 --network calico-210250 --ip 192.168.76.2 --volume calico-210250:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c
I0108 21:06:09.902180 284235 cli_runner.go:164] Run: docker container inspect calico-210250 --format={{.State.Running}}
I0108 21:06:09.937803 284235 cli_runner.go:164] Run: docker container inspect calico-210250 --format={{.State.Status}}
I0108 21:06:09.963685 284235 cli_runner.go:164] Run: docker exec calico-210250 stat /var/lib/dpkg/alternatives/iptables
I0108 21:06:10.028047 284235 oci.go:144] the created container "calico-210250" has a running status.
I0108 21:06:10.028089 284235 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/calico-210250/id_rsa...
I0108 21:06:10.391959 284235 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15565-3617/.minikube/machines/calico-210250/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0108 21:06:10.491720 284235 cli_runner.go:164] Run: docker container inspect calico-210250 --format={{.State.Status}}
I0108 21:06:10.526083 284235 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0108 21:06:10.526104 284235 kic_runner.go:114] Args: [docker exec --privileged calico-210250 chown docker:docker /home/docker/.ssh/authorized_keys]
I0108 21:06:10.609112 284235 cli_runner.go:164] Run: docker container inspect calico-210250 --format={{.State.Status}}
I0108 21:06:10.640487 284235 machine.go:88] provisioning docker machine ...
I0108 21:06:10.640527 284235 ubuntu.go:169] provisioning hostname "calico-210250"
I0108 21:06:10.640578 284235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-210250
I0108 21:06:10.673585 284235 main.go:134] libmachine: Using SSH client type: native
I0108 21:06:10.673820 284235 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil> [] 0s} 127.0.0.1 33004 <nil> <nil>}
I0108 21:06:10.673926 284235 main.go:134] libmachine: About to run SSH command:
sudo hostname calico-210250 && echo "calico-210250" | sudo tee /etc/hostname
I0108 21:06:10.812158 284235 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-210250
I0108 21:06:10.812241 284235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-210250
I0108 21:06:10.837685 284235 main.go:134] libmachine: Using SSH client type: native
I0108 21:06:10.837846 284235 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil> [] 0s} 127.0.0.1 33004 <nil> <nil>}
I0108 21:06:10.837865 284235 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\scalico-210250' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-210250/g' /etc/hosts;
else
echo '127.0.1.1 calico-210250' | sudo tee -a /etc/hosts;
fi
fi
I0108 21:06:10.958756 284235 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0108 21:06:10.958788 284235 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
I0108 21:06:10.958812 284235 ubuntu.go:177] setting up certificates
I0108 21:06:10.958820 284235 provision.go:83] configureAuth start
I0108 21:06:10.958869 284235 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-210250
I0108 21:06:10.989936 284235 provision.go:138] copyHostCerts
I0108 21:06:10.990004 284235 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
I0108 21:06:10.990014 284235 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
I0108 21:06:10.990100 284235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1082 bytes)
I0108 21:06:10.990236 284235 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
I0108 21:06:10.990246 284235 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
I0108 21:06:10.990290 284235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
I0108 21:06:10.990356 284235 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
I0108 21:06:10.990362 284235 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
I0108 21:06:10.990396 284235 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
I0108 21:06:10.990451 284235 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.calico-210250 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube calico-210250]
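Annotation: the san=[...] list above becomes the server certificate's subject alternative names, split into IP and DNS entries. A minimal crypto/x509 sketch that builds an equivalent template with the values from the log; self-signed here for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair referenced above:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.calico-210250"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		// SANs from the log line: san=[192.168.76.2 127.0.0.1 localhost minikube calico-210250]
		IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "calico-210250"},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Println("DER cert bytes:", len(der))
}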
I0108 21:06:11.093249 284235 provision.go:172] copyRemoteCerts
I0108 21:06:11.093320 284235 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0108 21:06:11.093368 284235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-210250
I0108 21:06:11.122394 284235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/calico-210250/id_rsa Username:docker}
I0108 21:06:11.215432 284235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
I0108 21:06:11.237833 284235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0108 21:06:11.260118 284235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0108 21:06:11.282346 284235 provision.go:86] duration metric: configureAuth took 323.515355ms
I0108 21:06:11.282381 284235 ubuntu.go:193] setting minikube options for container-runtime
I0108 21:06:11.282585 284235 config.go:180] Loaded profile config "calico-210250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0108 21:06:11.282699 284235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-210250
I0108 21:06:11.309048 284235 main.go:134] libmachine: Using SSH client type: native
I0108 21:06:11.309225 284235 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil> [] 0s} 127.0.0.1 33004 <nil> <nil>}
I0108 21:06:11.309248 284235 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0108 21:06:11.431259 284235 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
I0108 21:06:11.431283 284235 ubuntu.go:71] root file system type: overlay
I0108 21:06:11.431481 284235 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0108 21:06:11.431549 284235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-210250
I0108 21:06:11.458436 284235 main.go:134] libmachine: Using SSH client type: native
I0108 21:06:11.458629 284235 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil> [] 0s} 127.0.0.1 33004 <nil> <nil>}
I0108 21:06:11.458773 284235 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0108 21:06:11.600696 284235 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0108 21:06:11.600797 284235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-210250
I0108 21:06:11.642182 284235 main.go:134] libmachine: Using SSH client type: native
I0108 21:06:11.642389 284235 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil> [] 0s} 127.0.0.1 33004 <nil> <nil>}
I0108 21:06:11.642418 284235 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0108 21:06:12.604207 284235 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2022-10-25 18:00:04.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2023-01-08 21:06:11.598626046 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
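Annotation: the restart above is guarded. diff -u exits non-zero only when the rendered unit differs from the installed one, so docker is replaced, re-enabled, and restarted only on change; the diff output echoed back is that guard firing. The same idempotent-update guard, sketched in Go against local files rather than over SSH (hypothetical helper, not minikube's provisioner):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// syncUnit mimics the shell one-liner above: replace the unit and restart
// the service only when the new rendering differs. The log's shell variant
// uses sudo mv and systemctl -f; this sketch assumes sufficient privileges.
func syncUnit(current, rendered, service string) error {
	if err := exec.Command("diff", "-u", current, rendered).Run(); err == nil {
		return nil // identical: nothing to do
	}
	if err := os.Rename(rendered, current); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", service},
		{"systemctl", "restart", service},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v\n%s", args, err, out)
		}
	}
	return nil
}

func main() {
	_ = syncUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new", "docker")
}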
I0108 21:06:12.604245 284235 machine.go:91] provisioned docker machine in 1.963732623s
I0108 21:06:12.604259 284235 client.go:171] LocalClient.Create took 7.336525384s
I0108 21:06:12.604278 284235 start.go:167] duration metric: libmachine.API.Create for "calico-210250" took 7.336583773s
I0108 21:06:12.604288 284235 start.go:300] post-start starting for "calico-210250" (driver="docker")
I0108 21:06:12.604301 284235 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0108 21:06:12.604356 284235 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0108 21:06:12.604401 284235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-210250
I0108 21:06:12.642495 284235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/calico-210250/id_rsa Username:docker}
I0108 21:06:12.731130 284235 ssh_runner.go:195] Run: cat /etc/os-release
I0108 21:06:12.734260 284235 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0108 21:06:12.734285 284235 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0108 21:06:12.734297 284235 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0108 21:06:12.734303 284235 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0108 21:06:12.734313 284235 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
I0108 21:06:12.734364 284235 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
I0108 21:06:12.734429 284235 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/100972.pem -> 100972.pem in /etc/ssl/certs
I0108 21:06:12.734499 284235 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0108 21:06:12.742936 284235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/100972.pem --> /etc/ssl/certs/100972.pem (1708 bytes)
I0108 21:06:12.763136 284235 start.go:303] post-start completed in 158.828193ms
I0108 21:06:12.763630 284235 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-210250
I0108 21:06:12.802846 284235 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/config.json ...
I0108 21:06:12.803119 284235 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0108 21:06:12.803175 284235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-210250
I0108 21:06:12.837272 284235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/calico-210250/id_rsa Username:docker}
I0108 21:06:12.919452 284235 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0108 21:06:12.923976 284235 start.go:128] duration metric: createHost completed in 7.660877504s
I0108 21:06:12.924014 284235 start.go:83] releasing machines lock for "calico-210250", held for 7.661068605s
I0108 21:06:12.924102 284235 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-210250
I0108 21:06:12.978835 284235 ssh_runner.go:195] Run: cat /version.json
I0108 21:06:12.978889 284235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-210250
I0108 21:06:12.979152 284235 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0108 21:06:12.979198 284235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-210250
I0108 21:06:13.022052 284235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/calico-210250/id_rsa Username:docker}
I0108 21:06:13.023225 284235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/calico-210250/id_rsa Username:docker}
I0108 21:06:13.106200 284235 ssh_runner.go:195] Run: systemctl --version
I0108 21:06:13.136887 284235 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0108 21:06:13.144670 284235 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
I0108 21:06:13.158716 284235 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0108 21:06:13.249164 284235 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I0108 21:06:13.329376 284235 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0108 21:06:13.339816 284235 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0108 21:06:13.339899 284235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0108 21:06:13.350280 284235 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0108 21:06:13.364718 284235 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0108 21:06:13.457002 284235 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0108 21:06:13.538775 284235 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0108 21:06:13.624723 284235 ssh_runner.go:195] Run: sudo systemctl restart docker
I0108 21:06:13.931077 284235 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0108 21:06:14.088157 284235 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0108 21:06:14.187818 284235 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I0108 21:06:14.199078 284235 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0108 21:06:14.199145 284235 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0108 21:06:14.203660 284235 start.go:472] Will wait 60s for crictl version
I0108 21:06:14.203728 284235 ssh_runner.go:195] Run: sudo crictl version
I0108 21:06:14.240663 284235 start.go:481] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.21
RuntimeApiVersion: 1.41.0
I0108 21:06:14.240732 284235 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0108 21:06:14.275839 284235 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0108 21:06:14.317338 284235 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
I0108 21:06:14.317442 284235 cli_runner.go:164] Run: docker network inspect calico-210250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0108 21:06:14.348753 284235 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0108 21:06:14.352183 284235 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0108 21:06:14.364167 284235 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I0108 21:06:14.364224 284235 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0108 21:06:14.395917 284235 docker.go:613] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0108 21:06:14.395957 284235 docker.go:543] Images already preloaded, skipping extraction
I0108 21:06:14.396016 284235 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0108 21:06:14.427063 284235 docker.go:613] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0108 21:06:14.427091 284235 cache_images.go:84] Images are preloaded, skipping loading
I0108 21:06:14.427141 284235 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0108 21:06:14.528335 284235 cni.go:95] Creating CNI manager for "calico"
I0108 21:06:14.528383 284235 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0108 21:06:14.528414 284235 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-210250 NodeName:calico-210250 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
I0108 21:06:14.528580 284235 kubeadm.go:163] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "calico-210250"
kubeletExtraArgs:
node-ip: 192.168.76.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.25.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
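Annotation: the YAML above is rendered from the kubeadm options struct logged at kubeadm.go:158, so PodSubnet, AdvertiseAddress, ServiceCIDR, and NodeName flow straight into the manifest. A toy text/template rendering with a reduced, assumed options struct (field names mirror the logged dump; the template itself is a small excerpt, not minikube's full one):

package main

import (
	"os"
	"text/template"
)

// Opts is a toy subset of the kubeadm options struct dumped above.
type Opts struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	PodSubnet        string
	ServiceCIDR      string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	o := Opts{"192.168.76.2", 8443, "calico-210250", "10.244.0.0/16", "10.96.0.0/12"}
	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, o)
}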
I0108 21:06:14.528665 284235 kubeadm.go:962] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=calico-210250 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.25.3 ClusterName:calico-210250 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
I0108 21:06:14.528711 284235 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
I0108 21:06:14.577039 284235 binaries.go:44] Found k8s binaries, skipping transfer
I0108 21:06:14.577128 284235 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0108 21:06:14.585868 284235 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (475 bytes)
I0108 21:06:14.603052 284235 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0108 21:06:14.618218 284235 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2036 bytes)
I0108 21:06:14.634227 284235 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0108 21:06:14.637818 284235 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
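
Aside: the one-liner above makes the host record idempotent, stripping any stale control-plane.minikube.internal entry before appending the current mapping. The same logic as a small Go sketch (illustrative, not minikube's code):

package main

import (
	"fmt"
	"strings"
)

// Drop any existing line ending in the host name, then append ip -> name.
func upsertHostRecord(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) && !strings.HasSuffix(line, " "+name) {
			kept = append(kept, line)
		}
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	fmt.Print(upsertHostRecord("127.0.0.1 localhost\n", "192.168.76.2", "control-plane.minikube.internal"))
}
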
I0108 21:06:14.650704 284235 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250 for IP: 192.168.76.2
I0108 21:06:14.650825 284235 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
I0108 21:06:14.650875 284235 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
I0108 21:06:14.650935 284235 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/client.key
I0108 21:06:14.650947 284235 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/client.crt with IP's: []
I0108 21:06:14.772241 284235 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/client.crt ...
I0108 21:06:14.772279 284235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/client.crt: {Name:mkf1b12b41bd74e661d9517f3328b24b211e5b3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 21:06:14.772581 284235 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/client.key ...
I0108 21:06:14.772600 284235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/client.key: {Name:mk244b5a74b59ed13470aa7fd8d4f05b89d4629b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 21:06:14.772735 284235 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/apiserver.key.31bdca25
I0108 21:06:14.772758 284235 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0108 21:06:14.911560 284235 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/apiserver.crt.31bdca25 ...
I0108 21:06:14.911596 284235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/apiserver.crt.31bdca25: {Name:mk3790d7446d156768ccf38c8fde563483d32624 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 21:06:14.911810 284235 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/apiserver.key.31bdca25 ...
I0108 21:06:14.911827 284235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/apiserver.key.31bdca25: {Name:mke15d4d363acabb76a7d345b1c11b79d6e987bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 21:06:14.911940 284235 certs.go:320] copying /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/apiserver.crt
I0108 21:06:14.912008 284235 certs.go:324] copying /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/apiserver.key
I0108 21:06:14.912047 284235 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/proxy-client.key
I0108 21:06:14.912060 284235 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/proxy-client.crt with IP's: []
I0108 21:06:15.215293 284235 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/proxy-client.crt ...
I0108 21:06:15.215327 284235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/proxy-client.crt: {Name:mk93602b4e613614102a69029715d250191f9e09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 21:06:15.215543 284235 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/proxy-client.key ...
I0108 21:06:15.215558 284235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/proxy-client.key: {Name:mk952d26aae3080caf1de59864fc7fc23202d5b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 21:06:15.215738 284235 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10097.pem (1338 bytes)
W0108 21:06:15.215770 284235 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10097_empty.pem, impossibly tiny 0 bytes
I0108 21:06:15.215780 284235 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1679 bytes)
I0108 21:06:15.215804 284235 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1082 bytes)
I0108 21:06:15.215825 284235 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
I0108 21:06:15.215846 284235 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
I0108 21:06:15.215893 284235 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/100972.pem (1708 bytes)
I0108 21:06:15.216556 284235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0108 21:06:15.239184 284235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0108 21:06:15.260450 284235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0108 21:06:15.281060 284235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/calico-210250/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0108 21:06:15.302465 284235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0108 21:06:15.323987 284235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0108 21:06:15.344563 284235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0108 21:06:15.367854 284235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0108 21:06:15.393092 284235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10097.pem --> /usr/share/ca-certificates/10097.pem (1338 bytes)
I0108 21:06:15.416207 284235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/100972.pem --> /usr/share/ca-certificates/100972.pem (1708 bytes)
I0108 21:06:15.439077 284235 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0108 21:06:15.461004 284235 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0108 21:06:15.476947 284235 ssh_runner.go:195] Run: openssl version
I0108 21:06:15.482215 284235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100972.pem && ln -fs /usr/share/ca-certificates/100972.pem /etc/ssl/certs/100972.pem"
I0108 21:06:15.492524 284235 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100972.pem
I0108 21:06:15.496736 284235 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 8 20:38 /usr/share/ca-certificates/100972.pem
I0108 21:06:15.496792 284235 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100972.pem
I0108 21:06:15.503598 284235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100972.pem /etc/ssl/certs/3ec20f2e.0"
I0108 21:06:15.513108 284235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0108 21:06:15.521467 284235 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0108 21:06:15.524997 284235 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 8 20:27 /usr/share/ca-certificates/minikubeCA.pem
I0108 21:06:15.525051 284235 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0108 21:06:15.531421 284235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0108 21:06:15.541218 284235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10097.pem && ln -fs /usr/share/ca-certificates/10097.pem /etc/ssl/certs/10097.pem"
I0108 21:06:15.553232 284235 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10097.pem
I0108 21:06:15.558534 284235 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 8 20:38 /usr/share/ca-certificates/10097.pem
I0108 21:06:15.558606 284235 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10097.pem
I0108 21:06:15.566893 284235 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10097.pem /etc/ssl/certs/51391683.0"
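
Aside: apiserver.crt above is signed for the IP SANs [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1], i.e. the node IP, the first service-CIDR address, and loopback. A self-contained Go sketch producing a certificate with the same IP SANs; it self-signs with a throwaway ECDSA key, whereas minikube signs with its minikubeCA:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		// The same IP SANs the log reports for apiserver.crt.
		IPAddresses: []net.IP{
			net.ParseIP("192.168.76.2"),
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
		},
		KeyUsage:              x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
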
I0108 21:06:15.576305 284235 kubeadm.go:396] StartCluster: {Name:calico-210250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-210250 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0108 21:06:15.576424 284235 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0108 21:06:15.602969 284235 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0108 21:06:15.611473 284235 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0108 21:06:15.619316 284235 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0108 21:06:15.619382 284235 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0108 21:06:15.627954 284235 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0108 21:06:15.627991 284235 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
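
Aside: a sketch of how a runner can assemble the init invocation above with os/exec; the ignore list is abbreviated and the helper itself is illustrative, not minikube's:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Build (but do not run) the kubeadm init command line.
func kubeadmInit(version, config string, ignored []string) *exec.Cmd {
	return exec.Command(
		"/var/lib/minikube/binaries/"+version+"/kubeadm",
		"init", "--config", config,
		"--ignore-preflight-errors="+strings.Join(ignored, ","),
	)
}

func main() {
	cmd := kubeadmInit("v1.25.3", "/var/tmp/minikube/kubeadm.yaml",
		[]string{"Swap", "Mem", "SystemVerification"})
	fmt.Println(cmd.String()) // print the command, don't execute it
}
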
I0108 21:06:15.680274 284235 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
I0108 21:06:15.680323 284235 kubeadm.go:317] [preflight] Running pre-flight checks
I0108 21:06:15.729894 284235 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
I0108 21:06:15.729997 284235 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
I0108 21:06:15.730090 284235 kubeadm.go:317] OS: Linux
I0108 21:06:15.730157 284235 kubeadm.go:317] CGROUPS_CPU: enabled
I0108 21:06:15.730208 284235 kubeadm.go:317] CGROUPS_CPUACCT: enabled
I0108 21:06:15.730283 284235 kubeadm.go:317] CGROUPS_CPUSET: enabled
I0108 21:06:15.730382 284235 kubeadm.go:317] CGROUPS_DEVICES: enabled
I0108 21:06:15.730465 284235 kubeadm.go:317] CGROUPS_FREEZER: enabled
I0108 21:06:15.730535 284235 kubeadm.go:317] CGROUPS_MEMORY: enabled
I0108 21:06:15.730590 284235 kubeadm.go:317] CGROUPS_PIDS: enabled
I0108 21:06:15.730704 284235 kubeadm.go:317] CGROUPS_HUGETLB: enabled
I0108 21:06:15.730781 284235 kubeadm.go:317] CGROUPS_BLKIO: enabled
I0108 21:06:15.805835 284235 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
I0108 21:06:15.805967 284235 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0108 21:06:15.806138 284235 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0108 21:06:15.963674 284235 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0108 21:06:15.966303 284235 out.go:204] - Generating certificates and keys ...
I0108 21:06:15.966450 284235 kubeadm.go:317] [certs] Using existing ca certificate authority
I0108 21:06:15.966560 284235 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
I0108 21:06:16.186041 284235 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
I0108 21:06:16.330719 284235 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
I0108 21:06:16.489955 284235 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
I0108 21:06:16.677392 284235 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
I0108 21:06:16.913850 284235 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
I0108 21:06:16.913958 284235 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [calico-210250 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
I0108 21:06:17.036736 284235 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
I0108 21:06:17.037063 284235 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [calico-210250 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
I0108 21:06:17.111238 284235 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
I0108 21:06:17.330420 284235 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
I0108 21:06:17.510279 284235 kubeadm.go:317] [certs] Generating "sa" key and public key
I0108 21:06:17.510362 284235 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0108 21:06:17.642737 284235 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
I0108 21:06:17.918135 284235 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0108 21:06:18.000721 284235 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0108 21:06:18.081343 284235 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0108 21:06:18.114511 284235 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0108 21:06:18.119931 284235 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0108 21:06:18.120018 284235 kubeadm.go:317] [kubelet-start] Starting the kubelet
I0108 21:06:18.219491 284235 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0108 21:06:18.244030 284235 out.go:204] - Booting up control plane ...
I0108 21:06:18.244241 284235 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0108 21:06:18.244362 284235 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0108 21:06:18.244461 284235 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0108 21:06:18.244591 284235 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0108 21:06:18.244813 284235 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0108 21:06:29.243573 284235 kubeadm.go:317] [apiclient] All control plane components are healthy after 11.003004 seconds
I0108 21:06:29.243702 284235 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0108 21:06:29.256957 284235 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0108 21:06:29.779839 284235 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
I0108 21:06:29.780103 284235 kubeadm.go:317] [mark-control-plane] Marking the node calico-210250 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0108 21:06:30.290232 284235 kubeadm.go:317] [bootstrap-token] Using token: 56pr3x.r1f115b05u8vd4v4
I0108 21:06:30.292577 284235 out.go:204] - Configuring RBAC rules ...
I0108 21:06:30.292737 284235 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0108 21:06:30.296566 284235 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0108 21:06:30.304953 284235 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0108 21:06:30.307846 284235 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
I0108 21:06:30.312332 284235 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0108 21:06:30.316013 284235 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0108 21:06:30.325761 284235 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0108 21:06:30.573607 284235 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
I0108 21:06:30.705518 284235 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
I0108 21:06:30.707314 284235 kubeadm.go:317]
I0108 21:06:30.707397 284235 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
I0108 21:06:30.707404 284235 kubeadm.go:317]
I0108 21:06:30.707483 284235 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
I0108 21:06:30.707490 284235 kubeadm.go:317]
I0108 21:06:30.707517 284235 kubeadm.go:317] mkdir -p $HOME/.kube
I0108 21:06:30.712888 284235 kubeadm.go:317] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0108 21:06:30.712968 284235 kubeadm.go:317] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0108 21:06:30.712975 284235 kubeadm.go:317]
I0108 21:06:30.713031 284235 kubeadm.go:317] Alternatively, if you are the root user, you can run:
I0108 21:06:30.713037 284235 kubeadm.go:317]
I0108 21:06:30.713092 284235 kubeadm.go:317] export KUBECONFIG=/etc/kubernetes/admin.conf
I0108 21:06:30.713099 284235 kubeadm.go:317]
I0108 21:06:30.713154 284235 kubeadm.go:317] You should now deploy a pod network to the cluster.
I0108 21:06:30.713244 284235 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0108 21:06:30.713323 284235 kubeadm.go:317] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0108 21:06:30.713330 284235 kubeadm.go:317]
I0108 21:06:30.713531 284235 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
I0108 21:06:30.713628 284235 kubeadm.go:317] and service account keys on each node and then running the following as root:
I0108 21:06:30.713635 284235 kubeadm.go:317]
I0108 21:06:30.713744 284235 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 56pr3x.r1f115b05u8vd4v4 \
I0108 21:06:30.713873 284235 kubeadm.go:317] --discovery-token-ca-cert-hash sha256:53d3d01a7c329d412bf1c80903ed320c9f463f6772736578c4d8277e35e7ffe8 \
I0108 21:06:30.713901 284235 kubeadm.go:317] --control-plane
I0108 21:06:30.713906 284235 kubeadm.go:317]
I0108 21:06:30.714007 284235 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
I0108 21:06:30.714013 284235 kubeadm.go:317]
I0108 21:06:30.714114 284235 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 56pr3x.r1f115b05u8vd4v4 \
I0108 21:06:30.714235 284235 kubeadm.go:317] --discovery-token-ca-cert-hash sha256:53d3d01a7c329d412bf1c80903ed320c9f463f6772736578c4d8277e35e7ffe8
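
Aside: the --discovery-token-ca-cert-hash printed above is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info. A Go sketch that recomputes it from the ca.crt path used in this run:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // path from this run
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the DER-encoded Subject Public Key Info, as kubeadm does.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
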
I0108 21:06:30.722556 284235 kubeadm.go:317] W0108 21:06:15.669839 1191 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0108 21:06:30.722837 284235 kubeadm.go:317] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
I0108 21:06:30.722967 284235 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0108 21:06:30.723064 284235 cni.go:95] Creating CNI manager for "calico"
I0108 21:06:30.768258 284235 out.go:177] * Configuring Calico (Container Networking Interface) ...
I0108 21:06:30.772815 284235 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
I0108 21:06:30.772899 284235 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202045 bytes)
I0108 21:06:30.805280 284235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0108 21:06:32.484871 284235 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.679551671s)
I0108 21:06:32.484908 284235 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0108 21:06:32.485023 284235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0108 21:06:32.485116 284235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286 minikube.k8s.io/name=calico-210250 minikube.k8s.io/updated_at=2023_01_08T21_06_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0108 21:06:32.589400 284235 ops.go:34] apiserver oom_adj: -16
I0108 21:06:32.589957 284235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0108 21:06:33.194547 284235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0108 21:06:33.694061 284235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0108 21:06:34.194905 284235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0108 21:06:34.694794 284235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0108 21:06:35.194962 284235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0108 21:06:35.694876 284235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0108 21:06:36.194053 284235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0108 21:06:36.694070 284235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0108 21:06:37.194779 284235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0108 21:06:37.693964 284235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0108 21:06:38.194235 284235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0108 21:06:38.694600 284235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0108 21:06:39.194469 284235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0108 21:06:39.694596 284235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0108 21:06:40.194433 284235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0108 21:06:40.694327 284235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0108 21:06:41.194610 284235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0108 21:06:41.694050 284235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0108 21:06:42.194778 284235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0108 21:06:42.694074 284235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0108 21:06:43.194220 284235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0108 21:06:43.694715 284235 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0108 21:06:43.791665 284235 kubeadm.go:1067] duration metric: took 11.306676744s to wait for elevateKubeSystemPrivileges.
I0108 21:06:43.791704 284235 kubeadm.go:398] StartCluster complete in 28.215406357s
I0108 21:06:43.791727 284235 settings.go:142] acquiring lock: {Name:mke11357c99840e02827352421680c460e41a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 21:06:43.791844 284235 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/15565-3617/kubeconfig
I0108 21:06:43.794920 284235 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3617/kubeconfig: {Name:mk9a01b7650dc717a8be53d9d847d90c4eca404d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 21:06:44.322276 284235 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-210250" rescaled to 1
I0108 21:06:44.322345 284235 start.go:212] Will wait 5m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0108 21:06:44.324355 284235 out.go:177] * Verifying Kubernetes components...
I0108 21:06:44.322401 284235 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0108 21:06:44.322422 284235 addons.go:486] enableAddons start: toEnable=map[], additional=[]
I0108 21:06:44.322589 284235 config.go:180] Loaded profile config "calico-210250": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0108 21:06:44.326359 284235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0108 21:06:44.326404 284235 addons.go:65] Setting default-storageclass=true in profile "calico-210250"
I0108 21:06:44.326432 284235 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-210250"
I0108 21:06:44.326404 284235 addons.go:65] Setting storage-provisioner=true in profile "calico-210250"
I0108 21:06:44.326582 284235 addons.go:227] Setting addon storage-provisioner=true in "calico-210250"
W0108 21:06:44.326592 284235 addons.go:236] addon storage-provisioner should already be in state true
I0108 21:06:44.326718 284235 host.go:66] Checking if "calico-210250" exists ...
I0108 21:06:44.326876 284235 cli_runner.go:164] Run: docker container inspect calico-210250 --format={{.State.Status}}
I0108 21:06:44.327147 284235 cli_runner.go:164] Run: docker container inspect calico-210250 --format={{.State.Status}}
I0108 21:06:44.343902 284235 node_ready.go:35] waiting up to 5m0s for node "calico-210250" to be "Ready" ...
I0108 21:06:44.347656 284235 node_ready.go:49] node "calico-210250" has status "Ready":"True"
I0108 21:06:44.347681 284235 node_ready.go:38] duration metric: took 3.752369ms waiting for node "calico-210250" to be "Ready" ...
I0108 21:06:44.347692 284235 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0108 21:06:44.363617 284235 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-7df895d496-vj2pb" in "kube-system" namespace to be "Ready" ...
I0108 21:06:44.367875 284235 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0108 21:06:44.370164 284235 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0108 21:06:44.370188 284235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0108 21:06:44.370255 284235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-210250
I0108 21:06:44.368588 284235 addons.go:227] Setting addon default-storageclass=true in "calico-210250"
W0108 21:06:44.370401 284235 addons.go:236] addon default-storageclass should already be in state true
I0108 21:06:44.370434 284235 host.go:66] Checking if "calico-210250" exists ...
I0108 21:06:44.370933 284235 cli_runner.go:164] Run: docker container inspect calico-210250 --format={{.State.Status}}
I0108 21:06:44.409319 284235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/calico-210250/id_rsa Username:docker}
I0108 21:06:44.410745 284235 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I0108 21:06:44.410771 284235 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0108 21:06:44.410824 284235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-210250
I0108 21:06:44.424643 284235 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.76.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0108 21:06:44.443347 284235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/calico-210250/id_rsa Username:docker}
I0108 21:06:44.585328 284235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0108 21:06:44.693468 284235 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0108 21:06:46.381287 284235 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vj2pb" in "kube-system" namespace has status "Ready":"False"
I0108 21:06:47.366676 284235 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.76.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.941960617s)
I0108 21:06:47.366707 284235 start.go:826] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
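
Aside: the sed pipeline that just completed splices a hosts block ahead of the Corefile's forward plugin so pods can resolve host.minikube.internal. The same transformation as a Go sketch; the sample Corefile here is abbreviated:

package main

import (
	"fmt"
	"strings"
)

// Insert a hosts stanza in front of "forward . /etc/resolv.conf".
func injectHostRecord(corefile, hostIP string) string {
	block := fmt.Sprintf("    hosts {\n       %s host.minikube.internal\n       fallthrough\n    }\n", hostIP)
	marker := "    forward . /etc/resolv.conf"
	return strings.Replace(corefile, marker, block+marker, 1)
}

func main() {
	corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf\n    cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.76.1"))
}
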
I0108 21:06:47.504519 284235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.81101755s)
I0108 21:06:47.504566 284235 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.919212585s)
I0108 21:06:47.507001 284235 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
I0108 21:06:47.509819 284235 addons.go:488] enableAddons completed in 3.187380749s
I0108 21:06:48.877188 284235 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vj2pb" in "kube-system" namespace has status "Ready":"False"
[... identical readiness checks repeated every ~2.5s from 21:06:51 through 21:10:40, all reporting "Ready":"False"; ~100 entries elided ...]
I0108 21:10:43.374930 284235 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vj2pb" in "kube-system" namespace has status "Ready":"False"
I0108 21:10:44.378821 284235 pod_ready.go:81] duration metric: took 4m0.015164581s waiting for pod "calico-kube-controllers-7df895d496-vj2pb" in "kube-system" namespace to be "Ready" ...
E0108 21:10:44.378846 284235 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
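
Aside: the 4m0s wait that just timed out is the usual poll-until-deadline shape, and the calico-node wait below follows the same pattern. A condensed Go sketch of it; the check interval mirrors the log, the condition is a stand-in that never becomes true, and the timeout is shortened for the demo:

package main

import (
	"errors"
	"fmt"
	"time"
)

// Re-check cond every interval until it reports true or the deadline passes.
func waitFor(interval, timeout time.Duration, cond func() bool) error {
	deadline := time.Now().Add(timeout)
	for {
		if cond() {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	// The run above used ~2.5s checks against a 5m budget; 5s here for brevity.
	err := waitFor(2500*time.Millisecond, 5*time.Second, func() bool { return false })
	fmt.Println(err) // timed out waiting for the condition
}
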
I0108 21:10:44.378854 284235 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-n4d5j" in "kube-system" namespace to be "Ready" ...
I0108 21:10:46.392165 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
[... identical readiness checks repeated every ~2.5s from 21:10:48 through 21:12:53, all reporting "Ready":"False"; 56 entries elided ...]
I0108 21:12:55.391632 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:12:57.392305 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:12:59.891719 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:13:01.892327 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:13:04.392259 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:13:06.890554 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:13:08.891098 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:13:10.892839 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:13:13.391251 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:13:15.393739 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:13:17.892176 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:13:20.390488 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:13:22.391583 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:13:24.391956 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:13:26.392449 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:13:28.894730 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:13:31.391545 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:13:33.391674 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:13:35.392391 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:13:37.892193 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:13:40.391173 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:13:42.391999 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:13:44.892172 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:13:47.391329 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:13:49.391416 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:13:51.393029 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:13:53.891297 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:13:55.891816 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:13:58.391860 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:14:00.891715 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:14:03.392513 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:14:05.393383 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:14:07.889920 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:14:09.890461 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:14:11.891967 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:14:14.390718 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:14:16.390997 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:14:18.392071 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:14:20.392142 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:14:22.892071 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:14:25.391216 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:14:27.892445 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:14:30.392255 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:14:32.392437 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:14:34.891246 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:14:36.892819 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:14:39.391374 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:14:41.391514 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:14:43.391555 284235 pod_ready.go:102] pod "calico-node-n4d5j" in "kube-system" namespace has status "Ready":"False"
I0108 21:14:44.397906 284235 pod_ready.go:81] duration metric: took 4m0.019040692s waiting for pod "calico-node-n4d5j" in "kube-system" namespace to be "Ready" ...
E0108 21:14:44.397933 284235 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
I0108 21:14:44.397951 284235 pod_ready.go:38] duration metric: took 8m0.050245128s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0108 21:14:44.400608 284235 out.go:177]
W0108 21:14:44.402964 284235 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
W0108 21:14:44.402986 284235 out.go:239] *
W0108 21:14:44.403966 284235 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0108 21:14:44.408288 284235 out.go:177]
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (519.56s)
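
For reference, the failure mode captured above is minikube's "extra wait" loop (pod_ready.go) polling the calico-node pod's Ready condition roughly every 2.5s, exhausting the 4m it spent on this one pod within the 8m of extra waiting overall, and surfacing as exit status 80 (GUEST_START). Below is a minimal client-go sketch of an equivalent readiness poll; it is an illustration only, not minikube's actual implementation, and the pod name, namespace, poll interval, and kubeconfig path are assumptions lifted from this particular run.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config); the
	// integration test above points KUBECONFIG at a per-run location instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll the pod's Ready condition every 2.5s for up to 4m, mirroring the
	// cadence and budget visible in the log. The pod name is specific to this run.
	err = wait.PollImmediate(2500*time.Millisecond, 4*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "calico-node-n4d5j", metav1.GetOptions{})
		if err != nil {
			return false, nil // treat lookup errors as "not ready yet" and keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Printf("pod %q Ready=%s\n", pod.Name, c.Status)
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	if err != nil {
		fmt.Println("timed out waiting for Ready:", err)
	}
}

In this run the condition never became true before the deadline, so the next step is the one the tool itself suggests: `minikube logs --file=logs.txt`, plus describing the calico-node pod to see whether an init container (e.g. install-cni) or the calico-node container's readiness probe is what is stalling.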