=== RUN TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run: out/minikube-linux-amd64 start -p calico-104157 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker --container-runtime=docker
E0114 10:46:35.321074 11171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/skaffold-103924/client.crt: no such file or directory
=== CONT TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-104157 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker --container-runtime=docker: exit status 80 (8m39.182966149s)
-- stdout --
* [calico-104157] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=15642
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/15642-4687/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-4687/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting control plane node calico-104157 in cluster calico-104157
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2048MB) ...
* Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring Calico (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
-- /stdout --
** stderr **
I0114 10:46:21.571991 292144 out.go:296] Setting OutFile to fd 1 ...
I0114 10:46:21.572209 292144 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0114 10:46:21.572215 292144 out.go:309] Setting ErrFile to fd 2...
I0114 10:46:21.572222 292144 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0114 10:46:21.572364 292144 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-4687/.minikube/bin
I0114 10:46:21.573128 292144 out.go:303] Setting JSON to false
I0114 10:46:21.575262 292144 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8931,"bootTime":1673684251,"procs":991,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0114 10:46:21.575345 292144 start.go:135] virtualization: kvm guest
I0114 10:46:21.578267 292144 out.go:177] * [calico-104157] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
I0114 10:46:21.579906 292144 out.go:177] - MINIKUBE_LOCATION=15642
I0114 10:46:21.581664 292144 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0114 10:46:21.580906 292144 notify.go:220] Checking for updates...
I0114 10:46:21.587018 292144 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15642-4687/kubeconfig
I0114 10:46:21.589065 292144 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-4687/.minikube
I0114 10:46:21.590761 292144 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0114 10:46:21.592772 292144 config.go:180] Loaded profile config "cilium-104157": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0114 10:46:21.592946 292144 config.go:180] Loaded profile config "kindnet-104157": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0114 10:46:21.593063 292144 config.go:180] Loaded profile config "kubernetes-upgrade-104134": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0114 10:46:21.593131 292144 driver.go:365] Setting default libvirt URI to qemu:///system
I0114 10:46:21.628882 292144 docker.go:138] docker version: linux-20.10.22
I0114 10:46:21.628983 292144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0114 10:46:21.743003 292144 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2023-01-14 10:46:21.650511916 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660674048 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0114 10:46:21.743135 292144 docker.go:255] overlay module found
I0114 10:46:21.745437 292144 out.go:177] * Using the docker driver based on user configuration
I0114 10:46:21.746771 292144 start.go:294] selected driver: docker
I0114 10:46:21.746784 292144 start.go:838] validating driver "docker" against <nil>
I0114 10:46:21.746816 292144 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0114 10:46:21.747962 292144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0114 10:46:21.860096 292144 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2023-01-14 10:46:21.771447902 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660674048 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0114 10:46:21.860227 292144 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0114 10:46:21.860382 292144 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0114 10:46:21.862599 292144 out.go:177] * Using Docker driver with root privileges
I0114 10:46:21.863906 292144 cni.go:95] Creating CNI manager for "calico"
I0114 10:46:21.863937 292144 start_flags.go:314] Found "Calico" CNI - setting NetworkPlugin=cni
I0114 10:46:21.863953 292144 start_flags.go:319] config:
{Name:calico-104157 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-104157 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0114 10:46:21.865782 292144 out.go:177] * Starting control plane node calico-104157 in cluster calico-104157
I0114 10:46:21.867311 292144 cache.go:120] Beginning downloading kic base image for docker with docker
I0114 10:46:21.868891 292144 out.go:177] * Pulling base image ...
I0114 10:46:21.870282 292144 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I0114 10:46:21.870310 292144 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
I0114 10:46:21.870329 292144 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15642-4687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
I0114 10:46:21.870341 292144 cache.go:57] Caching tarball of preloaded images
I0114 10:46:21.870580 292144 preload.go:174] Found /home/jenkins/minikube-integration/15642-4687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0114 10:46:21.870603 292144 cache.go:60] Finished verifying existence of preloaded tar for v1.25.3 on docker
I0114 10:46:21.870728 292144 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/config.json ...
I0114 10:46:21.870758 292144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/config.json: {Name:mk2ad2e170f5676cdd1052408d543c432365fb8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 10:46:21.896484 292144 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
I0114 10:46:21.896517 292144 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
I0114 10:46:21.896536 292144 cache.go:193] Successfully downloaded all kic artifacts
I0114 10:46:21.896577 292144 start.go:364] acquiring machines lock for calico-104157: {Name:mkba88d91fad21f2b52977ad4742049769de866b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0114 10:46:21.896708 292144 start.go:368] acquired machines lock for "calico-104157" in 109.81µs
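The two lock lines above show minikube's per-machine locking: one named lock per machine, acquired here in 109.81µs, with the retry cadence visible in the dumped spec (Delay:500ms Timeout:10m0s). A minimal Go sketch of that poll-until-acquired shape, assuming a plain exclusive lock file rather than whatever lock library minikube actually uses:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire polls for an exclusive lock file, mirroring the Delay/Timeout
// parameters in the log (Delay:500ms Timeout:10m0s).
func acquire(path string, delay, timeout time.Duration) (*os.File, error) {
	deadline := time.Now().Add(timeout)
	for {
		// O_EXCL makes creation fail while another process holds the file.
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			return f, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out acquiring " + path)
		}
		time.Sleep(delay)
	}
}

func main() {
	f, err := acquire("/tmp/calico-104157.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer os.Remove(f.Name()) // release
	fmt.Println("acquired", f.Name())
}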
I0114 10:46:21.896735 292144 start.go:93] Provisioning new machine with config: &{Name:calico-104157 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-104157 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0114 10:46:21.896881 292144 start.go:125] createHost starting for "" (driver="docker")
I0114 10:46:21.900161 292144 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
I0114 10:46:21.900396 292144 start.go:159] libmachine.API.Create for "calico-104157" (driver="docker")
I0114 10:46:21.900428 292144 client.go:168] LocalClient.Create starting
I0114 10:46:21.900496 292144 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15642-4687/.minikube/certs/ca.pem
I0114 10:46:21.900535 292144 main.go:134] libmachine: Decoding PEM data...
I0114 10:46:21.900559 292144 main.go:134] libmachine: Parsing certificate...
I0114 10:46:21.900633 292144 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15642-4687/.minikube/certs/cert.pem
I0114 10:46:21.900658 292144 main.go:134] libmachine: Decoding PEM data...
I0114 10:46:21.900673 292144 main.go:134] libmachine: Parsing certificate...
I0114 10:46:21.901071 292144 cli_runner.go:164] Run: docker network inspect calico-104157 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0114 10:46:21.925030 292144 cli_runner.go:211] docker network inspect calico-104157 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0114 10:46:21.925110 292144 network_create.go:280] running [docker network inspect calico-104157] to gather additional debugging logs...
I0114 10:46:21.925129 292144 cli_runner.go:164] Run: docker network inspect calico-104157
W0114 10:46:21.947907 292144 cli_runner.go:211] docker network inspect calico-104157 returned with exit code 1
I0114 10:46:21.947936 292144 network_create.go:283] error running [docker network inspect calico-104157]: docker network inspect calico-104157: exit status 1
stdout:
[]
stderr:
Error: No such network: calico-104157
I0114 10:46:21.947949 292144 network_create.go:285] output of [docker network inspect calico-104157]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: calico-104157
** /stderr **
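The exit-1 path above is expected on a first start: minikube probes for a Docker network named after the profile, and as its own debug rerun shows, "Error: No such network: calico-104157" is simply the cue to create one. A minimal sketch of that probe, assuming only the docker CLI (this is not minikube's network_create.go):

package main

import (
	"fmt"
	"os/exec"
)

// networkExists treats any non-zero exit from `docker network inspect`
// (e.g. "Error: No such network: calico-104157") as "absent".
func networkExists(name string) bool {
	return exec.Command("docker", "network", "inspect", name).Run() == nil
}

func main() {
	fmt.Println(networkExists("calico-104157")) // false before the create step below
}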
I0114 10:46:21.947986 292144 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0114 10:46:21.973191 292144 network.go:215] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-333ce45b82a7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:21:7e:af:e2}}
I0114 10:46:21.974632 292144 network.go:215] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-198f7a014aba IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:20:19:9f:ce}}
I0114 10:46:21.975263 292144 network.go:215] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0b385381ca18 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:7d:2a:8e:d2}}
I0114 10:46:21.976020 292144 network.go:215] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-802e4270146e IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:c1:ab:97:12}}
I0114 10:46:21.976737 292144 network.go:215] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-df9cf2e17378 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:d4:01:52:3d}}
I0114 10:46:21.977538 292144 network.go:277] reserving subnet 192.168.94.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.94.0:0xc000014060] misses:0}
I0114 10:46:21.977580 292144 network.go:210] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0114 10:46:21.977591 292144 network_create.go:123] attempt to create docker network calico-104157 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
I0114 10:46:21.977638 292144 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-104157 calico-104157
I0114 10:46:22.043457 292144 network_create.go:107] docker network calico-104157 192.168.94.0/24 created
I0114 10:46:22.043500 292144 kic.go:117] calculated static IP "192.168.94.2" for the "calico-104157" container
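The subnet walk above starts at 192.168.49.0/24 and advances the third octet in steps of 9 (49, 58, 67, 76, 85, 94) until it finds a /24 no existing bridge occupies; the gateway takes .1 and the single node gets .2. A toy reproduction of that selection, with the taken set hard-coded from this log rather than discovered from the host:

package main

import "fmt"

func main() {
	// Third octets of the bridges the log reports as taken.
	taken := map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true}
	for octet := 49; octet <= 255; octet += 9 {
		if taken[octet] {
			fmt.Printf("skipping 192.168.%d.0/24 (taken)\n", octet)
			continue
		}
		fmt.Printf("using 192.168.%d.0/24: gateway 192.168.%d.1, node IP 192.168.%d.2\n",
			octet, octet, octet)
		break
	}
}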
I0114 10:46:22.043579 292144 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0114 10:46:22.079594 292144 cli_runner.go:164] Run: docker volume create calico-104157 --label name.minikube.sigs.k8s.io=calico-104157 --label created_by.minikube.sigs.k8s.io=true
I0114 10:46:22.103291 292144 oci.go:103] Successfully created a docker volume calico-104157
I0114 10:46:22.103373 292144 cli_runner.go:164] Run: docker run --rm --name calico-104157-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-104157 --entrypoint /usr/bin/test -v calico-104157:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -d /var/lib
I0114 10:46:22.719644 292144 oci.go:107] Successfully prepared a docker volume calico-104157
I0114 10:46:22.719690 292144 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I0114 10:46:22.719715 292144 kic.go:190] Starting extracting preloaded images to volume ...
I0114 10:46:22.719782 292144 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15642-4687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-104157:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir
I0114 10:46:27.735610 292144 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15642-4687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-104157:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir: (5.015734529s)
I0114 10:46:27.735646 292144 kic.go:199] duration metric: took 5.015928 seconds to extract preloaded images to volume
W0114 10:46:27.735814 292144 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0114 10:46:27.735935 292144 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0114 10:46:27.872944 292144 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-104157 --name calico-104157 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-104157 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-104157 --network calico-104157 --ip 192.168.94.2 --volume calico-104157:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c
I0114 10:46:28.439094 292144 cli_runner.go:164] Run: docker container inspect calico-104157 --format={{.State.Running}}
I0114 10:46:28.472395 292144 cli_runner.go:164] Run: docker container inspect calico-104157 --format={{.State.Status}}
I0114 10:46:28.522222 292144 cli_runner.go:164] Run: docker exec calico-104157 stat /var/lib/dpkg/alternatives/iptables
I0114 10:46:28.615060 292144 oci.go:144] the created container "calico-104157" has a running status.
I0114 10:46:28.615093 292144 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15642-4687/.minikube/machines/calico-104157/id_rsa...
I0114 10:46:28.911365 292144 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15642-4687/.minikube/machines/calico-104157/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0114 10:46:29.023374 292144 cli_runner.go:164] Run: docker container inspect calico-104157 --format={{.State.Status}}
I0114 10:46:29.050825 292144 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0114 10:46:29.050850 292144 kic_runner.go:114] Args: [docker exec --privileged calico-104157 chown docker:docker /home/docker/.ssh/authorized_keys]
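The steps between 10:46:28.6 and 10:46:29.1 generate a fresh RSA key for the node, copy the public half into the container as /home/docker/.ssh/authorized_keys (381 bytes), and chown it to the docker user. A hedged sketch of producing that authorized_keys payload with golang.org/x/crypto/ssh; this is illustrative, not a claim about minikube's exact key-generation code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"fmt"

	"golang.org/x/crypto/ssh"
)

func main() {
	// 2048-bit RSA keypair; the private half is what would land in
	// .minikube/machines/<profile>/id_rsa.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	// One "ssh-rsa AAAA..." line: the authorized_keys content.
	fmt.Printf("%s", ssh.MarshalAuthorizedKey(pub))
}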
I0114 10:46:29.183228 292144 cli_runner.go:164] Run: docker container inspect calico-104157 --format={{.State.Status}}
I0114 10:46:29.209410 292144 machine.go:88] provisioning docker machine ...
I0114 10:46:29.209447 292144 ubuntu.go:169] provisioning hostname "calico-104157"
I0114 10:46:29.209530 292144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-104157
I0114 10:46:29.233406 292144 main.go:134] libmachine: Using SSH client type: native
I0114 10:46:29.233618 292144 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil> [] 0s} 127.0.0.1 33010 <nil> <nil>}
I0114 10:46:29.233639 292144 main.go:134] libmachine: About to run SSH command:
sudo hostname calico-104157 && echo "calico-104157" | sudo tee /etc/hostname
I0114 10:46:29.360771 292144 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-104157
I0114 10:46:29.360885 292144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-104157
I0114 10:46:29.391850 292144 main.go:134] libmachine: Using SSH client type: native
I0114 10:46:29.392014 292144 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil> [] 0s} 127.0.0.1 33010 <nil> <nil>}
I0114 10:46:29.392036 292144 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\scalico-104157' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-104157/g' /etc/hosts;
else
echo '127.0.1.1 calico-104157' | sudo tee -a /etc/hosts;
fi
fi
I0114 10:46:29.516627 292144 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0114 10:46:29.516662 292144 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15642-4687/.minikube CaCertPath:/home/jenkins/minikube-integration/15642-4687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15642-4687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15642-4687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15642-4687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15642-4687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15642-4687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15642-4687/.minikube}
I0114 10:46:29.516700 292144 ubuntu.go:177] setting up certificates
I0114 10:46:29.516710 292144 provision.go:83] configureAuth start
I0114 10:46:29.516771 292144 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-104157
I0114 10:46:29.539230 292144 provision.go:138] copyHostCerts
I0114 10:46:29.539293 292144 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-4687/.minikube/ca.pem, removing ...
I0114 10:46:29.539305 292144 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-4687/.minikube/ca.pem
I0114 10:46:29.539378 292144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-4687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15642-4687/.minikube/ca.pem (1082 bytes)
I0114 10:46:29.539476 292144 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-4687/.minikube/cert.pem, removing ...
I0114 10:46:29.539486 292144 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-4687/.minikube/cert.pem
I0114 10:46:29.539523 292144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-4687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15642-4687/.minikube/cert.pem (1123 bytes)
I0114 10:46:29.539611 292144 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-4687/.minikube/key.pem, removing ...
I0114 10:46:29.539623 292144 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-4687/.minikube/key.pem
I0114 10:46:29.539657 292144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-4687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15642-4687/.minikube/key.pem (1675 bytes)
I0114 10:46:29.539719 292144 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15642-4687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15642-4687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15642-4687/.minikube/certs/ca-key.pem org=jenkins.calico-104157 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube calico-104157]
I0114 10:46:29.719120 292144 provision.go:172] copyRemoteCerts
I0114 10:46:29.719191 292144 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0114 10:46:29.719246 292144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-104157
I0114 10:46:29.742573 292144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33010 SSHKeyPath:/home/jenkins/minikube-integration/15642-4687/.minikube/machines/calico-104157/id_rsa Username:docker}
I0114 10:46:29.832014 292144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-4687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0114 10:46:29.849343 292144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-4687/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
I0114 10:46:29.866161 292144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-4687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0114 10:46:29.886759 292144 provision.go:86] duration metric: configureAuth took 370.032305ms
I0114 10:46:29.886789 292144 ubuntu.go:193] setting minikube options for container-runtime
I0114 10:46:29.886999 292144 config.go:180] Loaded profile config "calico-104157": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0114 10:46:29.887046 292144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-104157
I0114 10:46:29.920953 292144 main.go:134] libmachine: Using SSH client type: native
I0114 10:46:29.921101 292144 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil> [] 0s} 127.0.0.1 33010 <nil> <nil>}
I0114 10:46:29.921116 292144 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0114 10:46:30.040658 292144 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
I0114 10:46:30.040684 292144 ubuntu.go:71] root file system type: overlay
I0114 10:46:30.040884 292144 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0114 10:46:30.040979 292144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-104157
I0114 10:46:30.064053 292144 main.go:134] libmachine: Using SSH client type: native
I0114 10:46:30.064210 292144 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil> [] 0s} 127.0.0.1 33010 <nil> <nil>}
I0114 10:46:30.064292 292144 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0114 10:46:30.197692 292144 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0114 10:46:30.197782 292144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-104157
I0114 10:46:30.228450 292144 main.go:134] libmachine: Using SSH client type: native
I0114 10:46:30.228611 292144 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil> [] 0s} 127.0.0.1 33010 <nil> <nil>}
I0114 10:46:30.228641 292144 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0114 10:46:30.875150 292144 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2022-10-25 18:00:04.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2023-01-14 10:46:30.189493013 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
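The long unified diff above is the point of the command issued at 10:46:30.228: the new unit is written beside the live one, and only when `diff -u` exits non-zero does the `|| { ... }` branch swap it in and run daemon-reload, enable, and restart, so repeated provisioning is idempotent. A small Go sketch of that write-compare-swap step, under the assumption of plain files rather than the ssh session minikube uses:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// replaceIfChanged installs the staged unit only when its bytes differ from
// the live one, and reports whether a daemon-reload/restart is due.
func replaceIfChanged(live, staged string) (bool, error) {
	old, _ := os.ReadFile(live) // a missing live file reads as empty, i.e. "changed"
	next, err := os.ReadFile(staged)
	if err != nil {
		return false, err
	}
	if bytes.Equal(old, next) {
		return false, nil // nothing to do; skip the restart
	}
	return true, os.Rename(staged, live)
}

func main() {
	changed, err := replaceIfChanged("/tmp/docker.service", "/tmp/docker.service.new")
	fmt.Println(changed, err) // true -> caller runs daemon-reload + restart
}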
I0114 10:46:30.875190 292144 machine.go:91] provisioned docker machine in 1.665756142s
I0114 10:46:30.875204 292144 client.go:171] LocalClient.Create took 8.974768656s
I0114 10:46:30.875218 292144 start.go:167] duration metric: libmachine.API.Create for "calico-104157" took 8.974820113s
I0114 10:46:30.875227 292144 start.go:300] post-start starting for "calico-104157" (driver="docker")
I0114 10:46:30.875235 292144 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0114 10:46:30.875299 292144 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0114 10:46:30.875353 292144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-104157
I0114 10:46:30.907179 292144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33010 SSHKeyPath:/home/jenkins/minikube-integration/15642-4687/.minikube/machines/calico-104157/id_rsa Username:docker}
I0114 10:46:30.996369 292144 ssh_runner.go:195] Run: cat /etc/os-release
I0114 10:46:30.999297 292144 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0114 10:46:30.999319 292144 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0114 10:46:30.999327 292144 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0114 10:46:30.999333 292144 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0114 10:46:30.999342 292144 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-4687/.minikube/addons for local assets ...
I0114 10:46:30.999394 292144 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-4687/.minikube/files for local assets ...
I0114 10:46:30.999463 292144 filesync.go:149] local asset: /home/jenkins/minikube-integration/15642-4687/.minikube/files/etc/ssl/certs/111712.pem -> 111712.pem in /etc/ssl/certs
I0114 10:46:30.999555 292144 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0114 10:46:31.006375 292144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-4687/.minikube/files/etc/ssl/certs/111712.pem --> /etc/ssl/certs/111712.pem (1708 bytes)
I0114 10:46:31.023946 292144 start.go:303] post-start completed in 148.706629ms
I0114 10:46:31.024254 292144 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-104157
I0114 10:46:31.050063 292144 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/config.json ...
I0114 10:46:31.050330 292144 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0114 10:46:31.050380 292144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-104157
I0114 10:46:31.072175 292144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33010 SSHKeyPath:/home/jenkins/minikube-integration/15642-4687/.minikube/machines/calico-104157/id_rsa Username:docker}
I0114 10:46:31.153131 292144 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0114 10:46:31.156882 292144 start.go:128] duration metric: createHost completed in 9.259990103s
I0114 10:46:31.156902 292144 start.go:83] releasing machines lock for "calico-104157", held for 9.260181683s
I0114 10:46:31.156992 292144 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-104157
I0114 10:46:31.180668 292144 ssh_runner.go:195] Run: cat /version.json
I0114 10:46:31.180719 292144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-104157
I0114 10:46:31.180816 292144 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0114 10:46:31.180894 292144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-104157
I0114 10:46:31.206873 292144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33010 SSHKeyPath:/home/jenkins/minikube-integration/15642-4687/.minikube/machines/calico-104157/id_rsa Username:docker}
I0114 10:46:31.207333 292144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33010 SSHKeyPath:/home/jenkins/minikube-integration/15642-4687/.minikube/machines/calico-104157/id_rsa Username:docker}
I0114 10:46:31.318332 292144 ssh_runner.go:195] Run: systemctl --version
I0114 10:46:31.322448 292144 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0114 10:46:31.329648 292144 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
I0114 10:46:31.342798 292144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0114 10:46:31.424608 292144 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I0114 10:46:31.519236 292144 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0114 10:46:31.529360 292144 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0114 10:46:31.529416 292144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0114 10:46:31.538393 292144 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0114 10:46:31.550628 292144 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0114 10:46:31.635886 292144 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0114 10:46:31.730315 292144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0114 10:46:31.821673 292144 ssh_runner.go:195] Run: sudo systemctl restart docker
I0114 10:46:32.045028 292144 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0114 10:46:32.136269 292144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0114 10:46:32.216507 292144 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I0114 10:46:32.227120 292144 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0114 10:46:32.227189 292144 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0114 10:46:32.230357 292144 start.go:472] Will wait 60s for crictl version
I0114 10:46:32.230406 292144 ssh_runner.go:195] Run: which crictl
I0114 10:46:32.233381 292144 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0114 10:46:32.265921 292144 start.go:488] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.21
RuntimeApiVersion: 1.41.0
I0114 10:46:32.265972 292144 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0114 10:46:32.296155 292144 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0114 10:46:32.325460 292144 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
I0114 10:46:32.325568 292144 cli_runner.go:164] Run: docker network inspect calico-104157 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0114 10:46:32.348318 292144 ssh_runner.go:195] Run: grep 192.168.94.1 host.minikube.internal$ /etc/hosts
I0114 10:46:32.351378 292144 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
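The one-liner above keeps /etc/hosts convergent: strip any line already tab-terminated with host.minikube.internal, append a fresh mapping to 192.168.94.1 (the gateway of the network created earlier), and copy the temp file back with sudo. An illustrative Go equivalent of that strip-and-append, not minikube's code:

package main

import (
	"fmt"
	"strings"
)

// upsertHost drops any existing line for the managed name, then appends the
// fresh mapping -- the grep -v / echo pipeline from the log, in miniature.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry for the managed hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	fmt.Print(upsertHost("127.0.0.1\tlocalhost\n", "192.168.94.1", "host.minikube.internal"))
}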
I0114 10:46:32.360859 292144 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I0114 10:46:32.360919 292144 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0114 10:46:32.384760 292144 docker.go:613] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0114 10:46:32.384878 292144 docker.go:543] Images already preloaded, skipping extraction
I0114 10:46:32.384941 292144 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0114 10:46:32.407037 292144 docker.go:613] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0114 10:46:32.407065 292144 cache_images.go:84] Images are preloaded, skipping loading
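Both `docker images` listings match the preload manifest for v1.25.3, so minikube skips loading images a second time. The decision reduces to a set difference between required and present image:tag pairs, sketched here as assumed logic (not cache_images.go itself):

package main

import "fmt"

// missing returns every required image the daemon does not already have.
func missing(required, present []string) []string {
	have := map[string]bool{}
	for _, img := range present {
		have[img] = true
	}
	var out []string
	for _, img := range required {
		if !have[img] {
			out = append(out, img)
		}
	}
	return out
}

func main() {
	required := []string{"registry.k8s.io/kube-apiserver:v1.25.3", "registry.k8s.io/etcd:3.5.4-0"}
	present := []string{"registry.k8s.io/kube-apiserver:v1.25.3", "registry.k8s.io/etcd:3.5.4-0"}
	fmt.Println(missing(required, present)) // [] -> loading can be skipped
}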
I0114 10:46:32.407106 292144 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0114 10:46:32.486223 292144 cni.go:95] Creating CNI manager for "calico"
I0114 10:46:32.486253 292144 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0114 10:46:32.486270 292144 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-104157 NodeName:calico-104157 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
I0114 10:46:32.486426 292144 kubeadm.go:163] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.94.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "calico-104157"
kubeletExtraArgs:
node-ip: 192.168.94.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.25.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
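The rendered config above (written later in this log as /var/tmp/minikube/kubeadm.yaml.new, 2036 bytes) stacks four YAML documents in one stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, with kubeadm and the kubelet each consuming the kinds addressed to them. A stdlib-only sketch of pulling the kinds out of such a stream, abbreviated to the document headers:

package main

import (
	"fmt"
	"strings"
)

func main() {
	config := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\n" +
		"apiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n---\n" +
		"apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n---\n" +
		"apiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
	// Split on YAML document separators and report each document's kind.
	for _, doc := range strings.Split(config, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Println(strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}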
I0114 10:46:32.486525 292144 kubeadm.go:962] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=calico-104157 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.25.3 ClusterName:calico-104157 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
I0114 10:46:32.486579 292144 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
I0114 10:46:32.494504 292144 binaries.go:44] Found k8s binaries, skipping transfer
I0114 10:46:32.494576 292144 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0114 10:46:32.502005 292144 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (475 bytes)
I0114 10:46:32.514990 292144 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0114 10:46:32.528745 292144 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2036 bytes)
I0114 10:46:32.542077 292144 ssh_runner.go:195] Run: grep 192.168.94.2 control-plane.minikube.internal$ /etc/hosts
I0114 10:46:32.545067 292144 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
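The one-liner above rewrites /etc/hosts non-destructively: filter out any stale control-plane.minikube.internal entry, append the fresh mapping, and copy the result back over the original. The same logic in Go, as a rough sketch (hypothetical helper needing root; minikube runs the bash shown over SSH and uses sudo cp rather than a rename so it works even when /tmp and /etc are on different filesystems):

// injecthosts.go: sketch of the /etc/hosts rewrite performed above.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const entry = "192.168.94.2\t" + host

	raw, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
		// Mirrors grep -v $'\tcontrol-plane.minikube.internal$'.
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	// Temp file in /etc so the final rename stays on one filesystem.
	tmp, err := os.CreateTemp("/etc", "hosts-")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(tmp.Name())
	if _, err := tmp.WriteString(strings.Join(kept, "\n") + "\n"); err != nil {
		log.Fatal(err)
	}
	if err := tmp.Close(); err != nil {
		log.Fatal(err)
	}
	if err := os.Rename(tmp.Name(), "/etc/hosts"); err != nil {
		log.Fatal(err)
	}
}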
I0114 10:46:32.554582 292144 certs.go:54] Setting up /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157 for IP: 192.168.94.2
I0114 10:46:32.554686 292144 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15642-4687/.minikube/ca.key
I0114 10:46:32.554731 292144 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15642-4687/.minikube/proxy-client-ca.key
I0114 10:46:32.554792 292144 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/client.key
I0114 10:46:32.554810 292144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/client.crt with IP's: []
I0114 10:46:32.751174 292144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/client.crt ...
I0114 10:46:32.751212 292144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/client.crt: {Name:mkd2963ca533c6374944bc934f26996b11e58018 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 10:46:32.751451 292144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/client.key ...
I0114 10:46:32.751474 292144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/client.key: {Name:mk2747fb8fe0fc45e45f60d64d6dcfce9ba20a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 10:46:32.751632 292144 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/apiserver.key.ad8e880a
I0114 10:46:32.751653 292144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/apiserver.crt.ad8e880a with IP's: [192.168.94.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0114 10:46:32.930817 292144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/apiserver.crt.ad8e880a ...
I0114 10:46:32.930846 292144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/apiserver.crt.ad8e880a: {Name:mk507074ec9ffa385b156bb33968b5eec7ace6e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 10:46:32.931060 292144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/apiserver.key.ad8e880a ...
I0114 10:46:32.931079 292144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/apiserver.key.ad8e880a: {Name:mk139289987fb20f84809a6d2fa0d555717439aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 10:46:32.931243 292144 certs.go:320] copying /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/apiserver.crt.ad8e880a -> /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/apiserver.crt
I0114 10:46:32.931341 292144 certs.go:324] copying /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/apiserver.key.ad8e880a -> /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/apiserver.key
I0114 10:46:32.931404 292144 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/proxy-client.key
I0114 10:46:32.931428 292144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/proxy-client.crt with IP's: []
I0114 10:46:33.212002 292144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/proxy-client.crt ...
I0114 10:46:33.212028 292144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/proxy-client.crt: {Name:mk8db29080f21c2aa1323e9322d1d921cca9ac6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 10:46:33.212240 292144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/proxy-client.key ...
I0114 10:46:33.212255 292144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/proxy-client.key: {Name:mkfa55813d75c278d8400f40a6d89c2c84c49435 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
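Each certificate generated above carries the IP SANs listed in the log: the node IP, the first service-cluster IP, loopback, and 10.0.0.1. A self-contained crypto/x509 sketch of issuing such a certificate is below; it self-signs for brevity, whereas minikube signs these with the profile CA key.

// selfsigned.go: illustrative stdlib-only sketch of a cert with IP SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SANs from the log line above.
		IPAddresses: []net.IP{
			net.ParseIP("192.168.94.2"),
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}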
I0114 10:46:33.212449 292144 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-4687/.minikube/certs/home/jenkins/minikube-integration/15642-4687/.minikube/certs/11171.pem (1338 bytes)
W0114 10:46:33.212490 292144 certs.go:384] ignoring /home/jenkins/minikube-integration/15642-4687/.minikube/certs/home/jenkins/minikube-integration/15642-4687/.minikube/certs/11171_empty.pem, impossibly tiny 0 bytes
I0114 10:46:33.212501 292144 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-4687/.minikube/certs/home/jenkins/minikube-integration/15642-4687/.minikube/certs/ca-key.pem (1679 bytes)
I0114 10:46:33.212523 292144 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-4687/.minikube/certs/home/jenkins/minikube-integration/15642-4687/.minikube/certs/ca.pem (1082 bytes)
I0114 10:46:33.212546 292144 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-4687/.minikube/certs/home/jenkins/minikube-integration/15642-4687/.minikube/certs/cert.pem (1123 bytes)
I0114 10:46:33.212566 292144 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-4687/.minikube/certs/home/jenkins/minikube-integration/15642-4687/.minikube/certs/key.pem (1675 bytes)
I0114 10:46:33.212612 292144 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-4687/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15642-4687/.minikube/files/etc/ssl/certs/111712.pem (1708 bytes)
I0114 10:46:33.213222 292144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0114 10:46:33.231228 292144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0114 10:46:33.248486 292144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0114 10:46:33.265813 292144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-4687/.minikube/profiles/calico-104157/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0114 10:46:33.283027 292144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-4687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0114 10:46:33.299939 292144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-4687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0114 10:46:33.316763 292144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-4687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0114 10:46:33.333726 292144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-4687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0114 10:46:33.350240 292144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-4687/.minikube/certs/11171.pem --> /usr/share/ca-certificates/11171.pem (1338 bytes)
I0114 10:46:33.367643 292144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-4687/.minikube/files/etc/ssl/certs/111712.pem --> /usr/share/ca-certificates/111712.pem (1708 bytes)
I0114 10:46:33.385406 292144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-4687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0114 10:46:33.403032 292144 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0114 10:46:33.415178 292144 ssh_runner.go:195] Run: openssl version
I0114 10:46:33.419811 292144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111712.pem && ln -fs /usr/share/ca-certificates/111712.pem /etc/ssl/certs/111712.pem"
I0114 10:46:33.427192 292144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111712.pem
I0114 10:46:33.430245 292144 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:18 /usr/share/ca-certificates/111712.pem
I0114 10:46:33.430298 292144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111712.pem
I0114 10:46:33.435006 292144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111712.pem /etc/ssl/certs/3ec20f2e.0"
I0114 10:46:33.442194 292144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0114 10:46:33.450428 292144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0114 10:46:33.453392 292144 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:07 /usr/share/ca-certificates/minikubeCA.pem
I0114 10:46:33.453435 292144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0114 10:46:33.458063 292144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0114 10:46:33.465839 292144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11171.pem && ln -fs /usr/share/ca-certificates/11171.pem /etc/ssl/certs/11171.pem"
I0114 10:46:33.476082 292144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11171.pem
I0114 10:46:33.479827 292144 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:18 /usr/share/ca-certificates/11171.pem
I0114 10:46:33.479877 292144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11171.pem
I0114 10:46:33.485218 292144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11171.pem /etc/ssl/certs/51391683.0"
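The ls/openssl/ln sequences above implement the OpenSSL c_rehash convention: each CA PEM under /usr/share/ca-certificates gets a symlink /etc/ssl/certs/<subject-hash>.0 so that OpenSSL-linked clients can locate it by hash. A sketch of the same steps (assumes openssl on PATH and write access to /etc/ssl/certs; not minikube's code, which runs these commands over SSH):

// rehash.go: the subject-hash symlink dance, c_rehash style.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" as in the log
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// ln -fs semantics: drop any stale link, then recreate it.
	os.Remove(link)
	if err := os.Symlink(pemPath, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println(link, "->", pemPath)
}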
I0114 10:46:33.492438 292144 kubeadm.go:396] StartCluster: {Name:calico-104157 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-104157 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0114 10:46:33.492540 292144 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0114 10:46:33.513110 292144 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0114 10:46:33.519978 292144 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0114 10:46:33.526642 292144 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0114 10:46:33.526679 292144 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0114 10:46:33.533193 292144 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0114 10:46:33.533235 292144 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0114 10:46:33.573873 292144 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
I0114 10:46:33.573919 292144 kubeadm.go:317] [preflight] Running pre-flight checks
I0114 10:46:33.608210 292144 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
I0114 10:46:33.608296 292144 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1027-gcp
I0114 10:46:33.608374 292144 kubeadm.go:317] OS: Linux
I0114 10:46:33.608493 292144 kubeadm.go:317] CGROUPS_CPU: enabled
I0114 10:46:33.608563 292144 kubeadm.go:317] CGROUPS_CPUACCT: enabled
I0114 10:46:33.608621 292144 kubeadm.go:317] CGROUPS_CPUSET: enabled
I0114 10:46:33.608679 292144 kubeadm.go:317] CGROUPS_DEVICES: enabled
I0114 10:46:33.608736 292144 kubeadm.go:317] CGROUPS_FREEZER: enabled
I0114 10:46:33.608812 292144 kubeadm.go:317] CGROUPS_MEMORY: enabled
I0114 10:46:33.608872 292144 kubeadm.go:317] CGROUPS_PIDS: enabled
I0114 10:46:33.608947 292144 kubeadm.go:317] CGROUPS_HUGETLB: enabled
I0114 10:46:33.609008 292144 kubeadm.go:317] CGROUPS_BLKIO: enabled
I0114 10:46:33.674836 292144 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
I0114 10:46:33.674990 292144 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0114 10:46:33.675159 292144 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0114 10:46:33.839604 292144 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0114 10:46:33.842683 292144 out.go:204] - Generating certificates and keys ...
I0114 10:46:33.842861 292144 kubeadm.go:317] [certs] Using existing ca certificate authority
I0114 10:46:33.842958 292144 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
I0114 10:46:34.088885 292144 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
I0114 10:46:34.344073 292144 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
I0114 10:46:34.423269 292144 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
I0114 10:46:34.810104 292144 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
I0114 10:46:34.898179 292144 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
I0114 10:46:34.898315 292144 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [calico-104157 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
I0114 10:46:35.019817 292144 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
I0114 10:46:35.019994 292144 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [calico-104157 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
I0114 10:46:35.259650 292144 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
I0114 10:46:35.324737 292144 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
I0114 10:46:36.166655 292144 kubeadm.go:317] [certs] Generating "sa" key and public key
I0114 10:46:36.166859 292144 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0114 10:46:36.247587 292144 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
I0114 10:46:36.492443 292144 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0114 10:46:36.579954 292144 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0114 10:46:36.628259 292144 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0114 10:46:36.640371 292144 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0114 10:46:36.641664 292144 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0114 10:46:36.641840 292144 kubeadm.go:317] [kubelet-start] Starting the kubelet
I0114 10:46:36.731820 292144 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0114 10:46:36.734027 292144 out.go:204] - Booting up control plane ...
I0114 10:46:36.734145 292144 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0114 10:46:36.736797 292144 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0114 10:46:36.738118 292144 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0114 10:46:36.740485 292144 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0114 10:46:36.743338 292144 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0114 10:46:45.246795 292144 kubeadm.go:317] [apiclient] All control plane components are healthy after 8.503418 seconds
I0114 10:46:45.246939 292144 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0114 10:46:45.255615 292144 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0114 10:46:45.773124 292144 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
I0114 10:46:45.773352 292144 kubeadm.go:317] [mark-control-plane] Marking the node calico-104157 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0114 10:46:46.283120 292144 kubeadm.go:317] [bootstrap-token] Using token: ii41lw.n6eo98nyzvpvlg7t
I0114 10:46:46.284731 292144 out.go:204] - Configuring RBAC rules ...
I0114 10:46:46.284902 292144 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0114 10:46:46.287617 292144 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0114 10:46:46.292715 292144 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0114 10:46:46.294765 292144 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0114 10:46:46.296746 292144 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0114 10:46:46.298644 292144 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0114 10:46:46.306253 292144 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0114 10:46:46.567348 292144 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
I0114 10:46:46.691219 292144 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
I0114 10:46:46.698230 292144 kubeadm.go:317]
I0114 10:46:46.698319 292144 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
I0114 10:46:46.698329 292144 kubeadm.go:317]
I0114 10:46:46.698421 292144 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
I0114 10:46:46.698431 292144 kubeadm.go:317]
I0114 10:46:46.698465 292144 kubeadm.go:317] mkdir -p $HOME/.kube
I0114 10:46:46.698578 292144 kubeadm.go:317] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0114 10:46:46.698641 292144 kubeadm.go:317] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0114 10:46:46.698648 292144 kubeadm.go:317]
I0114 10:46:46.698707 292144 kubeadm.go:317] Alternatively, if you are the root user, you can run:
I0114 10:46:46.698713 292144 kubeadm.go:317]
I0114 10:46:46.698764 292144 kubeadm.go:317] export KUBECONFIG=/etc/kubernetes/admin.conf
I0114 10:46:46.698770 292144 kubeadm.go:317]
I0114 10:46:46.698830 292144 kubeadm.go:317] You should now deploy a pod network to the cluster.
I0114 10:46:46.698922 292144 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0114 10:46:46.699005 292144 kubeadm.go:317] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0114 10:46:46.699011 292144 kubeadm.go:317]
I0114 10:46:46.699106 292144 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
I0114 10:46:46.699214 292144 kubeadm.go:317] and service account keys on each node and then running the following as root:
I0114 10:46:46.699220 292144 kubeadm.go:317]
I0114 10:46:46.699320 292144 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token ii41lw.n6eo98nyzvpvlg7t \
I0114 10:46:46.699442 292144 kubeadm.go:317] --discovery-token-ca-cert-hash sha256:b22f90ce03aee56a5fd9cd44cd1d56d7f6348dfd5d62ec4df390ac49efb4561d \
I0114 10:46:46.699724 292144 kubeadm.go:317] --control-plane
I0114 10:46:46.699743 292144 kubeadm.go:317]
I0114 10:46:46.699843 292144 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
I0114 10:46:46.699849 292144 kubeadm.go:317]
I0114 10:46:46.699946 292144 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token ii41lw.n6eo98nyzvpvlg7t \
I0114 10:46:46.700062 292144 kubeadm.go:317] --discovery-token-ca-cert-hash sha256:b22f90ce03aee56a5fd9cd44cd1d56d7f6348dfd5d62ec4df390ac49efb4561d
I0114 10:46:46.706705 292144 kubeadm.go:317] W0114 10:46:33.566847 1191 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0114 10:46:46.706978 292144 kubeadm.go:317] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
I0114 10:46:46.707110 292144 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0114 10:46:46.707139 292144 cni.go:95] Creating CNI manager for "calico"
I0114 10:46:46.709487 292144 out.go:177] * Configuring Calico (Container Networking Interface) ...
I0114 10:46:46.711167 292144 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
I0114 10:46:46.711186 292144 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202045 bytes)
I0114 10:46:46.731807 292144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0114 10:46:48.385514 292144 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.653675338s)
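The manifest is applied by shelling out to the version-matched kubectl binary with an explicit kubeconfig rather than going through client-go. A stripped-down equivalent, with paths taken from the log above (illustrative only; minikube executes this via its ssh_runner):

// applycni.go: sketch of applying the CNI manifest the way the log shows.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command(
		"/var/lib/minikube/binaries/v1.25.3/kubectl",
		"apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml",
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}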
I0114 10:46:48.385558 292144 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0114 10:46:48.385681 292144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:46:48.385748 292144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=59da54e5a04973bd17dc62cf57cb4173bab7bf81 minikube.k8s.io/name=calico-104157 minikube.k8s.io/updated_at=2023_01_14T10_46_48_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:46:48.484132 292144 ops.go:34] apiserver oom_adj: -16
I0114 10:46:48.484704 292144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:46:49.089601 292144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:46:49.589941 292144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:46:50.089488 292144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:46:50.589067 292144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:46:51.090000 292144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:46:51.589715 292144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:46:52.089092 292144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:46:52.589101 292144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:46:53.089922 292144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:46:53.589627 292144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:46:54.089971 292144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:46:54.589997 292144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:46:55.090034 292144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:46:55.589070 292144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:46:56.089069 292144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:46:56.589961 292144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:46:57.089341 292144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:46:57.589087 292144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:46:58.089083 292144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:46:58.589681 292144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:46:59.089601 292144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:46:59.589064 292144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:46:59.933686 292144 kubeadm.go:1067] duration metric: took 11.548054834s to wait for elevateKubeSystemPrivileges.
I0114 10:46:59.933727 292144 kubeadm.go:398] StartCluster complete in 26.441296467s
I0114 10:46:59.933750 292144 settings.go:142] acquiring lock: {Name:mk83cbb48d204eeeb809b03eddeb625ad84ebdf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 10:46:59.933860 292144 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/15642-4687/kubeconfig
I0114 10:46:59.935881 292144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-4687/kubeconfig: {Name:mkfe9a6ae9ce000cecb12a8a9bfe1b38074fe4cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 10:47:00.488533 292144 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-104157" rescaled to 1
I0114 10:47:00.488591 292144 start.go:212] Will wait 5m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0114 10:47:00.490631 292144 out.go:177] * Verifying Kubernetes components...
I0114 10:47:00.488761 292144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0114 10:47:00.489008 292144 config.go:180] Loaded profile config "calico-104157": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0114 10:47:00.489026 292144 addons.go:486] enableAddons start: toEnable=map[], additional=[]
I0114 10:47:00.492207 292144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0114 10:47:00.492214 292144 addons.go:65] Setting storage-provisioner=true in profile "calico-104157"
I0114 10:47:00.492233 292144 addons.go:227] Setting addon storage-provisioner=true in "calico-104157"
W0114 10:47:00.492249 292144 addons.go:236] addon storage-provisioner should already be in state true
I0114 10:47:00.492312 292144 host.go:66] Checking if "calico-104157" exists ...
I0114 10:47:00.492383 292144 addons.go:65] Setting default-storageclass=true in profile "calico-104157"
I0114 10:47:00.492404 292144 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-104157"
I0114 10:47:00.492711 292144 cli_runner.go:164] Run: docker container inspect calico-104157 --format={{.State.Status}}
I0114 10:47:00.492873 292144 cli_runner.go:164] Run: docker container inspect calico-104157 --format={{.State.Status}}
I0114 10:47:00.535702 292144 addons.go:227] Setting addon default-storageclass=true in "calico-104157"
W0114 10:47:00.535727 292144 addons.go:236] addon default-storageclass should already be in state true
I0114 10:47:00.535756 292144 host.go:66] Checking if "calico-104157" exists ...
I0114 10:47:00.536193 292144 cli_runner.go:164] Run: docker container inspect calico-104157 --format={{.State.Status}}
I0114 10:47:00.546315 292144 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0114 10:47:00.553165 292144 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0114 10:47:00.553192 292144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0114 10:47:00.553252 292144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-104157
I0114 10:47:00.585283 292144 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I0114 10:47:00.585307 292144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0114 10:47:00.585359 292144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-104157
I0114 10:47:00.599299 292144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33010 SSHKeyPath:/home/jenkins/minikube-integration/15642-4687/.minikube/machines/calico-104157/id_rsa Username:docker}
I0114 10:47:00.625506 292144 node_ready.go:35] waiting up to 5m0s for node "calico-104157" to be "Ready" ...
I0114 10:47:00.625839 292144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.94.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0114 10:47:00.628110 292144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33010 SSHKeyPath:/home/jenkins/minikube-integration/15642-4687/.minikube/machines/calico-104157/id_rsa Username:docker}
I0114 10:47:00.629979 292144 node_ready.go:49] node "calico-104157" has status "Ready":"True"
I0114 10:47:00.629990 292144 node_ready.go:38] duration metric: took 4.453314ms waiting for node "calico-104157" to be "Ready" ...
I0114 10:47:00.629999 292144 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0114 10:47:00.640381 292144 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace to be "Ready" ...
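From here the log repeats pod_ready.go:102 probes every few seconds: each line is one unsuccessful check of the pod's Ready condition, and the loop gives up once its deadline passes. A client-go sketch of that polling pattern follows (assumes k8s.io/client-go and friends; minikube's actual pod_ready.go differs in detail, and the 5-minute budget here just mirrors the "waiting up to 5m0s" line above):

// podready.go: poll a pod's Ready condition until true or timeout.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	const ns, name = "kube-system", "calico-kube-controllers-7df895d496-56gs9"
	err = wait.PollImmediate(500*time.Millisecond, 5*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient errors as "not ready yet"
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Printf("pod %q has status \"Ready\":%q\n", name, c.Status)
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	if err != nil {
		log.Fatalf("timed out waiting for the condition: %v", err)
	}
}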
I0114 10:47:00.707954 292144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0114 10:47:00.798835 292144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0114 10:47:02.068279 292144 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.94.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.442402991s)
I0114 10:47:02.068310 292144 start.go:833] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS
I0114 10:47:02.273098 292144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.565111139s)
I0114 10:47:02.273160 292144 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.474245976s)
I0114 10:47:02.275435 292144 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0114 10:47:02.276983 292144 addons.go:488] enableAddons completed in 1.787965553s
I0114 10:47:02.673610 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:47:05.174114 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:47:07.648737 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:47:09.672579 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:47:12.149752 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:47:14.653324 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:47:16.672837 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:47:19.149027 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:47:21.149575 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:47:23.149822 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:47:25.150122 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:47:27.648999 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:47:29.649679 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:47:32.149102 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:47:34.149231 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:47:36.648745 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:47:38.650287 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:47:41.148939 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:47:43.149048 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:47:45.649256 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:47:48.150387 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:47:50.649913 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:47:52.652472 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:47:55.149378 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:47:57.149483 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:47:59.649882 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:48:02.149359 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:48:04.150286 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:48:06.649532 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:48:09.149163 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:48:11.150679 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:48:13.650050 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:48:16.148555 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:48:18.148909 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:48:20.649138 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:48:22.649359 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:48:25.148888 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:48:27.150366 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:48:29.648728 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:48:31.649598 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:48:34.149136 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:48:36.648692 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:48:38.648754 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:48:40.650564 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:48:43.149736 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:48:45.649598 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:48:48.148659 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:48:50.648605 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:48:52.649166 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:48:55.148516 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:48:57.648290 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:48:59.649545 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:49:02.148715 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:49:04.648193 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:49:06.648694 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:49:08.648968 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:49:10.649547 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:49:12.650092 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:49:15.148650 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:49:17.149388 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:49:19.649423 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:49:22.148645 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:49:24.149495 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:49:26.649690 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:49:29.148684 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:49:31.649224 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:49:33.649861 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:49:36.149237 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:49:38.649043 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:49:40.650385 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:49:43.149021 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:49:45.648731 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:49:47.649870 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:49:50.148918 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:49:52.149835 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:49:54.650152 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:49:57.148492 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:49:59.149303 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:50:01.149359 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:50:03.649304 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:50:06.148828 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:50:08.648686 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:50:10.648741 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:50:13.148686 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:50:15.648736 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:50:17.648951 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:50:20.149265 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:50:22.648652 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:50:24.648830 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:50:26.649356 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:50:29.148248 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:50:31.149641 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:50:33.648856 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:50:36.148607 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:50:38.148638 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:50:40.148977 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:50:42.648701 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:50:44.648836 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:50:46.649414 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:50:49.149233 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:50:51.649296 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:50:54.148680 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:50:56.648464 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:50:58.648853 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:51:00.651839 292144 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace has status "Ready":"False"
I0114 10:51:00.651863 292144 pod_ready.go:81] duration metric: took 4m0.011403755s waiting for pod "calico-kube-controllers-7df895d496-56gs9" in "kube-system" namespace to be "Ready" ...
E0114 10:51:00.651871 292144 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
I0114 10:51:00.651880 292144 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-flmfj" in "kube-system" namespace to be "Ready" ...
I0114 10:51:02.662727 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:51:05.163054 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:51:07.662465 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:51:10.162856 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:51:12.661809 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:51:14.661986 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:51:16.662587 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:51:19.162268 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:51:21.662886 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:51:24.162007 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:51:26.662314 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:51:28.662358 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:51:31.161950 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:51:33.662458 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:51:36.162328 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:51:38.662766 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:51:41.162824 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:51:43.662306 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:51:45.662624 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:51:48.161646 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:51:50.162176 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:51:52.662447 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:51:55.162158 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:51:57.162478 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:51:59.663772 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:52:02.161230 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:52:04.162605 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:52:06.662190 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:52:08.662466 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:52:11.161806 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:52:13.162286 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:52:15.162327 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:52:17.662062 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:52:20.163863 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:52:22.662265 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:52:25.162508 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:52:27.162707 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:52:29.162914 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:52:31.662422 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:52:33.662471 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:52:36.162337 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:52:38.662245 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:52:40.662821 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:52:43.163041 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:52:45.662493 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:52:48.162639 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:52:50.163614 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:52:52.663099 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:52:54.663142 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:52:57.162835 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:52:59.163483 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:53:01.662618 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:53:03.662818 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:53:06.161838 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:53:08.161902 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:53:10.162606 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:53:12.162913 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:53:14.662554 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:53:17.161970 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:53:19.662217 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:53:21.662309 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:53:23.662972 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:53:26.161773 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:53:28.162045 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:53:30.164375 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:53:32.662842 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:53:34.663118 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:53:37.162073 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:53:39.662611 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:53:42.162840 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:53:44.662627 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:53:47.162177 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:53:49.164627 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:53:51.662191 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:53:53.662365 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:53:56.162770 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:53:58.662057 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:54:01.162344 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:54:03.662051 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:54:05.662085 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:54:07.662184 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:54:09.662531 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:54:12.165693 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:54:14.662208 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:54:17.162099 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:54:19.662207 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:54:21.662559 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:54:24.162309 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:54:26.661902 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:54:28.662545 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:54:30.662606 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:54:33.161866 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:54:35.662258 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:54:38.163035 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:54:40.661836 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:54:42.662632 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:54:44.663336 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:54:47.162600 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:54:49.661797 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:54:51.663054 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:54:54.161912 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:54:56.162345 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:54:58.162979 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:55:00.662862 292144 pod_ready.go:102] pod "calico-node-flmfj" in "kube-system" namespace has status "Ready":"False"
I0114 10:55:00.667844 292144 pod_ready.go:81] duration metric: took 4m0.015954647s waiting for pod "calico-node-flmfj" in "kube-system" namespace to be "Ready" ...
E0114 10:55:00.667864 292144 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
I0114 10:55:00.667878 292144 pod_ready.go:38] duration metric: took 8m0.037870753s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0114 10:55:00.670863 292144 out.go:177]
W0114 10:55:00.672520 292144 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
W0114 10:55:00.672541 292144 out.go:239] *
W0114 10:55:00.673843 292144 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0114 10:55:00.675657 292144 out.go:177]
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (519.22s)
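
Context on the failure above: the pod_ready.go lines record a poll loop that re-checks a pod's Ready condition roughly every 2.5 seconds until its wait budget expires. Neither calico-kube-controllers-7df895d496-56gs9 nor calico-node-flmfj ever reported Ready, so the overall node wait timed out and minikube exited with GUEST_START (exit status 80). Below is a minimal, illustrative sketch of such a readiness poll using client-go. It is not minikube's actual implementation; the namespace and pod name are taken from the log purely for illustration, and the kubeconfig is assumed to be at the default location.

// readiness poll sketch (client-go); simplified stand-in for the
// pod_ready.go wait loop seen in the log above, not minikube's code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod's status until its Ready condition is True,
// mirroring the repeated `pod "..." has status "Ready":"False"` records.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API errors as transient and keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil // no Ready condition reported yet
	})
}

func main() {
	// Assumes a kubeconfig at the default ~/.kube/config location.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// calico-node-flmfj is the pod that never became Ready in the run above.
	if err := waitPodReady(context.Background(), cs, "kube-system", "calico-node-flmfj", 5*time.Minute); err != nil {
		fmt.Println("pod never became Ready:", err)
	}
}

For an actual investigation, the next step would be the `minikube logs --file=logs.txt` output that the error box suggests, plus describing the calico-node pod and its events to see which container (often the calico-node container's readiness probe, or a failing install-cni init container) is holding the pod out of Ready.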