=== RUN TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run: out/minikube-linux-amd64 start -p calico-231218 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker --container-runtime=docker
E1101 23:17:56.461949 10122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/functional-225020/client.crt: no such file or directory
=== CONT TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-231218 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker --container-runtime=docker: exit status 80 (8m40.906009129s)
-- stdout --
* [calico-231218] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=15232
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/15232-3679/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3679/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting control plane node calico-231218 in cluster calico-231218
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2048MB) ...
* Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring Calico (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: default-storageclass, storage-provisioner
-- /stdout --
** stderr **
I1101 23:17:48.020598 286969 out.go:296] Setting OutFile to fd 1 ...
I1101 23:17:48.020759 286969 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1101 23:17:48.020769 286969 out.go:309] Setting ErrFile to fd 2...
I1101 23:17:48.020781 286969 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1101 23:17:48.020877 286969 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-3679/.minikube/bin
I1101 23:17:48.021463 286969 out.go:303] Setting JSON to false
I1101 23:17:48.023753 286969 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3618,"bootTime":1667341050,"procs":1491,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1101 23:17:48.023823 286969 start.go:126] virtualization: kvm guest
I1101 23:17:48.026678 286969 out.go:177] * [calico-231218] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
I1101 23:17:48.028336 286969 out.go:177] - MINIKUBE_LOCATION=15232
I1101 23:17:48.028307 286969 notify.go:220] Checking for updates...
I1101 23:17:48.031365 286969 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1101 23:17:48.032847 286969 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15232-3679/kubeconfig
I1101 23:17:48.034542 286969 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3679/.minikube
I1101 23:17:48.036106 286969 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I1101 23:17:48.037951 286969 config.go:180] Loaded profile config "cilium-231218": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1101 23:17:48.038045 286969 config.go:180] Loaded profile config "kindnet-231218": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1101 23:17:48.038117 286969 config.go:180] Loaded profile config "kubernetes-upgrade-231403": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1101 23:17:48.038160 286969 driver.go:365] Setting default libvirt URI to qemu:///system
I1101 23:17:48.073642 286969 docker.go:137] docker version: linux-20.10.21
I1101 23:17:48.073729 286969 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1101 23:17:48.184081 286969 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2022-11-01 23:17:48.097990549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1101 23:17:48.184219 286969 docker.go:254] overlay module found
I1101 23:17:48.186865 286969 out.go:177] * Using the docker driver based on user configuration
I1101 23:17:48.188459 286969 start.go:282] selected driver: docker
I1101 23:17:48.188480 286969 start.go:808] validating driver "docker" against <nil>
I1101 23:17:48.188498 286969 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1101 23:17:48.189332 286969 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1101 23:17:48.292175 286969 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2022-11-01 23:17:48.213690647 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1101 23:17:48.292288 286969 start_flags.go:303] no existing cluster config was found, will generate one from the flags
I1101 23:17:48.292440 286969 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1101 23:17:48.294720 286969 out.go:177] * Using Docker driver with root privileges
I1101 23:17:48.296289 286969 cni.go:95] Creating CNI manager for "calico"
I1101 23:17:48.296307 286969 start_flags.go:312] Found "Calico" CNI - setting NetworkPlugin=cni
I1101 23:17:48.296325 286969 start_flags.go:317] config:
{Name:calico-231218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-231218 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1101 23:17:48.298187 286969 out.go:177] * Starting control plane node calico-231218 in cluster calico-231218
I1101 23:17:48.299894 286969 cache.go:120] Beginning downloading kic base image for docker with docker
I1101 23:17:48.301499 286969 out.go:177] * Pulling base image ...
I1101 23:17:48.303089 286969 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I1101 23:17:48.303120 286969 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
I1101 23:17:48.303125 286969 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15232-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
I1101 23:17:48.303173 286969 cache.go:57] Caching tarball of preloaded images
I1101 23:17:48.303411 286969 preload.go:174] Found /home/jenkins/minikube-integration/15232-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1101 23:17:48.303428 286969 cache.go:60] Finished verifying existence of preloaded tar for v1.25.3 on docker
I1101 23:17:48.303529 286969 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/config.json ...
I1101 23:17:48.303548 286969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/config.json: {Name:mk3f6a3c25aba7013cc063ed30572c9bd2c3ee60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 23:17:48.328263 286969 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
I1101 23:17:48.328287 286969 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
I1101 23:17:48.328296 286969 cache.go:208] Successfully downloaded all kic artifacts
I1101 23:17:48.328330 286969 start.go:364] acquiring machines lock for calico-231218: {Name:mk081f3d154348c5959e652dce31a7b2bbde560e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1101 23:17:48.328448 286969 start.go:368] acquired machines lock for "calico-231218" in 99.319µs
I1101 23:17:48.328470 286969 start.go:93] Provisioning new machine with config: &{Name:calico-231218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-231218 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I1101 23:17:48.328553 286969 start.go:125] createHost starting for "" (driver="docker")
I1101 23:17:48.331086 286969 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
I1101 23:17:48.331359 286969 start.go:159] libmachine.API.Create for "calico-231218" (driver="docker")
I1101 23:17:48.331391 286969 client.go:168] LocalClient.Create starting
I1101 23:17:48.331518 286969 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15232-3679/.minikube/certs/ca.pem
I1101 23:17:48.331556 286969 main.go:134] libmachine: Decoding PEM data...
I1101 23:17:48.331572 286969 main.go:134] libmachine: Parsing certificate...
I1101 23:17:48.331623 286969 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15232-3679/.minikube/certs/cert.pem
I1101 23:17:48.331642 286969 main.go:134] libmachine: Decoding PEM data...
I1101 23:17:48.331652 286969 main.go:134] libmachine: Parsing certificate...
I1101 23:17:48.331978 286969 cli_runner.go:164] Run: docker network inspect calico-231218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1101 23:17:48.354734 286969 cli_runner.go:211] docker network inspect calico-231218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1101 23:17:48.354799 286969 network_create.go:272] running [docker network inspect calico-231218] to gather additional debugging logs...
I1101 23:17:48.354820 286969 cli_runner.go:164] Run: docker network inspect calico-231218
W1101 23:17:48.376636 286969 cli_runner.go:211] docker network inspect calico-231218 returned with exit code 1
I1101 23:17:48.376666 286969 network_create.go:275] error running [docker network inspect calico-231218]: docker network inspect calico-231218: exit status 1
stdout:
[]
stderr:
Error: No such network: calico-231218
I1101 23:17:48.376676 286969 network_create.go:277] output of [docker network inspect calico-231218]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: calico-231218
** /stderr **
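For reference, the non-zero exit above is the expected probe result: `docker network inspect` fails when the named network does not exist, and that failure is what signals the network still needs to be created. A minimal Go sketch of that probe (illustrative names, not minikube's actual cli_runner API):

package main

import (
	"fmt"
	"os/exec"
)

// networkExists probes a docker network by name; `docker network inspect`
// exits non-zero (stderr: "Error: No such network: ...") when it is absent.
func networkExists(name string) bool {
	return exec.Command("docker", "network", "inspect", name).Run() == nil
}

func main() {
	if !networkExists("calico-231218") {
		fmt.Println("network calico-231218 not found, will create it")
	}
}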
I1101 23:17:48.376721 286969 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1101 23:17:48.399893 286969 network.go:246] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-56ece1181dca IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:ab:28:5c:ef}}
I1101 23:17:48.400585 286969 network.go:246] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-b1ce674c697e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:2d:b6:91:7e}}
I1101 23:17:48.401267 286969 network.go:246] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-6b8a4a69a897 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:8b:e6:a0:3e}}
I1101 23:17:48.401953 286969 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.76.0:0xc000594530] misses:0}
I1101 23:17:48.401986 286969 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
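The three "skipping subnet" entries above step through candidate private /24 blocks (…49.0, …58.0, …67.0, in increments of 9) until a free one is found. A rough stdlib-only Go sketch of that walk, assuming the same candidate sequence (illustrative, not minikube's actual network package, which also consults docker's own networks):

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet returns the first 192.168.x.0/24 (x = 49, 58, 67, ...)
// that no local interface address falls inside.
func firstFreeSubnet() (*net.IPNet, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return nil, err
	}
	for third := 49; third <= 247; third += 9 {
		_, candidate, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
		taken := false
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && candidate.Contains(ipnet.IP) {
				taken = true
				break
			}
		}
		if !taken {
			return candidate, nil
		}
	}
	return nil, fmt.Errorf("no free /24 found")
}

func main() {
	subnet, err := firstFreeSubnet()
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet:", subnet)
}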
I1101 23:17:48.402001 286969 network_create.go:115] attempt to create docker network calico-231218 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I1101 23:17:48.402050 286969 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-231218 calico-231218
I1101 23:17:48.473504 286969 network_create.go:99] docker network calico-231218 192.168.76.0/24 created
I1101 23:17:48.473538 286969 kic.go:106] calculated static IP "192.168.76.2" for the "calico-231218" container
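The "calculated static IP" above is simply the first client address of the freshly created /24: the gateway takes .1, so the node container gets .2. A tiny Go sketch of that arithmetic (illustrative; minikube's kic package has its own helper):

package main

import (
	"fmt"
	"net"
)

// firstClientIP: network .0 -> gateway .1 -> first client .2 (IPv4 /24 assumed).
func firstClientIP(cidr string) (net.IP, error) {
	_, subnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	ip := subnet.IP.To4()
	return net.IPv4(ip[0], ip[1], ip[2], ip[3]+2), nil
}

func main() {
	ip, _ := firstClientIP("192.168.76.0/24")
	fmt.Println("static IP:", ip) // 192.168.76.2
}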
I1101 23:17:48.473598 286969 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1101 23:17:48.504436 286969 cli_runner.go:164] Run: docker volume create calico-231218 --label name.minikube.sigs.k8s.io=calico-231218 --label created_by.minikube.sigs.k8s.io=true
I1101 23:17:48.528788 286969 oci.go:103] Successfully created a docker volume calico-231218
I1101 23:17:48.528864 286969 cli_runner.go:164] Run: docker run --rm --name calico-231218-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-231218 --entrypoint /usr/bin/test -v calico-231218:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
I1101 23:17:49.094572 286969 oci.go:107] Successfully prepared a docker volume calico-231218
I1101 23:17:49.094616 286969 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I1101 23:17:49.094633 286969 kic.go:179] Starting extracting preloaded images to volume ...
I1101 23:17:49.094683 286969 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15232-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-231218:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
I1101 23:17:53.005665 286969 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15232-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-231218:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir: (3.91093506s)
I1101 23:17:53.005709 286969 kic.go:188] duration metric: took 3.911070 seconds to extract preloaded images to volume
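The ~3.9s step above seeds the node's /var volume by running tar inside a throwaway kicbase container, mirroring the logged `docker run --entrypoint /usr/bin/tar` command. A hedged Go sketch of that invocation (placeholder tarball path; error handling illustrative):

package main

import (
	"log"
	"os/exec"
)

// extractPreload bind-mounts the lz4 tarball read-only plus the target volume,
// then runs tar as the container entrypoint to unpack into the volume.
func extractPreload(tarball, volume, image string) error {
	return exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").Run()
}

func main() {
	if err := extractPreload("/path/to/preloaded-images.tar.lz4", "calico-231218",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219"); err != nil {
		log.Fatal(err)
	}
}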
W1101 23:17:53.005867 286969 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1101 23:17:53.006047 286969 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1101 23:17:53.122167 286969 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-231218 --name calico-231218 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-231218 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-231218 --network calico-231218 --ip 192.168.76.2 --volume calico-231218:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
I1101 23:17:53.551598 286969 cli_runner.go:164] Run: docker container inspect calico-231218 --format={{.State.Running}}
I1101 23:17:53.582690 286969 cli_runner.go:164] Run: docker container inspect calico-231218 --format={{.State.Status}}
I1101 23:17:53.607528 286969 cli_runner.go:164] Run: docker exec calico-231218 stat /var/lib/dpkg/alternatives/iptables
I1101 23:17:53.658660 286969 oci.go:144] the created container "calico-231218" has a running status.
I1101 23:17:53.658695 286969 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/15232-3679/.minikube/machines/calico-231218/id_rsa...
I1101 23:17:53.746468 286969 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15232-3679/.minikube/machines/calico-231218/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1101 23:17:53.821354 286969 cli_runner.go:164] Run: docker container inspect calico-231218 --format={{.State.Status}}
I1101 23:17:53.851047 286969 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1101 23:17:53.851076 286969 kic_runner.go:114] Args: [docker exec --privileged calico-231218 chown docker:docker /home/docker/.ssh/authorized_keys]
I1101 23:17:53.923299 286969 cli_runner.go:164] Run: docker container inspect calico-231218 --format={{.State.Status}}
I1101 23:17:53.954267 286969 machine.go:88] provisioning docker machine ...
I1101 23:17:53.954313 286969 ubuntu.go:169] provisioning hostname "calico-231218"
I1101 23:17:53.954366 286969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-231218
I1101 23:17:53.985024 286969 main.go:134] libmachine: Using SSH client type: native
I1101 23:17:53.985259 286969 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49389 <nil> <nil>}
I1101 23:17:53.985286 286969 main.go:134] libmachine: About to run SSH command:
sudo hostname calico-231218 && echo "calico-231218" | sudo tee /etc/hostname
I1101 23:17:53.986008 286969 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43596->127.0.0.1:49389: read: connection reset by peer
I1101 23:17:57.112206 286969 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-231218
I1101 23:17:57.112272 286969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-231218
I1101 23:17:57.136825 286969 main.go:134] libmachine: Using SSH client type: native
I1101 23:17:57.137004 286969 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49389 <nil> <nil>}
I1101 23:17:57.137030 286969 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\scalico-231218' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-231218/g' /etc/hosts;
  else
    echo '127.0.1.1 calico-231218' | sudo tee -a /etc/hosts;
  fi
fi
I1101 23:17:57.254737 286969 main.go:134] libmachine: SSH cmd err, output: <nil>:
I1101 23:17:57.254764 286969 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15232-3679/.minikube CaCertPath:/home/jenkins/minikube-integration/15232-3679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15232-3679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15232-3679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15232-3679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15232-3679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15232-3679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15232-3679/.minikube}
I1101 23:17:57.254802 286969 ubuntu.go:177] setting up certificates
I1101 23:17:57.254812 286969 provision.go:83] configureAuth start
I1101 23:17:57.254858 286969 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-231218
I1101 23:17:57.280555 286969 provision.go:138] copyHostCerts
I1101 23:17:57.280615 286969 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3679/.minikube/key.pem, removing ...
I1101 23:17:57.280626 286969 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3679/.minikube/key.pem
I1101 23:17:57.280689 286969 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15232-3679/.minikube/key.pem (1679 bytes)
I1101 23:17:57.280759 286969 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3679/.minikube/ca.pem, removing ...
I1101 23:17:57.280768 286969 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3679/.minikube/ca.pem
I1101 23:17:57.280794 286969 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15232-3679/.minikube/ca.pem (1078 bytes)
I1101 23:17:57.281107 286969 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3679/.minikube/cert.pem, removing ...
I1101 23:17:57.281187 286969 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3679/.minikube/cert.pem
I1101 23:17:57.281257 286969 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15232-3679/.minikube/cert.pem (1123 bytes)
I1101 23:17:57.281359 286969 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15232-3679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15232-3679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15232-3679/.minikube/certs/ca-key.pem org=jenkins.calico-231218 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube calico-231218]
I1101 23:17:57.565590 286969 provision.go:172] copyRemoteCerts
I1101 23:17:57.565648 286969 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1101 23:17:57.565690 286969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-231218
I1101 23:17:57.592157 286969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:/home/jenkins/minikube-integration/15232-3679/.minikube/machines/calico-231218/id_rsa Username:docker}
I1101 23:17:57.674504 286969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1101 23:17:57.691510 286969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3679/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
I1101 23:17:57.708256 286969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1101 23:17:57.724672 286969 provision.go:86] duration metric: configureAuth took 469.849254ms
I1101 23:17:57.724699 286969 ubuntu.go:193] setting minikube options for container-runtime
I1101 23:17:57.724851 286969 config.go:180] Loaded profile config "calico-231218": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1101 23:17:57.724897 286969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-231218
I1101 23:17:57.749808 286969 main.go:134] libmachine: Using SSH client type: native
I1101 23:17:57.749955 286969 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49389 <nil> <nil>}
I1101 23:17:57.749968 286969 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1101 23:17:57.863113 286969 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
I1101 23:17:57.863137 286969 ubuntu.go:71] root file system type: overlay
I1101 23:17:57.863386 286969 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I1101 23:17:57.863451 286969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-231218
I1101 23:17:57.886927 286969 main.go:134] libmachine: Using SSH client type: native
I1101 23:17:57.887081 286969 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49389 <nil> <nil>}
I1101 23:17:57.887164 286969 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1101 23:17:58.012381 286969 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I1101 23:17:58.012447 286969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-231218
I1101 23:17:58.039728 286969 main.go:134] libmachine: Using SSH client type: native
I1101 23:17:58.039869 286969 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49389 <nil> <nil>}
I1101 23:17:58.039892 286969 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1101 23:17:58.715718 286969 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2022-10-18 18:18:12.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2022-11-01 23:17:58.005073358 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
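The diff output above is produced by the preceding SSH command, which stages the generated unit as docker.service.new and relies on diff's exit code for idempotency: diff -u exits 0 when the files match (nothing happens) and non-zero when they differ, so the || branch swaps the file in and restarts docker only on change. A compact Go sketch of that pattern (paths and service name taken from the log; error handling illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// syncUnit replaces a systemd unit only when the staged *.new copy differs.
func syncUnit(unit, service string) error {
	script := fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
		unit, service)
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	if err := syncUnit("/lib/systemd/system/docker.service", "docker"); err != nil {
		fmt.Println("unit sync failed:", err)
	}
}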
I1101 23:17:58.715756 286969 machine.go:91] provisioned docker machine in 4.761460167s
I1101 23:17:58.715767 286969 client.go:171] LocalClient.Create took 10.384370578s
I1101 23:17:58.715788 286969 start.go:167] duration metric: libmachine.API.Create for "calico-231218" took 10.38443121s
I1101 23:17:58.715803 286969 start.go:300] post-start starting for "calico-231218" (driver="docker")
I1101 23:17:58.715817 286969 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1101 23:17:58.715887 286969 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1101 23:17:58.715952 286969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-231218
I1101 23:17:58.741647 286969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:/home/jenkins/minikube-integration/15232-3679/.minikube/machines/calico-231218/id_rsa Username:docker}
I1101 23:17:58.830727 286969 ssh_runner.go:195] Run: cat /etc/os-release
I1101 23:17:58.833756 286969 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1101 23:17:58.833778 286969 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1101 23:17:58.833790 286969 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1101 23:17:58.833797 286969 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I1101 23:17:58.833807 286969 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-3679/.minikube/addons for local assets ...
I1101 23:17:58.833876 286969 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-3679/.minikube/files for local assets ...
I1101 23:17:58.833973 286969 filesync.go:149] local asset: /home/jenkins/minikube-integration/15232-3679/.minikube/files/etc/ssl/certs/101222.pem -> 101222.pem in /etc/ssl/certs
I1101 23:17:58.834095 286969 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1101 23:17:58.840608 286969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3679/.minikube/files/etc/ssl/certs/101222.pem --> /etc/ssl/certs/101222.pem (1708 bytes)
I1101 23:17:58.857601 286969 start.go:303] post-start completed in 141.781815ms
I1101 23:17:58.857910 286969 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-231218
I1101 23:17:58.882413 286969 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/config.json ...
I1101 23:17:58.882629 286969 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1101 23:17:58.882665 286969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-231218
I1101 23:17:58.906172 286969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:/home/jenkins/minikube-integration/15232-3679/.minikube/machines/calico-231218/id_rsa Username:docker}
I1101 23:17:58.987419 286969 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1101 23:17:58.991178 286969 start.go:128] duration metric: createHost completed in 10.662614071s
I1101 23:17:58.991203 286969 start.go:83] releasing machines lock for "calico-231218", held for 10.662742889s
I1101 23:17:58.991318 286969 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-231218
I1101 23:17:59.016536 286969 ssh_runner.go:195] Run: systemctl --version
I1101 23:17:59.016587 286969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-231218
I1101 23:17:59.016617 286969 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1101 23:17:59.016684 286969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-231218
I1101 23:17:59.041529 286969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:/home/jenkins/minikube-integration/15232-3679/.minikube/machines/calico-231218/id_rsa Username:docker}
I1101 23:17:59.041533 286969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:/home/jenkins/minikube-integration/15232-3679/.minikube/machines/calico-231218/id_rsa Username:docker}
I1101 23:17:59.156882 286969 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1101 23:17:59.164390 286969 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
I1101 23:17:59.177893 286969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 23:17:59.255675 286969 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I1101 23:17:59.336616 286969 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1101 23:17:59.346857 286969 cruntime.go:273] skipping containerd shutdown because we are bound to it
I1101 23:17:59.346929 286969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1101 23:17:59.356430 286969 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1101 23:17:59.368778 286969 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1101 23:17:59.452832 286969 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1101 23:17:59.532706 286969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 23:17:59.608122 286969 ssh_runner.go:195] Run: sudo systemctl restart docker
I1101 23:17:59.822866 286969 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1101 23:17:59.907363 286969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 23:17:59.982927 286969 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I1101 23:17:59.992033 286969 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1101 23:17:59.992078 286969 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I1101 23:17:59.995029 286969 start.go:472] Will wait 60s for crictl version
I1101 23:17:59.995089 286969 ssh_runner.go:195] Run: sudo crictl version
I1101 23:18:00.023183 286969 start.go:481] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.20
RuntimeApiVersion: 1.41.0
I1101 23:18:00.023270 286969 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1101 23:18:00.049405 286969 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1101 23:18:00.078662 286969 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
I1101 23:18:00.078734 286969 cli_runner.go:164] Run: docker network inspect calico-231218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1101 23:18:00.100913 286969 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I1101 23:18:00.104442 286969 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1101 23:18:00.115434 286969 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I1101 23:18:00.115487 286969 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1101 23:18:00.137229 286969 docker.go:613] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1101 23:18:00.137256 286969 docker.go:543] Images already preloaded, skipping extraction
I1101 23:18:00.137299 286969 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1101 23:18:00.160790 286969 docker.go:613] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1101 23:18:00.160815 286969 cache_images.go:84] Images are preloaded, skipping loading
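"Images are preloaded" above is concluded by listing repo:tag pairs and comparing them with the expected set. A minimal Go sketch of that verification (expected names copied from the stdout block above; the docker command is the same one the log runs):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List images the same way the log does: docker images --format {{.Repository}}:{{.Tag}}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	for _, want := range []string{
		"registry.k8s.io/kube-apiserver:v1.25.3",
		"registry.k8s.io/etcd:3.5.4-0",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	} {
		fmt.Println(want, "preloaded:", have[want])
	}
}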
I1101 23:18:00.160858 286969 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1101 23:18:00.229652 286969 cni.go:95] Creating CNI manager for "calico"
I1101 23:18:00.229682 286969 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1101 23:18:00.229703 286969 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-231218 NodeName:calico-231218 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
I1101 23:18:00.229855 286969 kubeadm.go:161] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.76.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /var/run/cri-dockerd.sock
  name: "calico-231218"
  kubeletExtraArgs:
    node-ip: 192.168.76.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.25.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1101 23:18:00.229936 286969 kubeadm.go:962] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=calico-231218 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.25.3 ClusterName:calico-231218 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
I1101 23:18:00.229981 286969 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
I1101 23:18:00.237094 286969 binaries.go:44] Found k8s binaries, skipping transfer
I1101 23:18:00.237155 286969 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1101 23:18:00.243904 286969 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (475 bytes)
I1101 23:18:00.256399 286969 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1101 23:18:00.268889 286969 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2036 bytes)
I1101 23:18:00.281388 286969 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I1101 23:18:00.284410 286969 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
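The one-liner above keeps /etc/hosts idempotent: it filters out any existing control-plane.minikube.internal entry, appends the current mapping, and copies the result back into place. A minimal Go sketch of the same filter-then-append update (running directly on the node rather than through ssh_runner is an assumption):

package main

import (
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing mapping for the control-plane name
		// (the shell version matches a tab-separated entry), keep the rest.
		if strings.HasSuffix(line, host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, "192.168.76.2\t"+host) // IP from this run
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}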
I1101 23:18:00.293506 286969 certs.go:54] Setting up /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218 for IP: 192.168.76.2
I1101 23:18:00.293621 286969 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15232-3679/.minikube/ca.key
I1101 23:18:00.293671 286969 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15232-3679/.minikube/proxy-client-ca.key
I1101 23:18:00.293724 286969 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/client.key
I1101 23:18:00.293749 286969 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/client.crt with IP's: []
I1101 23:18:00.391519 286969 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/client.crt ...
I1101 23:18:00.391548 286969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/client.crt: {Name:mka0d7a5798d9f0f7165cd3571a27b7eb11529be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 23:18:00.391739 286969 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/client.key ...
I1101 23:18:00.391757 286969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/client.key: {Name:mk60749c054ada2a7587ca83d94abccd13dd90d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 23:18:00.391898 286969 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/apiserver.key.31bdca25
I1101 23:18:00.391924 286969 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
I1101 23:18:00.520080 286969 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/apiserver.crt.31bdca25 ...
I1101 23:18:00.520107 286969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/apiserver.crt.31bdca25: {Name:mkb4f7310e195177e0b6ed96e271f801ad3875e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 23:18:00.520297 286969 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/apiserver.key.31bdca25 ...
I1101 23:18:00.520314 286969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/apiserver.key.31bdca25: {Name:mk4cc430babddb82c7e4a4aa5b52bfe1d7b96b99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 23:18:00.520440 286969 certs.go:320] copying /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/apiserver.crt
I1101 23:18:00.520516 286969 certs.go:324] copying /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/apiserver.key
I1101 23:18:00.520573 286969 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/proxy-client.key
I1101 23:18:00.520594 286969 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/proxy-client.crt with IP's: []
I1101 23:18:00.701963 286969 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/proxy-client.crt ...
I1101 23:18:00.701993 286969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/proxy-client.crt: {Name:mk2614fa5734c05f277dc121e747a284f708cd00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 23:18:00.702200 286969 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/proxy-client.key ...
I1101 23:18:00.702216 286969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/proxy-client.key: {Name:mka64173f9adc2c5fea86fe9499922afdb6b5a06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 23:18:00.702521 286969 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3679/.minikube/certs/home/jenkins/minikube-integration/15232-3679/.minikube/certs/10122.pem (1338 bytes)
W1101 23:18:00.702577 286969 certs.go:384] ignoring /home/jenkins/minikube-integration/15232-3679/.minikube/certs/home/jenkins/minikube-integration/15232-3679/.minikube/certs/10122_empty.pem, impossibly tiny 0 bytes
I1101 23:18:00.702596 286969 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3679/.minikube/certs/home/jenkins/minikube-integration/15232-3679/.minikube/certs/ca-key.pem (1675 bytes)
I1101 23:18:00.702628 286969 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3679/.minikube/certs/home/jenkins/minikube-integration/15232-3679/.minikube/certs/ca.pem (1078 bytes)
I1101 23:18:00.702653 286969 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3679/.minikube/certs/home/jenkins/minikube-integration/15232-3679/.minikube/certs/cert.pem (1123 bytes)
I1101 23:18:00.702682 286969 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3679/.minikube/certs/home/jenkins/minikube-integration/15232-3679/.minikube/certs/key.pem (1679 bytes)
I1101 23:18:00.702743 286969 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3679/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15232-3679/.minikube/files/etc/ssl/certs/101222.pem (1708 bytes)
I1101 23:18:00.703302 286969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1101 23:18:00.721452 286969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1101 23:18:00.738824 286969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1101 23:18:00.755582 286969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3679/.minikube/profiles/calico-231218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1101 23:18:00.772291 286969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1101 23:18:00.790935 286969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1101 23:18:00.808369 286969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1101 23:18:00.827356 286969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1101 23:18:00.844887 286969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3679/.minikube/files/etc/ssl/certs/101222.pem --> /usr/share/ca-certificates/101222.pem (1708 bytes)
I1101 23:18:00.862023 286969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1101 23:18:00.878785 286969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3679/.minikube/certs/10122.pem --> /usr/share/ca-certificates/10122.pem (1338 bytes)
I1101 23:18:00.895352 286969 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1101 23:18:00.907253 286969 ssh_runner.go:195] Run: openssl version
I1101 23:18:00.911804 286969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101222.pem && ln -fs /usr/share/ca-certificates/101222.pem /etc/ssl/certs/101222.pem"
I1101 23:18:00.919055 286969 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101222.pem
I1101 23:18:00.922184 286969 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov 1 22:50 /usr/share/ca-certificates/101222.pem
I1101 23:18:00.922223 286969 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101222.pem
I1101 23:18:00.927142 286969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101222.pem /etc/ssl/certs/3ec20f2e.0"
I1101 23:18:00.935540 286969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1101 23:18:00.943410 286969 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1101 23:18:00.946429 286969 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov 1 22:46 /usr/share/ca-certificates/minikubeCA.pem
I1101 23:18:00.946482 286969 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1101 23:18:00.951557 286969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1101 23:18:00.958822 286969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10122.pem && ln -fs /usr/share/ca-certificates/10122.pem /etc/ssl/certs/10122.pem"
I1101 23:18:00.966285 286969 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10122.pem
I1101 23:18:00.969576 286969 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov 1 22:50 /usr/share/ca-certificates/10122.pem
I1101 23:18:00.969615 286969 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10122.pem
I1101 23:18:00.975258 286969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10122.pem /etc/ssl/certs/51391683.0"
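The openssl/ln pairs above follow OpenSSL's hashed-directory convention: every trusted certificate in /etc/ssl/certs is reachable through a symlink named <subject-hash>.0, which is how TLS clients locate an issuer. A minimal Go sketch of creating one such link, shelling out to openssl exactly as the log does (local execution on the node is an assumption):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // b5213941 for this CA, per the log
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // behave like ln -fs: replace a stale link
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}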
I1101 23:18:00.983037 286969 kubeadm.go:396] StartCluster: {Name:calico-231218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-231218 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1101 23:18:00.983155 286969 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1101 23:18:01.004727 286969 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1101 23:18:01.011543 286969 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1101 23:18:01.019635 286969 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I1101 23:18:01.019678 286969 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1101 23:18:01.026588 286969 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1101 23:18:01.026629 286969 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1101 23:18:01.077891 286969 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
I1101 23:18:01.077971 286969 kubeadm.go:317] [preflight] Running pre-flight checks
I1101 23:18:01.117959 286969 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
I1101 23:18:01.118043 286969 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
I1101 23:18:01.118107 286969 kubeadm.go:317] OS: Linux
I1101 23:18:01.118175 286969 kubeadm.go:317] CGROUPS_CPU: enabled
I1101 23:18:01.118242 286969 kubeadm.go:317] CGROUPS_CPUACCT: enabled
I1101 23:18:01.118408 286969 kubeadm.go:317] CGROUPS_CPUSET: enabled
I1101 23:18:01.118481 286969 kubeadm.go:317] CGROUPS_DEVICES: enabled
I1101 23:18:01.118558 286969 kubeadm.go:317] CGROUPS_FREEZER: enabled
I1101 23:18:01.118656 286969 kubeadm.go:317] CGROUPS_MEMORY: enabled
I1101 23:18:01.118721 286969 kubeadm.go:317] CGROUPS_PIDS: enabled
I1101 23:18:01.118771 286969 kubeadm.go:317] CGROUPS_HUGETLB: enabled
I1101 23:18:01.118853 286969 kubeadm.go:317] CGROUPS_BLKIO: enabled
I1101 23:18:01.187816 286969 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
I1101 23:18:01.187933 286969 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1101 23:18:01.188035 286969 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1101 23:18:01.327963 286969 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1101 23:18:01.331629 286969 out.go:204] - Generating certificates and keys ...
I1101 23:18:01.331785 286969 kubeadm.go:317] [certs] Using existing ca certificate authority
I1101 23:18:01.331863 286969 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
I1101 23:18:01.419487 286969 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
I1101 23:18:01.909552 286969 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
I1101 23:18:02.030592 286969 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
I1101 23:18:02.193095 286969 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
I1101 23:18:02.327010 286969 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
I1101 23:18:02.327245 286969 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [calico-231218 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
I1101 23:18:02.482374 286969 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
I1101 23:18:02.482614 286969 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [calico-231218 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
I1101 23:18:02.582474 286969 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
I1101 23:18:02.697853 286969 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
I1101 23:18:02.930174 286969 kubeadm.go:317] [certs] Generating "sa" key and public key
I1101 23:18:02.930371 286969 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1101 23:18:03.062916 286969 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
I1101 23:18:03.248214 286969 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1101 23:18:03.356442 286969 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1101 23:18:03.552821 286969 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1101 23:18:03.615843 286969 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1101 23:18:03.616972 286969 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1101 23:18:03.617061 286969 kubeadm.go:317] [kubelet-start] Starting the kubelet
I1101 23:18:03.702384 286969 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1101 23:18:03.704692 286969 out.go:204] - Booting up control plane ...
I1101 23:18:03.704818 286969 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1101 23:18:03.706586 286969 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1101 23:18:03.707549 286969 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1101 23:18:03.709844 286969 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1101 23:18:03.711937 286969 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1101 23:18:14.214516 286969 kubeadm.go:317] [apiclient] All control plane components are healthy after 10.502419 seconds
I1101 23:18:14.214734 286969 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1101 23:18:14.225056 286969 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1101 23:18:14.742276 286969 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
I1101 23:18:14.742493 286969 kubeadm.go:317] [mark-control-plane] Marking the node calico-231218 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1101 23:18:15.250749 286969 kubeadm.go:317] [bootstrap-token] Using token: 1yzswc.78a9e1zn8o332w5x
I1101 23:18:15.254627 286969 out.go:204] - Configuring RBAC rules ...
I1101 23:18:15.254798 286969 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1101 23:18:15.256948 286969 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1101 23:18:15.262131 286969 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1101 23:18:15.264361 286969 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
I1101 23:18:15.266538 286969 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1101 23:18:15.268688 286969 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1101 23:18:15.278759 286969 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1101 23:18:15.497359 286969 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
I1101 23:18:15.662976 286969 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
I1101 23:18:15.704958 286969 kubeadm.go:317]
I1101 23:18:15.705044 286969 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
I1101 23:18:15.705054 286969 kubeadm.go:317]
I1101 23:18:15.705137 286969 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
I1101 23:18:15.705149 286969 kubeadm.go:317]
I1101 23:18:15.705178 286969 kubeadm.go:317] mkdir -p $HOME/.kube
I1101 23:18:15.705241 286969 kubeadm.go:317] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1101 23:18:15.705296 286969 kubeadm.go:317] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1101 23:18:15.705301 286969 kubeadm.go:317]
I1101 23:18:15.705359 286969 kubeadm.go:317] Alternatively, if you are the root user, you can run:
I1101 23:18:15.705364 286969 kubeadm.go:317]
I1101 23:18:15.705416 286969 kubeadm.go:317] export KUBECONFIG=/etc/kubernetes/admin.conf
I1101 23:18:15.705421 286969 kubeadm.go:317]
I1101 23:18:15.705477 286969 kubeadm.go:317] You should now deploy a pod network to the cluster.
I1101 23:18:15.705558 286969 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1101 23:18:15.705631 286969 kubeadm.go:317] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1101 23:18:15.705636 286969 kubeadm.go:317]
I1101 23:18:15.705725 286969 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
I1101 23:18:15.705808 286969 kubeadm.go:317] and service account keys on each node and then running the following as root:
I1101 23:18:15.705814 286969 kubeadm.go:317]
I1101 23:18:15.705908 286969 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 1yzswc.78a9e1zn8o332w5x \
I1101 23:18:15.706018 286969 kubeadm.go:317] --discovery-token-ca-cert-hash sha256:17063cfc53da9cc092112b614dad07561572f5c9b75fad616e993e8768949285 \
I1101 23:18:15.706041 286969 kubeadm.go:317] --control-plane
I1101 23:18:15.706046 286969 kubeadm.go:317]
I1101 23:18:15.706137 286969 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
I1101 23:18:15.706142 286969 kubeadm.go:317]
I1101 23:18:15.706234 286969 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 1yzswc.78a9e1zn8o332w5x \
I1101 23:18:15.706343 286969 kubeadm.go:317] --discovery-token-ca-cert-hash sha256:17063cfc53da9cc092112b614dad07561572f5c9b75fad616e993e8768949285
I1101 23:18:15.709504 286969 kubeadm.go:317] W1101 23:18:01.066789 1196 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I1101 23:18:15.709796 286969 kubeadm.go:317] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
I1101 23:18:15.709940 286969 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1101 23:18:15.709968 286969 cni.go:95] Creating CNI manager for "calico"
I1101 23:18:15.711766 286969 out.go:177] * Configuring Calico (Container Networking Interface) ...
I1101 23:18:15.713244 286969 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
I1101 23:18:15.713267 286969 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202045 bytes)
I1101 23:18:15.816072 286969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I1101 23:18:17.373396 286969 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.557276993s)
I1101 23:18:17.373448 286969 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1101 23:18:17.373571 286969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 23:18:17.373583 286969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.27.1 minikube.k8s.io/commit=65bfd3dc2bf9824cf305579b01895f56b2ba9210 minikube.k8s.io/name=calico-231218 minikube.k8s.io/updated_at=2022_11_01T23_18_17_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I1101 23:18:17.382347 286969 ops.go:34] apiserver oom_adj: -16
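The oom_adj probe above confirms that the kube-apiserver runs with an OOM score adjustment of -16, so the kernel's OOM killer prefers to reclaim other processes first. A minimal Go sketch of the same procfs read; like the shell one-liner it mirrors, it assumes pgrep returns a single PID:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	path := "/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data))) // -16 in this run
}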
I1101 23:18:17.516597 286969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 23:18:18.110497 286969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 23:18:18.609970 286969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 23:18:19.110655 286969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 23:18:19.610717 286969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 23:18:20.109917 286969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 23:18:20.610481 286969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 23:18:21.110539 286969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 23:18:21.610249 286969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 23:18:22.110391 286969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 23:18:22.610092 286969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 23:18:23.110696 286969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 23:18:23.609948 286969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 23:18:24.110019 286969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 23:18:24.610230 286969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 23:18:25.110420 286969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 23:18:25.610570 286969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 23:18:26.110277 286969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 23:18:26.609766 286969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 23:18:27.109700 286969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 23:18:27.610144 286969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 23:18:28.110249 286969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1101 23:18:28.179393 286969 kubeadm.go:1067] duration metric: took 10.805875161s to wait for elevateKubeSystemPrivileges.
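The burst of "kubectl get sa default" calls above is minikube polling, roughly every 500ms, until kubeadm's controllers create the default ServiceAccount; here that took about 10.8s. A minimal client-go sketch of the same wait (the kubeconfig path is the one used by the commands above; the 2-minute cap is an assumption):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
		// Swallow errors so we keep retrying, just as the kubectl loop does.
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		return err == nil, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("default ServiceAccount is present")
}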
I1101 23:18:28.179437 286969 kubeadm.go:398] StartCluster complete in 27.196400054s
I1101 23:18:28.179459 286969 settings.go:142] acquiring lock: {Name:mkb825ade4fba92b7ccbaaf307df3563d5c155f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 23:18:28.179571 286969 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/15232-3679/kubeconfig
I1101 23:18:28.181261 286969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-3679/kubeconfig: {Name:mk576161567a3fa1fe4568c5b7cb0cf93c86cfb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 23:18:28.704529 286969 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-231218" rescaled to 1
I1101 23:18:28.704581 286969 start.go:212] Will wait 5m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I1101 23:18:28.707747 286969 out.go:177] * Verifying Kubernetes components...
I1101 23:18:28.704635 286969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1101 23:18:28.704649 286969 addons.go:412] enableAddons start: toEnable=map[], additional=[]
I1101 23:18:28.704810 286969 config.go:180] Loaded profile config "calico-231218": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1101 23:18:28.709263 286969 addons.go:65] Setting storage-provisioner=true in profile "calico-231218"
I1101 23:18:28.709302 286969 addons.go:153] Setting addon storage-provisioner=true in "calico-231218"
W1101 23:18:28.709309 286969 addons.go:162] addon storage-provisioner should already be in state true
I1101 23:18:28.709268 286969 addons.go:65] Setting default-storageclass=true in profile "calico-231218"
I1101 23:18:28.709334 286969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1101 23:18:28.709345 286969 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-231218"
I1101 23:18:28.709356 286969 host.go:66] Checking if "calico-231218" exists ...
I1101 23:18:28.709766 286969 cli_runner.go:164] Run: docker container inspect calico-231218 --format={{.State.Status}}
I1101 23:18:28.709841 286969 cli_runner.go:164] Run: docker container inspect calico-231218 --format={{.State.Status}}
I1101 23:18:28.723727 286969 node_ready.go:35] waiting up to 5m0s for node "calico-231218" to be "Ready" ...
I1101 23:18:28.727432 286969 node_ready.go:49] node "calico-231218" has status "Ready":"True"
I1101 23:18:28.727455 286969 node_ready.go:38] duration metric: took 3.696535ms waiting for node "calico-231218" to be "Ready" ...
I1101 23:18:28.727466 286969 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1101 23:18:28.748331 286969 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace to be "Ready" ...
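The "Ready" test behind these pod_ready lines is the standard PodReady condition on the pod's status. A minimal client-go sketch of the check repeated below (kubeconfig path as above; the pod name is the one from this run):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(cs *kubernetes.Clientset, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(isPodReady(cs, "calico-kube-controllers-7df895d496-pl6z6"))
}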
I1101 23:18:28.771896 286969 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1101 23:18:28.773363 286969 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1101 23:18:28.773383 286969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1101 23:18:28.773420 286969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-231218
I1101 23:18:28.801414 286969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:/home/jenkins/minikube-integration/15232-3679/.minikube/machines/calico-231218/id_rsa Username:docker}
I1101 23:18:28.802853 286969 addons.go:153] Setting addon default-storageclass=true in "calico-231218"
W1101 23:18:28.802879 286969 addons.go:162] addon default-storageclass should already be in state true
I1101 23:18:28.802918 286969 host.go:66] Checking if "calico-231218" exists ...
I1101 23:18:28.803447 286969 cli_runner.go:164] Run: docker container inspect calico-231218 --format={{.State.Status}}
I1101 23:18:28.843281 286969 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
I1101 23:18:28.843323 286969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1101 23:18:28.843379 286969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-231218
I1101 23:18:28.865781 286969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.76.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1101 23:18:28.875680 286969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:/home/jenkins/minikube-integration/15232-3679/.minikube/machines/calico-231218/id_rsa Username:docker}
I1101 23:18:28.921494 286969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1101 23:18:29.031185 286969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1101 23:18:30.233708 286969 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.76.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.367879089s)
I1101 23:18:30.233743 286969 start.go:826] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
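The sed pipeline that just completed splices a hosts plugin block into the CoreDNS Corefile immediately before its forward stanza, so pods can resolve host.minikube.internal to the gateway. Reconstructed from the sed expression above, the injected fragment is:

hosts {
   192.168.76.1 host.minikube.internal
   fallthrough
}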
I1101 23:18:30.439688 286969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.408464591s)
I1101 23:18:30.439743 286969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.518220125s)
I1101 23:18:30.442660 286969 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
I1101 23:18:30.443969 286969 addons.go:414] enableAddons completed in 1.739319524s
I1101 23:18:30.806487 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:18:33.306695 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:18:35.811620 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:18:38.265721 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:18:40.805105 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:18:43.264685 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:18:45.305528 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:18:47.806411 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:18:50.268166 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:18:52.764325 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:18:54.765888 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:18:57.266058 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:18:59.805197 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:19:02.265544 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:19:04.265941 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:19:06.265986 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:19:08.266507 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:19:10.765503 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:19:13.265227 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:19:15.765692 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:19:17.767605 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:19:20.305456 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:19:22.802005 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:19:25.265702 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:19:27.265784 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:19:29.268418 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:19:31.765553 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:19:33.802166 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:19:36.265371 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:19:38.303349 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:19:40.315379 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:19:42.802178 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:19:44.804754 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:19:47.266123 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:19:49.763369 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:19:51.804046 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:19:54.265430 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:19:56.265790 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:19:58.304196 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:20:00.306039 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:20:02.803679 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:20:04.805159 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:20:07.305075 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:20:09.764869 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:20:11.764968 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:20:13.811413 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:20:16.306169 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:20:18.765057 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:20:20.801969 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:20:23.265180 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:20:25.765379 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:20:27.806291 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:20:30.265052 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:20:32.265594 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:20:34.765116 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:20:37.265524 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:20:39.303037 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:20:41.808619 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:20:44.265110 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:20:46.802804 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:20:49.264227 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:20:51.265628 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:20:53.763793 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:20:55.803643 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:20:57.806238 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:20:59.806633 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:21:02.302131 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:21:04.305949 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:21:06.765719 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:21:09.303077 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:21:11.306273 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:21:13.802266 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:21:15.806389 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:21:18.265679 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:21:20.305426 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:21:22.764859 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:21:24.765315 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:21:27.264452 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:21:29.264535 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:21:31.803620 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:21:34.264330 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:21:36.265898 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:21:38.763947 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:21:40.765237 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:21:43.265065 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:21:45.804870 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:21:47.806602 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:21:50.307120 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:21:52.765739 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:21:55.264965 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:21:57.806506 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:22:00.302636 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:22:02.805880 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:22:05.302925 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:22:07.305873 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:22:09.803105 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:22:11.804983 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:22:14.306605 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:22:16.765016 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:22:18.806184 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:22:20.806590 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:22:23.302680 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:22:25.803051 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:22:28.265250 286969 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace has status "Ready":"False"
I1101 23:22:28.805975 286969 pod_ready.go:81] duration metric: took 4m0.057537115s waiting for pod "calico-kube-controllers-7df895d496-pl6z6" in "kube-system" namespace to be "Ready" ...
E1101 23:22:28.806008 286969 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
I1101 23:22:28.806020 286969 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-nxkqw" in "kube-system" namespace to be "Ready" ...
I1101 23:22:30.821501 286969 pod_ready.go:102] pod "calico-node-nxkqw" in "kube-system" namespace has status "Ready":"False"
[... the identical pod_ready.go:102 check repeats roughly every 2-2.5s from 23:22:33 through 23:26:27; "calico-node-nxkqw" never reports "Ready":"True" ...]
I1101 23:26:27.818167 286969 pod_ready.go:102] pod "calico-node-nxkqw" in "kube-system" namespace has status "Ready":"False"
I1101 23:26:28.824101 286969 pod_ready.go:81] duration metric: took 4m0.018067491s waiting for pod "calico-node-nxkqw" in "kube-system" namespace to be "Ready" ...
E1101 23:26:28.824129 286969 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
I1101 23:26:28.824146 286969 pod_ready.go:38] duration metric: took 8m0.096668714s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1101 23:26:28.826672 286969 out.go:177]
W1101 23:26:28.828252 286969 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
W1101 23:26:28.828273 286969 out.go:239] *
W1101 23:26:28.829347 286969 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1101 23:26:28.831045 286969 out.go:177]
** /stderr **
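
The repeated pod_ready.go:102 lines above are a fixed-interval readiness poll: minikube re-reads the pod every couple of seconds, checks its Ready condition, and gives up after the 4m per-pod budget (the "took 4m0.018067491s" duration metric) inside the larger 8m extra-wait window. A minimal client-go sketch of that pattern, assuming a reachable cluster via the default kubeconfig; this is illustrative, not minikube's actual pod_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod's Ready condition every 2s until it is True
// or the timeout expires, mirroring the cadence of the log lines above.
func waitPodReady(client kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient API errors as "not ready yet"
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				// Corresponds to: pod "..." has status "Ready":"False"
				fmt.Printf("pod %q has status %q:%q\n", name, cond.Type, cond.Status)
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil // no Ready condition reported yet
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	// Pod name and 4m budget taken from the failing run above.
	if err := waitPodReady(client, "kube-system", "calico-node-nxkqw", 4*time.Minute); err != nil {
		fmt.Println("wait failed:", err)
	}
}

Note that "timed out waiting for the condition" in the WaitExtra error is the standard wait.ErrWaitTimeout message from k8s.io/apimachinery, which suggests a loop of this shape: transient Get errors never fail the wait, only the expired timeout does.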
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (520.93s)
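
The wait loop only reports the Ready condition, never why calico-node-nxkqw stays NotReady; the container statuses and last termination states usually name the cause (image pull failure, crash loop, failed readiness probe). A hedged diagnostic sketch for a run like this one, assuming the stock Calico pod label k8s-app=calico-node and the default kubeconfig; `minikube logs --file=logs.txt`, as the error box suggests, is the other source:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// k8s-app=calico-node is the label the stock Calico manifest applies;
	// adjust if the deployed manifest differs.
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "k8s-app=calico-node",
	})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		fmt.Printf("pod %s phase=%s\n", pod.Name, pod.Status.Phase)
		for _, cs := range pod.Status.ContainerStatuses {
			fmt.Printf("  container %s ready=%v restarts=%d\n", cs.Name, cs.Ready, cs.RestartCount)
			if cs.State.Waiting != nil {
				fmt.Printf("    waiting: %s: %s\n", cs.State.Waiting.Reason, cs.State.Waiting.Message)
			}
			if cs.LastTerminationState.Terminated != nil {
				t := cs.LastTerminationState.Terminated
				fmt.Printf("    last exit: code=%d reason=%s\n", t.ExitCode, t.Reason)
			}
		}
	}
}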