=== RUN TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run: out/minikube-linux-amd64 start -p calico-192756 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker --container-runtime=docker
=== CONT TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-192756 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker --container-runtime=docker: exit status 80 (9m12.851677053s)
-- stdout --
* [calico-192756] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=15232
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/15232-83854/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-83854/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting control plane node calico-192756 in cluster calico-192756
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2048MB) ...
* Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring Calico (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: default-storageclass, storage-provisioner
-- /stdout --
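The stdout above ends with addons enabled, yet the run still exits 80 after 9m12s; with --wait=true --wait-timeout=5m that points at the component-readiness wait (node_ready/system_pods, most plausibly the Calico pods) rather than at provisioning itself. A minimal triage sketch, not part of the captured output; the k8s-app=calico-node label (taken from the upstream Calico manifest) and the calico-192756.log output filename are assumptions:

# List kube-system pods that never became Ready in the stuck profile
kubectl --context calico-192756 -n kube-system get pods -o wide
# Inspect the Calico node pods (label assumed from the upstream Calico manifest)
kubectl --context calico-192756 -n kube-system describe pods -l k8s-app=calico-node
# Capture the full minikube log bundle for the failing profile
out/minikube-linux-amd64 -p calico-192756 logs --file=calico-192756.log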
** stderr **
I1031 19:38:38.476725 447403 out.go:296] Setting OutFile to fd 1 ...
I1031 19:38:38.477030 447403 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 19:38:38.477045 447403 out.go:309] Setting ErrFile to fd 2...
I1031 19:38:38.477052 447403 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 19:38:38.477225 447403 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-83854/.minikube/bin
I1031 19:38:38.478055 447403 out.go:303] Setting JSON to false
I1031 19:38:38.480720 447403 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":12070,"bootTime":1667233048,"procs":1188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1031 19:38:38.480806 447403 start.go:126] virtualization: kvm guest
I1031 19:38:38.483672 447403 out.go:177] * [calico-192756] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
I1031 19:38:38.485391 447403 out.go:177] - MINIKUBE_LOCATION=15232
I1031 19:38:38.485357 447403 notify.go:220] Checking for updates...
I1031 19:38:38.488238 447403 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1031 19:38:38.490092 447403 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15232-83854/kubeconfig
I1031 19:38:38.491687 447403 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-83854/.minikube
I1031 19:38:38.493314 447403 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I1031 19:38:38.495224 447403 config.go:180] Loaded profile config "auto-192755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1031 19:38:38.495345 447403 config.go:180] Loaded profile config "cilium-192756": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1031 19:38:38.495459 447403 config.go:180] Loaded profile config "default-k8s-diff-port-193414": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1031 19:38:38.495533 447403 driver.go:365] Setting default libvirt URI to qemu:///system
I1031 19:38:38.562203 447403 docker.go:137] docker version: linux-20.10.21
I1031 19:38:38.562355 447403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1031 19:38:38.707305 447403 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-10-31 19:38:38.587674694 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1031 19:38:38.707429 447403 docker.go:254] overlay module found
I1031 19:38:38.709410 447403 out.go:177] * Using the docker driver based on user configuration
I1031 19:38:38.710783 447403 start.go:282] selected driver: docker
I1031 19:38:38.710808 447403 start.go:808] validating driver "docker" against <nil>
I1031 19:38:38.710836 447403 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1031 19:38:38.711813 447403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1031 19:38:38.823379 447403 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-10-31 19:38:38.733779648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1031 19:38:38.823511 447403 start_flags.go:303] no existing cluster config was found, will generate one from the flags
I1031 19:38:38.823674 447403 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1031 19:38:38.825767 447403 out.go:177] * Using Docker driver with root privileges
I1031 19:38:38.827038 447403 cni.go:95] Creating CNI manager for "calico"
I1031 19:38:38.827065 447403 start_flags.go:312] Found "Calico" CNI - setting NetworkPlugin=cni
I1031 19:38:38.827077 447403 start_flags.go:317] config:
{Name:calico-192756 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-192756 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1031 19:38:38.828620 447403 out.go:177] * Starting control plane node calico-192756 in cluster calico-192756
I1031 19:38:38.829935 447403 cache.go:120] Beginning downloading kic base image for docker with docker
I1031 19:38:38.831129 447403 out.go:177] * Pulling base image ...
I1031 19:38:38.832569 447403 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I1031 19:38:38.832648 447403 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
I1031 19:38:38.832671 447403 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15232-83854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
I1031 19:38:38.832686 447403 cache.go:57] Caching tarball of preloaded images
I1031 19:38:38.832976 447403 preload.go:174] Found /home/jenkins/minikube-integration/15232-83854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1031 19:38:38.833006 447403 cache.go:60] Finished verifying existence of preloaded tar for v1.25.3 on docker
I1031 19:38:38.833179 447403 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/config.json ...
I1031 19:38:38.833208 447403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/config.json: {Name:mk1fbaee01fb604c5a7cb6baebfdd10cad4352c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1031 19:38:38.859198 447403 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
I1031 19:38:38.859233 447403 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
I1031 19:38:38.859253 447403 cache.go:208] Successfully downloaded all kic artifacts
I1031 19:38:38.859291 447403 start.go:364] acquiring machines lock for calico-192756: {Name:mkd43fddb93ad23188cad5ddad13a9e2f2d5efec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1031 19:38:38.859469 447403 start.go:368] acquired machines lock for "calico-192756" in 143.683µs
I1031 19:38:38.859520 447403 start.go:93] Provisioning new machine with config: &{Name:calico-192756 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-192756 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I1031 19:38:38.859624 447403 start.go:125] createHost starting for "" (driver="docker")
I1031 19:38:38.862643 447403 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
I1031 19:38:38.863308 447403 start.go:159] libmachine.API.Create for "calico-192756" (driver="docker")
I1031 19:38:38.863354 447403 client.go:168] LocalClient.Create starting
I1031 19:38:38.863458 447403 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15232-83854/.minikube/certs/ca.pem
I1031 19:38:38.863509 447403 main.go:134] libmachine: Decoding PEM data...
I1031 19:38:38.863542 447403 main.go:134] libmachine: Parsing certificate...
I1031 19:38:38.863650 447403 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15232-83854/.minikube/certs/cert.pem
I1031 19:38:38.863683 447403 main.go:134] libmachine: Decoding PEM data...
I1031 19:38:38.863707 447403 main.go:134] libmachine: Parsing certificate...
I1031 19:38:38.864378 447403 cli_runner.go:164] Run: docker network inspect calico-192756 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1031 19:38:38.893885 447403 cli_runner.go:211] docker network inspect calico-192756 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1031 19:38:38.893978 447403 network_create.go:272] running [docker network inspect calico-192756] to gather additional debugging logs...
I1031 19:38:38.894008 447403 cli_runner.go:164] Run: docker network inspect calico-192756
W1031 19:38:38.923971 447403 cli_runner.go:211] docker network inspect calico-192756 returned with exit code 1
I1031 19:38:38.924013 447403 network_create.go:275] error running [docker network inspect calico-192756]: docker network inspect calico-192756: exit status 1
stdout:
[]
stderr:
Error: No such network: calico-192756
I1031 19:38:38.924030 447403 network_create.go:277] output of [docker network inspect calico-192756]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: calico-192756
** /stderr **
I1031 19:38:38.924099 447403 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1031 19:38:38.961573 447403 network.go:246] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-84276c375f30 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:ae:de:f5:41}}
I1031 19:38:38.962773 447403 network.go:246] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-4b8cab9433a3 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:44:b1:89:d6}}
I1031 19:38:38.963613 447403 network.go:246] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-16c4295c5998 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:36:b3:15:73}}
I1031 19:38:38.964140 447403 network.go:246] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName:br-5557d28bfe80 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:c8:31:49:df}}
I1031 19:38:38.964896 447403 network.go:295] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.85.0:0xc0004b6058] misses:0}
I1031 19:38:38.964934 447403 network.go:241] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I1031 19:38:38.964945 447403 network_create.go:115] attempt to create docker network calico-192756 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I1031 19:38:38.964998 447403 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-192756 calico-192756
I1031 19:38:39.038094 447403 network_create.go:99] docker network calico-192756 192.168.85.0/24 created
I1031 19:38:39.038138 447403 kic.go:106] calculated static IP "192.168.85.2" for the "calico-192756" container
I1031 19:38:39.038192 447403 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1031 19:38:39.071730 447403 cli_runner.go:164] Run: docker volume create calico-192756 --label name.minikube.sigs.k8s.io=calico-192756 --label created_by.minikube.sigs.k8s.io=true
I1031 19:38:39.100755 447403 oci.go:103] Successfully created a docker volume calico-192756
I1031 19:38:39.100851 447403 cli_runner.go:164] Run: docker run --rm --name calico-192756-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-192756 --entrypoint /usr/bin/test -v calico-192756:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
I1031 19:38:39.767270 447403 oci.go:107] Successfully prepared a docker volume calico-192756
I1031 19:38:39.767342 447403 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I1031 19:38:39.767376 447403 kic.go:179] Starting extracting preloaded images to volume ...
I1031 19:38:39.767448 447403 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15232-83854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-192756:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
I1031 19:38:43.605407 447403 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15232-83854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-192756:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir: (3.837842321s)
I1031 19:38:43.605463 447403 kic.go:188] duration metric: took 3.838084 seconds to extract preloaded images to volume
W1031 19:38:43.605630 447403 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1031 19:38:43.605749 447403 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1031 19:38:43.756198 447403 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-192756 --name calico-192756 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-192756 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-192756 --network calico-192756 --ip 192.168.85.2 --volume calico-192756:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
I1031 19:38:44.390138 447403 cli_runner.go:164] Run: docker container inspect calico-192756 --format={{.State.Running}}
I1031 19:38:44.423377 447403 cli_runner.go:164] Run: docker container inspect calico-192756 --format={{.State.Status}}
I1031 19:38:44.481520 447403 cli_runner.go:164] Run: docker exec calico-192756 stat /var/lib/dpkg/alternatives/iptables
I1031 19:38:44.547128 447403 oci.go:144] the created container "calico-192756" has a running status.
I1031 19:38:44.547164 447403 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/15232-83854/.minikube/machines/calico-192756/id_rsa...
I1031 19:38:44.617722 447403 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15232-83854/.minikube/machines/calico-192756/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1031 19:38:44.728011 447403 cli_runner.go:164] Run: docker container inspect calico-192756 --format={{.State.Status}}
I1031 19:38:44.766780 447403 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1031 19:38:44.766808 447403 kic_runner.go:114] Args: [docker exec --privileged calico-192756 chown docker:docker /home/docker/.ssh/authorized_keys]
I1031 19:38:44.857743 447403 cli_runner.go:164] Run: docker container inspect calico-192756 --format={{.State.Status}}
I1031 19:38:44.894447 447403 machine.go:88] provisioning docker machine ...
I1031 19:38:44.894494 447403 ubuntu.go:169] provisioning hostname "calico-192756"
I1031 19:38:44.894554 447403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-192756
I1031 19:38:44.921593 447403 main.go:134] libmachine: Using SSH client type: native
I1031 19:38:44.921842 447403 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49434 <nil> <nil>}
I1031 19:38:44.921872 447403 main.go:134] libmachine: About to run SSH command:
sudo hostname calico-192756 && echo "calico-192756" | sudo tee /etc/hostname
I1031 19:38:44.922581 447403 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33398->127.0.0.1:49434: read: connection reset by peer
I1031 19:38:48.054839 447403 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-192756
I1031 19:38:48.054926 447403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-192756
I1031 19:38:48.084118 447403 main.go:134] libmachine: Using SSH client type: native
I1031 19:38:48.084317 447403 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49434 <nil> <nil>}
I1031 19:38:48.084351 447403 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\scalico-192756' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-192756/g' /etc/hosts;
else
echo '127.0.1.1 calico-192756' | sudo tee -a /etc/hosts;
fi
fi
I1031 19:38:48.200781 447403 main.go:134] libmachine: SSH cmd err, output: <nil>:
I1031 19:38:48.200817 447403 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15232-83854/.minikube CaCertPath:/home/jenkins/minikube-integration/15232-83854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15232-83854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15232-83854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15232-83854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15232-83854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15232-83854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15232-83854/.minikube}
I1031 19:38:48.200849 447403 ubuntu.go:177] setting up certificates
I1031 19:38:48.200861 447403 provision.go:83] configureAuth start
I1031 19:38:48.200921 447403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-192756
I1031 19:38:48.228209 447403 provision.go:138] copyHostCerts
I1031 19:38:48.228292 447403 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-83854/.minikube/ca.pem, removing ...
I1031 19:38:48.228307 447403 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-83854/.minikube/ca.pem
I1031 19:38:48.228400 447403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-83854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15232-83854/.minikube/ca.pem (1082 bytes)
I1031 19:38:48.228509 447403 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-83854/.minikube/cert.pem, removing ...
I1031 19:38:48.228525 447403 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-83854/.minikube/cert.pem
I1031 19:38:48.228566 447403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-83854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15232-83854/.minikube/cert.pem (1123 bytes)
I1031 19:38:48.228695 447403 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-83854/.minikube/key.pem, removing ...
I1031 19:38:48.228716 447403 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-83854/.minikube/key.pem
I1031 19:38:48.228759 447403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-83854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15232-83854/.minikube/key.pem (1675 bytes)
I1031 19:38:48.228841 447403 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15232-83854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15232-83854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15232-83854/.minikube/certs/ca-key.pem org=jenkins.calico-192756 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube calico-192756]
I1031 19:38:48.333262 447403 provision.go:172] copyRemoteCerts
I1031 19:38:48.333349 447403 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1031 19:38:48.333406 447403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-192756
I1031 19:38:48.361992 447403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49434 SSHKeyPath:/home/jenkins/minikube-integration/15232-83854/.minikube/machines/calico-192756/id_rsa Username:docker}
I1031 19:38:48.449840 447403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-83854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1031 19:38:48.468669 447403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-83854/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
I1031 19:38:48.489898 447403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-83854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1031 19:38:48.509721 447403 provision.go:86] duration metric: configureAuth took 308.846772ms
I1031 19:38:48.509751 447403 ubuntu.go:193] setting minikube options for container-runtime
I1031 19:38:48.509903 447403 config.go:180] Loaded profile config "calico-192756": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1031 19:38:48.509952 447403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-192756
I1031 19:38:48.538282 447403 main.go:134] libmachine: Using SSH client type: native
I1031 19:38:48.538463 447403 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49434 <nil> <nil>}
I1031 19:38:48.538478 447403 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1031 19:38:48.661098 447403 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
I1031 19:38:48.661123 447403 ubuntu.go:71] root file system type: overlay
I1031 19:38:48.661354 447403 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I1031 19:38:48.661433 447403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-192756
I1031 19:38:48.688804 447403 main.go:134] libmachine: Using SSH client type: native
I1031 19:38:48.689023 447403 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49434 <nil> <nil>}
I1031 19:38:48.689133 447403 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1031 19:38:48.814767 447403 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I1031 19:38:48.814865 447403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-192756
I1031 19:38:48.843834 447403 main.go:134] libmachine: Using SSH client type: native
I1031 19:38:48.844048 447403 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49434 <nil> <nil>}
I1031 19:38:48.844079 447403 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1031 19:38:51.919155 447403 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2022-10-18 18:18:12.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2022-10-31 19:38:48.810667253 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I1031 19:38:51.919202 447403 machine.go:91] provisioned docker machine in 7.02472779s
I1031 19:38:51.919214 447403 client.go:171] LocalClient.Create took 13.055852381s
I1031 19:38:51.919228 447403 start.go:167] duration metric: libmachine.API.Create for "calico-192756" took 13.055920759s
I1031 19:38:51.919238 447403 start.go:300] post-start starting for "calico-192756" (driver="docker")
I1031 19:38:51.919247 447403 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1031 19:38:51.919318 447403 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1031 19:38:51.919372 447403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-192756
I1031 19:38:51.956662 447403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49434 SSHKeyPath:/home/jenkins/minikube-integration/15232-83854/.minikube/machines/calico-192756/id_rsa Username:docker}
I1031 19:38:52.052885 447403 ssh_runner.go:195] Run: cat /etc/os-release
I1031 19:38:52.055840 447403 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1031 19:38:52.055866 447403 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1031 19:38:52.055880 447403 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1031 19:38:52.055888 447403 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I1031 19:38:52.055900 447403 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-83854/.minikube/addons for local assets ...
I1031 19:38:52.055959 447403 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-83854/.minikube/files for local assets ...
I1031 19:38:52.056052 447403 filesync.go:149] local asset: /home/jenkins/minikube-integration/15232-83854/.minikube/files/etc/ssl/certs/903062.pem -> 903062.pem in /etc/ssl/certs
I1031 19:38:52.056170 447403 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1031 19:38:52.063443 447403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-83854/.minikube/files/etc/ssl/certs/903062.pem --> /etc/ssl/certs/903062.pem (1708 bytes)
I1031 19:38:52.081886 447403 start.go:303] post-start completed in 162.633933ms
I1031 19:38:52.082257 447403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-192756
I1031 19:38:52.107701 447403 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/config.json ...
I1031 19:38:52.107951 447403 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1031 19:38:52.108008 447403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-192756
I1031 19:38:52.136805 447403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49434 SSHKeyPath:/home/jenkins/minikube-integration/15232-83854/.minikube/machines/calico-192756/id_rsa Username:docker}
I1031 19:38:52.225277 447403 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1031 19:38:52.229862 447403 start.go:128] duration metric: createHost completed in 13.370222028s
I1031 19:38:52.229887 447403 start.go:83] releasing machines lock for "calico-192756", held for 13.370399938s
I1031 19:38:52.229966 447403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-192756
I1031 19:38:52.258924 447403 ssh_runner.go:195] Run: systemctl --version
I1031 19:38:52.258973 447403 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1031 19:38:52.258987 447403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-192756
I1031 19:38:52.259028 447403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-192756
I1031 19:38:52.286178 447403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49434 SSHKeyPath:/home/jenkins/minikube-integration/15232-83854/.minikube/machines/calico-192756/id_rsa Username:docker}
I1031 19:38:52.286810 447403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49434 SSHKeyPath:/home/jenkins/minikube-integration/15232-83854/.minikube/machines/calico-192756/id_rsa Username:docker}
I1031 19:38:52.418011 447403 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1031 19:38:52.427170 447403 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
I1031 19:38:52.442170 447403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1031 19:38:52.543191 447403 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I1031 19:38:52.672791 447403 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1031 19:38:52.687517 447403 cruntime.go:273] skipping containerd shutdown because we are bound to it
I1031 19:38:52.687592 447403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1031 19:38:52.699400 447403 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1031 19:38:52.713913 447403 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1031 19:38:52.822642 447403 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1031 19:38:52.925988 447403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1031 19:38:53.027595 447403 ssh_runner.go:195] Run: sudo systemctl restart docker
I1031 19:38:53.268938 447403 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1031 19:38:53.362439 447403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1031 19:38:53.443826 447403 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I1031 19:38:53.453957 447403 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1031 19:38:53.454036 447403 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I1031 19:38:53.457742 447403 start.go:472] Will wait 60s for crictl version
I1031 19:38:53.457800 447403 ssh_runner.go:195] Run: sudo crictl version
I1031 19:38:53.491704 447403 start.go:481] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.20
RuntimeApiVersion: 1.41.0
I1031 19:38:53.491785 447403 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1031 19:38:53.527531 447403 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1031 19:38:53.564167 447403 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
I1031 19:38:53.564272 447403 cli_runner.go:164] Run: docker network inspect calico-192756 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1031 19:38:53.592017 447403 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I1031 19:38:53.596918 447403 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1031 19:38:53.609552 447403 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I1031 19:38:53.609610 447403 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1031 19:38:53.638238 447403 docker.go:613] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1031 19:38:53.638273 447403 docker.go:543] Images already preloaded, skipping extraction
I1031 19:38:53.638336 447403 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1031 19:38:53.665553 447403 docker.go:613] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1031 19:38:53.665585 447403 cache_images.go:84] Images are preloaded, skipping loading
I1031 19:38:53.665639 447403 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1031 19:38:53.742144 447403 cni.go:95] Creating CNI manager for "calico"
I1031 19:38:53.742173 447403 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1031 19:38:53.742190 447403 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-192756 NodeName:calico-192756 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
I1031 19:38:53.742335 447403 kubeadm.go:161] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "calico-192756"
kubeletExtraArgs:
node-ip: 192.168.85.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.25.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1031 19:38:53.742447 447403 kubeadm.go:962] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=calico-192756 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.25.3 ClusterName:calico-192756 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
I1031 19:38:53.742496 447403 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
I1031 19:38:53.750252 447403 binaries.go:44] Found k8s binaries, skipping transfer
I1031 19:38:53.750329 447403 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1031 19:38:53.757847 447403 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (475 bytes)
I1031 19:38:53.772053 447403 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1031 19:38:53.786388 447403 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2036 bytes)
I1031 19:38:53.800694 447403 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I1031 19:38:53.803979 447403 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
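(Editor's note: the bash one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the fresh mapping. A quick check of the result, again assuming this run's profile name:

$ minikube ssh -p calico-192756 -- grep control-plane.minikube.internal /etc/hosts
192.168.85.2 control-plane.minikube.internal
)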
I1031 19:38:53.814913 447403 certs.go:54] Setting up /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756 for IP: 192.168.85.2
I1031 19:38:53.815065 447403 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15232-83854/.minikube/ca.key
I1031 19:38:53.815129 447403 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15232-83854/.minikube/proxy-client-ca.key
I1031 19:38:53.815185 447403 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/client.key
I1031 19:38:53.815203 447403 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/client.crt with IP's: []
I1031 19:38:53.962471 447403 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/client.crt ...
I1031 19:38:53.962502 447403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/client.crt: {Name:mk1db0d9eed51631a4214b7f4f18e615ab0c7751 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1031 19:38:53.962707 447403 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/client.key ...
I1031 19:38:53.962720 447403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/client.key: {Name:mkac1aaf81101484d75ce7d20d24dc9426f288dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1031 19:38:53.962810 447403 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/apiserver.key.43b9df8c
I1031 19:38:53.962825 447403 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
I1031 19:38:54.211292 447403 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/apiserver.crt.43b9df8c ...
I1031 19:38:54.211332 447403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/apiserver.crt.43b9df8c: {Name:mk41092097ab4baae6d5d36ec1cf282bde78e686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1031 19:38:54.211530 447403 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/apiserver.key.43b9df8c ...
I1031 19:38:54.211544 447403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/apiserver.key.43b9df8c: {Name:mk12517a427469fc1651986d0fe82721a1168727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1031 19:38:54.211636 447403 certs.go:320] copying /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/apiserver.crt.43b9df8c -> /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/apiserver.crt
I1031 19:38:54.211706 447403 certs.go:324] copying /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/apiserver.key.43b9df8c -> /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/apiserver.key
I1031 19:38:54.211765 447403 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/proxy-client.key
I1031 19:38:54.211784 447403 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/proxy-client.crt with IP's: []
I1031 19:38:54.510522 447403 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/proxy-client.crt ...
I1031 19:38:54.510561 447403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/proxy-client.crt: {Name:mk70bb54b454b043763825d03dacfce249b50f69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1031 19:38:54.510789 447403 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/proxy-client.key ...
I1031 19:38:54.510809 447403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/proxy-client.key: {Name:mk8290845f75f05c504893984219746dc5a1aff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1031 19:38:54.511011 447403 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-83854/.minikube/certs/home/jenkins/minikube-integration/15232-83854/.minikube/certs/90306.pem (1338 bytes)
W1031 19:38:54.511061 447403 certs.go:384] ignoring /home/jenkins/minikube-integration/15232-83854/.minikube/certs/home/jenkins/minikube-integration/15232-83854/.minikube/certs/90306_empty.pem, impossibly tiny 0 bytes
I1031 19:38:54.511079 447403 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-83854/.minikube/certs/home/jenkins/minikube-integration/15232-83854/.minikube/certs/ca-key.pem (1679 bytes)
I1031 19:38:54.511115 447403 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-83854/.minikube/certs/home/jenkins/minikube-integration/15232-83854/.minikube/certs/ca.pem (1082 bytes)
I1031 19:38:54.511149 447403 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-83854/.minikube/certs/home/jenkins/minikube-integration/15232-83854/.minikube/certs/cert.pem (1123 bytes)
I1031 19:38:54.511181 447403 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-83854/.minikube/certs/home/jenkins/minikube-integration/15232-83854/.minikube/certs/key.pem (1675 bytes)
I1031 19:38:54.511234 447403 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-83854/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15232-83854/.minikube/files/etc/ssl/certs/903062.pem (1708 bytes)
I1031 19:38:54.511835 447403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1031 19:38:54.535878 447403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1031 19:38:54.557965 447403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1031 19:38:54.578881 447403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-83854/.minikube/profiles/calico-192756/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1031 19:38:54.597835 447403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-83854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1031 19:38:54.615644 447403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-83854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1031 19:38:54.633073 447403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-83854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1031 19:38:54.651304 447403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-83854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1031 19:38:54.670168 447403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-83854/.minikube/certs/90306.pem --> /usr/share/ca-certificates/90306.pem (1338 bytes)
I1031 19:38:54.688876 447403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-83854/.minikube/files/etc/ssl/certs/903062.pem --> /usr/share/ca-certificates/903062.pem (1708 bytes)
I1031 19:38:54.706026 447403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-83854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1031 19:38:54.724357 447403 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1031 19:38:54.737911 447403 ssh_runner.go:195] Run: openssl version
I1031 19:38:54.743836 447403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/90306.pem && ln -fs /usr/share/ca-certificates/90306.pem /etc/ssl/certs/90306.pem"
I1031 19:38:54.753531 447403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/90306.pem
I1031 19:38:54.757612 447403 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Oct 31 19:04 /usr/share/ca-certificates/90306.pem
I1031 19:38:54.757665 447403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/90306.pem
I1031 19:38:54.762746 447403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/90306.pem /etc/ssl/certs/51391683.0"
I1031 19:38:54.770754 447403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/903062.pem && ln -fs /usr/share/ca-certificates/903062.pem /etc/ssl/certs/903062.pem"
I1031 19:38:54.779499 447403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/903062.pem
I1031 19:38:54.782770 447403 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Oct 31 19:04 /usr/share/ca-certificates/903062.pem
I1031 19:38:54.782831 447403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/903062.pem
I1031 19:38:54.788542 447403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/903062.pem /etc/ssl/certs/3ec20f2e.0"
I1031 19:38:54.796732 447403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1031 19:38:54.805432 447403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1031 19:38:54.808996 447403 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Oct 31 19:00 /usr/share/ca-certificates/minikubeCA.pem
I1031 19:38:54.809046 447403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1031 19:38:54.813922 447403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
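(Editor's note: the test/ln pairs above follow OpenSSL's hashed-directory convention: each CA in /etc/ssl/certs is reachable through a symlink named <subject-hash>.0, where the hash is what `openssl x509 -hash -noout` prints, b5213941 for minikubeCA.pem here. A sketch of the same step done by hand, for an arbitrary cert.pem:

$ h=$(openssl x509 -hash -noout -in cert.pem)    # subject hash, e.g. b5213941
$ sudo ln -fs "$(pwd)/cert.pem" "/etc/ssl/certs/${h}.0"
)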
I1031 19:38:54.821090 447403 kubeadm.go:396] StartCluster: {Name:calico-192756 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-192756 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1031 19:38:54.821206 447403 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1031 19:38:54.845455 447403 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1031 19:38:54.852852 447403 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1031 19:38:54.861468 447403 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I1031 19:38:54.861521 447403 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1031 19:38:54.870373 447403 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1031 19:38:54.870413 447403 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1031 19:38:54.933850 447403 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
I1031 19:38:54.933936 447403 kubeadm.go:317] [preflight] Running pre-flight checks
I1031 19:38:54.975244 447403 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
I1031 19:38:54.975352 447403 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
I1031 19:38:54.975409 447403 kubeadm.go:317] OS: Linux
I1031 19:38:54.975487 447403 kubeadm.go:317] CGROUPS_CPU: enabled
I1031 19:38:54.975550 447403 kubeadm.go:317] CGROUPS_CPUACCT: enabled
I1031 19:38:54.975630 447403 kubeadm.go:317] CGROUPS_CPUSET: enabled
I1031 19:38:54.975689 447403 kubeadm.go:317] CGROUPS_DEVICES: enabled
I1031 19:38:54.975746 447403 kubeadm.go:317] CGROUPS_FREEZER: enabled
I1031 19:38:54.975800 447403 kubeadm.go:317] CGROUPS_MEMORY: enabled
I1031 19:38:54.975838 447403 kubeadm.go:317] CGROUPS_PIDS: enabled
I1031 19:38:54.975879 447403 kubeadm.go:317] CGROUPS_HUGETLB: enabled
I1031 19:38:54.975925 447403 kubeadm.go:317] CGROUPS_BLKIO: enabled
I1031 19:38:55.059440 447403 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
I1031 19:38:55.059602 447403 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1031 19:38:55.059725 447403 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1031 19:38:55.246773 447403 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1031 19:38:55.249676 447403 out.go:204] - Generating certificates and keys ...
I1031 19:38:55.249796 447403 kubeadm.go:317] [certs] Using existing ca certificate authority
I1031 19:38:55.249897 447403 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
I1031 19:38:55.469481 447403 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
I1031 19:38:55.900121 447403 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
I1031 19:38:55.971606 447403 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
I1031 19:38:56.311342 447403 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
I1031 19:38:56.572688 447403 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
I1031 19:38:56.573137 447403 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [calico-192756 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I1031 19:38:56.884906 447403 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
I1031 19:38:56.885120 447403 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [calico-192756 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I1031 19:38:56.966566 447403 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
I1031 19:38:57.108377 447403 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
I1031 19:38:57.270502 447403 kubeadm.go:317] [certs] Generating "sa" key and public key
I1031 19:38:57.270630 447403 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1031 19:38:57.682167 447403 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
I1031 19:38:57.794672 447403 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1031 19:38:57.902759 447403 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1031 19:38:57.991617 447403 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1031 19:38:58.004335 447403 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1031 19:38:58.005442 447403 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1031 19:38:58.005506 447403 kubeadm.go:317] [kubelet-start] Starting the kubelet
I1031 19:38:58.100646 447403 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1031 19:38:58.103017 447403 out.go:204] - Booting up control plane ...
I1031 19:38:58.103168 447403 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1031 19:38:58.103535 447403 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1031 19:38:58.104441 447403 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1031 19:38:58.105329 447403 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1031 19:38:58.107022 447403 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1031 19:39:07.610153 447403 kubeadm.go:317] [apiclient] All control plane components are healthy after 9.503060 seconds
I1031 19:39:07.610323 447403 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1031 19:39:07.620162 447403 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1031 19:39:08.138608 447403 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
I1031 19:39:08.138792 447403 kubeadm.go:317] [mark-control-plane] Marking the node calico-192756 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1031 19:39:08.647404 447403 kubeadm.go:317] [bootstrap-token] Using token: 6sqgkq.ajaenjlv2l9e7c4i
I1031 19:39:08.648968 447403 out.go:204] - Configuring RBAC rules ...
I1031 19:39:08.649117 447403 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1031 19:39:08.654203 447403 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1031 19:39:08.661195 447403 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1031 19:39:08.664459 447403 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1031 19:39:08.667738 447403 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1031 19:39:08.670728 447403 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1031 19:39:08.681988 447403 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1031 19:39:08.915374 447403 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
I1031 19:39:09.058916 447403 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
I1031 19:39:09.060107 447403 kubeadm.go:317]
I1031 19:39:09.060191 447403 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
I1031 19:39:09.060200 447403 kubeadm.go:317]
I1031 19:39:09.060290 447403 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
I1031 19:39:09.060296 447403 kubeadm.go:317]
I1031 19:39:09.060324 447403 kubeadm.go:317] mkdir -p $HOME/.kube
I1031 19:39:09.060397 447403 kubeadm.go:317] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1031 19:39:09.060457 447403 kubeadm.go:317] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1031 19:39:09.060463 447403 kubeadm.go:317]
I1031 19:39:09.060526 447403 kubeadm.go:317] Alternatively, if you are the root user, you can run:
I1031 19:39:09.060533 447403 kubeadm.go:317]
I1031 19:39:09.060664 447403 kubeadm.go:317] export KUBECONFIG=/etc/kubernetes/admin.conf
I1031 19:39:09.060674 447403 kubeadm.go:317]
I1031 19:39:09.060718 447403 kubeadm.go:317] You should now deploy a pod network to the cluster.
I1031 19:39:09.060782 447403 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1031 19:39:09.060842 447403 kubeadm.go:317] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1031 19:39:09.060848 447403 kubeadm.go:317]
I1031 19:39:09.060921 447403 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
I1031 19:39:09.060977 447403 kubeadm.go:317] and service account keys on each node and then running the following as root:
I1031 19:39:09.060981 447403 kubeadm.go:317]
I1031 19:39:09.061090 447403 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 6sqgkq.ajaenjlv2l9e7c4i \
I1031 19:39:09.061202 447403 kubeadm.go:317] --discovery-token-ca-cert-hash sha256:87d4dbe65a7a97c9c70c41e4508d21451f4bba11371b48a2dad7c2d680e2f7df \
I1031 19:39:09.061233 447403 kubeadm.go:317] --control-plane
I1031 19:39:09.061239 447403 kubeadm.go:317]
I1031 19:39:09.061334 447403 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
I1031 19:39:09.061339 447403 kubeadm.go:317]
I1031 19:39:09.061440 447403 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 6sqgkq.ajaenjlv2l9e7c4i \
I1031 19:39:09.061557 447403 kubeadm.go:317] --discovery-token-ca-cert-hash sha256:87d4dbe65a7a97c9c70c41e4508d21451f4bba11371b48a2dad7c2d680e2f7df
I1031 19:39:09.066699 447403 kubeadm.go:317] W1031 19:38:54.924960 1193 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I1031 19:39:09.066909 447403 kubeadm.go:317] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
I1031 19:39:09.067117 447403 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1031 19:39:09.067161 447403 cni.go:95] Creating CNI manager for "calico"
I1031 19:39:09.069204 447403 out.go:177] * Configuring Calico (Container Networking Interface) ...
I1031 19:39:09.070925 447403 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
I1031 19:39:09.070958 447403 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202045 bytes)
I1031 19:39:09.144394 447403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I1031 19:39:10.746061 447403 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.601612943s)
I1031 19:39:10.746113 447403 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1031 19:39:10.746187 447403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1031 19:39:10.746234 447403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.27.1 minikube.k8s.io/commit=1c73d673499e72567c9d9cb6c201ec071d452750 minikube.k8s.io/name=calico-192756 minikube.k8s.io/updated_at=2022_10_31T19_39_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I1031 19:39:10.887108 447403 ops.go:34] apiserver oom_adj: -16
I1031 19:39:10.887215 447403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1031 19:39:11.497259 447403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1031 19:39:11.996777 447403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1031 19:39:12.497374 447403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1031 19:39:12.997419 447403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1031 19:39:13.496891 447403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1031 19:39:13.997479 447403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1031 19:39:14.497484 447403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1031 19:39:14.997563 447403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1031 19:39:15.497217 447403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1031 19:39:15.996737 447403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1031 19:39:16.496748 447403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1031 19:39:16.997472 447403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1031 19:39:17.497423 447403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1031 19:39:17.997457 447403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1031 19:39:18.496931 447403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1031 19:39:18.997588 447403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1031 19:39:19.496716 447403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1031 19:39:19.997655 447403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1031 19:39:20.497409 447403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1031 19:39:20.997087 447403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1031 19:39:21.077009 447403 kubeadm.go:1067] duration metric: took 10.330872038s to wait for elevateKubeSystemPrivileges.
I1031 19:39:21.077047 447403 kubeadm.go:398] StartCluster complete in 26.255965213s
I1031 19:39:21.077070 447403 settings.go:142] acquiring lock: {Name:mk41368cda7c143d6494536997104dc082342409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1031 19:39:21.077204 447403 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/15232-83854/kubeconfig
I1031 19:39:21.078791 447403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15232-83854/kubeconfig: {Name:mk42da630ccd6e4d7442a2c56a735e443b41a861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1031 19:39:21.597994 447403 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-192756" rescaled to 1
I1031 19:39:21.598066 447403 start.go:212] Will wait 5m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I1031 19:39:21.599747 447403 out.go:177] * Verifying Kubernetes components...
I1031 19:39:21.598207 447403 addons.go:412] enableAddons start: toEnable=map[], additional=[]
I1031 19:39:21.598403 447403 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1031 19:39:21.598410 447403 config.go:180] Loaded profile config "calico-192756": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1031 19:39:21.601057 447403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1031 19:39:21.601227 447403 addons.go:65] Setting storage-provisioner=true in profile "calico-192756"
I1031 19:39:21.601255 447403 addons.go:153] Setting addon storage-provisioner=true in "calico-192756"
W1031 19:39:21.601264 447403 addons.go:162] addon storage-provisioner should already be in state true
I1031 19:39:21.601329 447403 host.go:66] Checking if "calico-192756" exists ...
I1031 19:39:21.601581 447403 addons.go:65] Setting default-storageclass=true in profile "calico-192756"
I1031 19:39:21.601611 447403 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-192756"
I1031 19:39:21.601939 447403 cli_runner.go:164] Run: docker container inspect calico-192756 --format={{.State.Status}}
I1031 19:39:21.602025 447403 cli_runner.go:164] Run: docker container inspect calico-192756 --format={{.State.Status}}
I1031 19:39:21.631944 447403 node_ready.go:35] waiting up to 5m0s for node "calico-192756" to be "Ready" ...
I1031 19:39:21.637956 447403 node_ready.go:49] node "calico-192756" has status "Ready":"True"
I1031 19:39:21.637981 447403 node_ready.go:38] duration metric: took 5.999466ms waiting for node "calico-192756" to be "Ready" ...
I1031 19:39:21.637992 447403 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1031 19:39:21.648655 447403 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-72zqs" in "kube-system" namespace to be "Ready" ...
I1031 19:39:21.653747 447403 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1031 19:39:21.657036 447403 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1031 19:39:21.657065 447403 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1031 19:39:21.657146 447403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-192756
I1031 19:39:21.659236 447403 addons.go:153] Setting addon default-storageclass=true in "calico-192756"
W1031 19:39:21.659263 447403 addons.go:162] addon default-storageclass should already be in state true
I1031 19:39:21.659298 447403 host.go:66] Checking if "calico-192756" exists ...
I1031 19:39:21.659835 447403 cli_runner.go:164] Run: docker container inspect calico-192756 --format={{.State.Status}}
I1031 19:39:21.691041 447403 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
I1031 19:39:21.691064 447403 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1031 19:39:21.691112 447403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-192756
I1031 19:39:21.692664 447403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49434 SSHKeyPath:/home/jenkins/minikube-integration/15232-83854/.minikube/machines/calico-192756/id_rsa Username:docker}
I1031 19:39:21.717625 447403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49434 SSHKeyPath:/home/jenkins/minikube-integration/15232-83854/.minikube/machines/calico-192756/id_rsa Username:docker}
I1031 19:39:21.730714 447403 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.85.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1031 19:39:22.057273 447403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1031 19:39:22.061152 447403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1031 19:39:23.737770 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:39:24.245819 447403 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.85.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.515052004s)
I1031 19:39:24.245860 447403 start.go:826] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS
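(Editor's note: the completed sed pipeline splices a hosts block into the CoreDNS Corefile so pods can resolve host.minikube.internal to 192.168.85.1. A hedged way to confirm the injected block, assuming kubectl is pointed at this cluster's context:

$ kubectl --context calico-192756 -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
)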
I1031 19:39:24.352730 447403 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.291541044s)
I1031 19:39:24.352743 447403 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.295427193s)
I1031 19:39:24.354725 447403 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
I1031 19:39:24.356304 447403 addons.go:414] enableAddons completed in 2.758097077s
I1031 19:39:26.166875 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:39:28.167492 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:39:30.668939 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:39:32.671907 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:39:35.167487 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:39:37.666658 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:39:40.169781 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:39:42.672531 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:39:45.168042 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:39:47.667098 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:39:50.166988 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:39:52.167506 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:39:54.667449 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:39:57.166705 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:39:59.169255 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:40:01.169630 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:40:03.667572 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:40:06.166637 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:40:08.167009 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:40:10.167504 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:40:12.667706 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:40:15.168065 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:40:17.665975 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:40:19.667251 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:40:22.166184 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:40:24.167087 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:40:26.167175 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:40:28.672890 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:40:31.169239 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:40:33.667823 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:40:36.171004 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:40:38.667940 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:40:40.669115 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:40:43.167244 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:40:45.666553 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:40:47.666852 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:40:49.667008 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:40:51.667286 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:40:53.667979 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:40:56.167617 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:40:58.168137 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:41:00.666901 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:41:02.667475 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:41:04.667854 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:41:07.166064 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:41:09.166922 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:41:11.666452 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:41:13.667661 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:41:16.166300 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:41:18.167594 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:41:20.667102 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:41:23.166731 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:41:25.666838 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:41:27.667265 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:41:30.167508 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:41:32.665977 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:41:35.166708 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:41:37.167504 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:41:39.667288 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:41:42.166993 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:41:44.667683 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:41:47.167681 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:41:49.666245 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:41:51.668204 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:41:54.166213 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:41:56.167238 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:41:58.167666 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:42:00.667389 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:42:03.168104 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:42:05.667244 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:42:08.168573 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:42:10.666222 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:42:12.668057 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:42:15.166481 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:42:17.667126 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:42:20.165965 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:42:22.167282 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:42:24.167967 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:42:26.667558 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:42:29.168230 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:42:31.665801 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:42:33.667694 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:42:36.166985 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:42:38.667384 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:42:41.167415 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:42:43.667410 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:42:46.167098 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:42:48.167629 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:42:50.666737 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:42:52.667517 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:42:55.167938 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:42:57.668679 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:43:00.165881 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:43:02.166824 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:43:04.166905 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:43:06.667594 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:43:09.166145 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:43:11.166917 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:43:13.667058 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:43:15.667415 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:43:17.667718 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:43:20.166219 447403 pod_ready.go:102] pod "calico-node-72zqs" in "kube-system" namespace has status "Ready":"False"
I1031 19:43:21.674630 447403 pod_ready.go:81] duration metric: took 4m0.025928554s waiting for pod "calico-node-72zqs" in "kube-system" namespace to be "Ready" ...
E1031 19:43:21.674658 447403 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
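(Editor's note: calico-node-72zqs has now been NotReady for the whole 4m0s budget, which is very likely the proximate cause of this Start failure. A reasonable first diagnostic, assuming the stock Calico manifest labels and this run's kubeconfig context, would be:

$ # pod events usually name the container that keeps the pod NotReady
$ kubectl --context calico-192756 -n kube-system describe pod -l k8s-app=calico-node
)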
I1031 19:43:21.674669 447403 pod_ready.go:78] waiting up to 5m0s for pod "etcd-calico-192756" in "kube-system" namespace to be "Ready" ...
I1031 19:43:21.679344 447403 pod_ready.go:92] pod "etcd-calico-192756" in "kube-system" namespace has status "Ready":"True"
I1031 19:43:21.679365 447403 pod_ready.go:81] duration metric: took 4.687913ms waiting for pod "etcd-calico-192756" in "kube-system" namespace to be "Ready" ...
I1031 19:43:21.679380 447403 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-calico-192756" in "kube-system" namespace to be "Ready" ...
I1031 19:43:21.683900 447403 pod_ready.go:92] pod "kube-apiserver-calico-192756" in "kube-system" namespace has status "Ready":"True"
I1031 19:43:21.683919 447403 pod_ready.go:81] duration metric: took 4.522701ms waiting for pod "kube-apiserver-calico-192756" in "kube-system" namespace to be "Ready" ...
I1031 19:43:21.683928 447403 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-calico-192756" in "kube-system" namespace to be "Ready" ...
I1031 19:43:21.688504 447403 pod_ready.go:92] pod "kube-controller-manager-calico-192756" in "kube-system" namespace has status "Ready":"True"
I1031 19:43:21.688528 447403 pod_ready.go:81] duration metric: took 4.592783ms waiting for pod "kube-controller-manager-calico-192756" in "kube-system" namespace to be "Ready" ...
I1031 19:43:21.688542 447403 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-cslqd" in "kube-system" namespace to be "Ready" ...
I1031 19:43:22.063886 447403 pod_ready.go:92] pod "kube-proxy-cslqd" in "kube-system" namespace has status "Ready":"True"
I1031 19:43:22.063914 447403 pod_ready.go:81] duration metric: took 375.36355ms waiting for pod "kube-proxy-cslqd" in "kube-system" namespace to be "Ready" ...
I1031 19:43:22.063926 447403 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-calico-192756" in "kube-system" namespace to be "Ready" ...
I1031 19:43:22.463027 447403 pod_ready.go:92] pod "kube-scheduler-calico-192756" in "kube-system" namespace has status "Ready":"True"
I1031 19:43:22.463061 447403 pod_ready.go:81] duration metric: took 399.115081ms waiting for pod "kube-scheduler-calico-192756" in "kube-system" namespace to be "Ready" ...
I1031 19:43:22.463069 447403 pod_ready.go:38] duration metric: took 4m0.825066461s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1031 19:43:22.463095 447403 api_server.go:51] waiting for apiserver process to appear ...
I1031 19:43:22.463153 447403 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1031 19:43:22.487830 447403 logs.go:274] 1 containers: [f28e58feb1c7]
I1031 19:43:22.487901 447403 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1031 19:43:22.517091 447403 logs.go:274] 1 containers: [6e86659c8dda]
I1031 19:43:22.517168 447403 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1031 19:43:22.546910 447403 logs.go:274] 0 containers: []
W1031 19:43:22.546944 447403 logs.go:276] No container was found matching "coredns"
I1031 19:43:22.547018 447403 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1031 19:43:22.576713 447403 logs.go:274] 1 containers: [8c082ecdf701]
I1031 19:43:22.576799 447403 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1031 19:43:22.650097 447403 logs.go:274] 1 containers: [dd104d0da4c8]
I1031 19:43:22.650190 447403 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I1031 19:43:22.742036 447403 logs.go:274] 0 containers: []
W1031 19:43:22.742069 447403 logs.go:276] No container was found matching "kubernetes-dashboard"
I1031 19:43:22.742155 447403 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I1031 19:43:22.774270 447403 logs.go:274] 2 containers: [8f50371d2fd3 4d81a1e223ec]
I1031 19:43:22.774387 447403 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1031 19:43:22.843725 447403 logs.go:274] 1 containers: [608d3ef89f22]
I1031 19:43:22.843788 447403 logs.go:123] Gathering logs for describe nodes ...
I1031 19:43:22.843806 447403 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1031 19:43:23.069255 447403 logs.go:123] Gathering logs for kube-apiserver [f28e58feb1c7] ...
I1031 19:43:23.069292 447403 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f28e58feb1c7"
I1031 19:43:23.168375 447403 logs.go:123] Gathering logs for kube-proxy [dd104d0da4c8] ...
I1031 19:43:23.168432 447403 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd104d0da4c8"
I1031 19:43:23.251579 447403 logs.go:123] Gathering logs for storage-provisioner [8f50371d2fd3] ...
I1031 19:43:23.251620 447403 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f50371d2fd3"
I1031 19:43:23.338023 447403 logs.go:123] Gathering logs for Docker ...
I1031 19:43:23.338063 447403 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I1031 19:43:23.416280 447403 logs.go:123] Gathering logs for kubelet ...
I1031 19:43:23.416355 447403 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1031 19:43:23.607251 447403 logs.go:123] Gathering logs for etcd [6e86659c8dda] ...
I1031 19:43:23.607301 447403 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e86659c8dda"
I1031 19:43:23.641356 447403 logs.go:123] Gathering logs for kube-scheduler [8c082ecdf701] ...
I1031 19:43:23.641404 447403 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c082ecdf701"
I1031 19:43:23.680203 447403 logs.go:123] Gathering logs for storage-provisioner [4d81a1e223ec] ...
I1031 19:43:23.680241 447403 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d81a1e223ec"
I1031 19:43:23.728779 447403 logs.go:123] Gathering logs for kube-controller-manager [608d3ef89f22] ...
I1031 19:43:23.728822 447403 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 608d3ef89f22"
I1031 19:43:23.791752 447403 logs.go:123] Gathering logs for container status ...
I1031 19:43:23.791794 447403 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1031 19:43:23.840304 447403 logs.go:123] Gathering logs for dmesg ...
I1031 19:43:23.840353 447403 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
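The block above is minikube's standard diagnostics pass on a slow start: it shells into the node and collects per-container logs (docker logs --tail 400), the docker and kubelet journals, dmesg, and kubectl describe nodes. The same data can be gathered by hand while the cluster is still up, e.g. (a sketch using the profile name from this run):

    $ minikube -p calico-192756 ssh -- sudo journalctl -u kubelet -n 400
    $ minikube -p calico-192756 logs --file=calico-192756.log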
I1031 19:43:26.365528 447403 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1031 19:43:26.377633 447403 api_server.go:71] duration metric: took 4m4.779534536s to wait for apiserver process to appear ...
I1031 19:43:26.377662 447403 api_server.go:87] waiting for apiserver healthz status ...
I1031 19:43:26.377728 447403 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1031 19:43:26.404177 447403 logs.go:274] 1 containers: [f28e58feb1c7]
I1031 19:43:26.404254 447403 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1031 19:43:26.434171 447403 logs.go:274] 1 containers: [6e86659c8dda]
I1031 19:43:26.434251 447403 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1031 19:43:26.458798 447403 logs.go:274] 0 containers: []
W1031 19:43:26.458840 447403 logs.go:276] No container was found matching "coredns"
I1031 19:43:26.458893 447403 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1031 19:43:26.483163 447403 logs.go:274] 1 containers: [8c082ecdf701]
I1031 19:43:26.483237 447403 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1031 19:43:26.505456 447403 logs.go:274] 1 containers: [dd104d0da4c8]
I1031 19:43:26.505539 447403 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I1031 19:43:26.528303 447403 logs.go:274] 0 containers: []
W1031 19:43:26.528330 447403 logs.go:276] No container was found matching "kubernetes-dashboard"
I1031 19:43:26.528380 447403 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I1031 19:43:26.551335 447403 logs.go:274] 1 containers: [8f50371d2fd3]
I1031 19:43:26.551418 447403 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1031 19:43:26.575866 447403 logs.go:274] 1 containers: [608d3ef89f22]
I1031 19:43:26.575903 447403 logs.go:123] Gathering logs for kube-proxy [dd104d0da4c8] ...
I1031 19:43:26.575914 447403 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd104d0da4c8"
I1031 19:43:26.604059 447403 logs.go:123] Gathering logs for storage-provisioner [8f50371d2fd3] ...
I1031 19:43:26.604087 447403 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f50371d2fd3"
I1031 19:43:26.627832 447403 logs.go:123] Gathering logs for kube-controller-manager [608d3ef89f22] ...
I1031 19:43:26.627861 447403 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 608d3ef89f22"
I1031 19:43:26.683281 447403 logs.go:123] Gathering logs for etcd [6e86659c8dda] ...
I1031 19:43:26.683321 447403 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e86659c8dda"
I1031 19:43:26.714000 447403 logs.go:123] Gathering logs for kube-scheduler [8c082ecdf701] ...
I1031 19:43:26.714029 447403 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c082ecdf701"
I1031 19:43:26.747542 447403 logs.go:123] Gathering logs for Docker ...
I1031 19:43:26.747578 447403 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I1031 19:43:26.828953 447403 logs.go:123] Gathering logs for container status ...
I1031 19:43:26.828996 447403 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1031 19:43:26.869416 447403 logs.go:123] Gathering logs for kubelet ...
I1031 19:43:26.869461 447403 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1031 19:43:27.103641 447403 logs.go:123] Gathering logs for dmesg ...
I1031 19:43:27.103676 447403 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1031 19:43:27.149979 447403 logs.go:123] Gathering logs for describe nodes ...
I1031 19:43:27.150016 447403 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1031 19:43:27.441327 447403 logs.go:123] Gathering logs for kube-apiserver [f28e58feb1c7] ...
I1031 19:43:27.441362 447403 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f28e58feb1c7"
I1031 19:43:29.979245 447403 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1031 19:43:29.985213 447403 api_server.go:278] https://192.168.85.2:8443/healthz returned 200:
ok
I1031 19:43:29.986422 447403 api_server.go:140] control plane version: v1.25.3
I1031 19:43:29.986487 447403 api_server.go:130] duration metric: took 3.608816444s to wait for apiserver health ...
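So the control plane itself is fine: /healthz returns 200 and the reported version matches the requested v1.25.3; the stall is entirely in the workload pods. The same probe can be reproduced through kubectl's raw API access (a sketch, assuming the kubeconfig context created for this profile):

    $ kubectl --context calico-192756 get --raw /healthz
    ok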
I1031 19:43:29.986513 447403 system_pods.go:43] waiting for kube-system pods to appear ...
I1031 19:43:29.986581 447403 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1031 19:43:30.049806 447403 logs.go:274] 1 containers: [f28e58feb1c7]
I1031 19:43:30.049895 447403 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1031 19:43:30.128300 447403 logs.go:274] 1 containers: [6e86659c8dda]
I1031 19:43:30.128447 447403 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1031 19:43:30.155966 447403 logs.go:274] 0 containers: []
W1031 19:43:30.155997 447403 logs.go:276] No container was found matching "coredns"
I1031 19:43:30.156057 447403 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1031 19:43:30.232297 447403 logs.go:274] 1 containers: [8c082ecdf701]
I1031 19:43:30.232375 447403 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1031 19:43:30.259347 447403 logs.go:274] 1 containers: [dd104d0da4c8]
I1031 19:43:30.259434 447403 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I1031 19:43:30.346047 447403 logs.go:274] 0 containers: []
W1031 19:43:30.346075 447403 logs.go:276] No container was found matching "kubernetes-dashboard"
I1031 19:43:30.346134 447403 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I1031 19:43:30.430477 447403 logs.go:274] 1 containers: [8f50371d2fd3]
I1031 19:43:30.430575 447403 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1031 19:43:30.462857 447403 logs.go:274] 1 containers: [608d3ef89f22]
I1031 19:43:30.462905 447403 logs.go:123] Gathering logs for Docker ...
I1031 19:43:30.462922 447403 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I1031 19:43:30.593079 447403 logs.go:123] Gathering logs for container status ...
I1031 19:43:30.593119 447403 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1031 19:43:30.653938 447403 logs.go:123] Gathering logs for describe nodes ...
I1031 19:43:30.653991 447403 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1031 19:43:30.855816 447403 logs.go:123] Gathering logs for kube-scheduler [8c082ecdf701] ...
I1031 19:43:30.855846 447403 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c082ecdf701"
I1031 19:43:30.893894 447403 logs.go:123] Gathering logs for storage-provisioner [8f50371d2fd3] ...
I1031 19:43:30.893927 447403 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f50371d2fd3"
I1031 19:43:30.919187 447403 logs.go:123] Gathering logs for etcd [6e86659c8dda] ...
I1031 19:43:30.919225 447403 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e86659c8dda"
I1031 19:43:30.958443 447403 logs.go:123] Gathering logs for kube-proxy [dd104d0da4c8] ...
I1031 19:43:30.958483 447403 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd104d0da4c8"
I1031 19:43:31.035310 447403 logs.go:123] Gathering logs for kube-controller-manager [608d3ef89f22] ...
I1031 19:43:31.035354 447403 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 608d3ef89f22"
I1031 19:43:31.101776 447403 logs.go:123] Gathering logs for kubelet ...
I1031 19:43:31.101814 447403 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1031 19:43:31.308073 447403 logs.go:123] Gathering logs for dmesg ...
I1031 19:43:31.308118 447403 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1031 19:43:31.350622 447403 logs.go:123] Gathering logs for kube-apiserver [f28e58feb1c7] ...
I1031 19:43:31.350681 447403 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f28e58feb1c7"
I1031 19:43:33.948898 447403 system_pods.go:59] 9 kube-system pods found
I1031 19:43:33.948951 447403 system_pods.go:61] "calico-kube-controllers-7df895d496-qjwrq" [f01e4905-2aaf-428f-ae27-5025f3499c56] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I1031 19:43:33.948965 447403 system_pods.go:61] "calico-node-72zqs" [8b3ac730-55ff-4d82-9f15-d56e45bf8acb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I1031 19:43:33.948977 447403 system_pods.go:61] "coredns-565d847f94-snhrc" [4dc3edee-365d-4015-af2f-8bf99f6658a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 19:43:33.948991 447403 system_pods.go:61] "etcd-calico-192756" [fe172e12-403f-454a-a351-68765d0b8f23] Running
I1031 19:43:33.948999 447403 system_pods.go:61] "kube-apiserver-calico-192756" [a7301c74-c9b8-41ca-b9cc-94ef6febfc7b] Running
I1031 19:43:33.949010 447403 system_pods.go:61] "kube-controller-manager-calico-192756" [39002219-93ec-41c8-8d04-30a4f0fd1dcc] Running
I1031 19:43:33.949026 447403 system_pods.go:61] "kube-proxy-cslqd" [d5ae8e97-621b-4d33-9bc9-a72fc3d76641] Running
I1031 19:43:33.949037 447403 system_pods.go:61] "kube-scheduler-calico-192756" [b85ca253-62cb-46f2-a745-82ce262ea261] Running
I1031 19:43:33.949050 447403 system_pods.go:61] "storage-provisioner" [d40a462a-f3c2-4164-b12b-68a9f9201e39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 19:43:33.949063 447403 system_pods.go:74] duration metric: took 3.962536273s to wait for pod list to return data ...
I1031 19:43:33.949078 447403 default_sa.go:34] waiting for default service account to be created ...
I1031 19:43:33.951595 447403 default_sa.go:45] found service account: "default"
I1031 19:43:33.951618 447403 default_sa.go:55] duration metric: took 2.528144ms for default service account to be created ...
I1031 19:43:33.951627 447403 system_pods.go:116] waiting for k8s-apps to be running ...
I1031 19:43:33.957265 447403 system_pods.go:86] 9 kube-system pods found
I1031 19:43:33.957296 447403 system_pods.go:89] "calico-kube-controllers-7df895d496-qjwrq" [f01e4905-2aaf-428f-ae27-5025f3499c56] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I1031 19:43:33.957305 447403 system_pods.go:89] "calico-node-72zqs" [8b3ac730-55ff-4d82-9f15-d56e45bf8acb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I1031 19:43:33.957315 447403 system_pods.go:89] "coredns-565d847f94-snhrc" [4dc3edee-365d-4015-af2f-8bf99f6658a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 19:43:33.957320 447403 system_pods.go:89] "etcd-calico-192756" [fe172e12-403f-454a-a351-68765d0b8f23] Running
I1031 19:43:33.957325 447403 system_pods.go:89] "kube-apiserver-calico-192756" [a7301c74-c9b8-41ca-b9cc-94ef6febfc7b] Running
I1031 19:43:33.957333 447403 system_pods.go:89] "kube-controller-manager-calico-192756" [39002219-93ec-41c8-8d04-30a4f0fd1dcc] Running
I1031 19:43:33.957338 447403 system_pods.go:89] "kube-proxy-cslqd" [d5ae8e97-621b-4d33-9bc9-a72fc3d76641] Running
I1031 19:43:33.957345 447403 system_pods.go:89] "kube-scheduler-calico-192756" [b85ca253-62cb-46f2-a745-82ce262ea261] Running
I1031 19:43:33.957351 447403 system_pods.go:89] "storage-provisioner" [d40a462a-f3c2-4164-b12b-68a9f9201e39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 19:43:33.957373 447403 retry.go:31] will retry after 263.082536ms: missing components: kube-dns
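From here retry.go polls the kube-system pods with an increasing backoff (roughly 0.26s, 0.38s, 0.42s, ... growing to ~50s between attempts) until kube-dns appears or the wait budget is exhausted. Every iteration below shows the same picture: the static control-plane pods are Running, but calico-node never becomes Ready, so coredns and calico-kube-controllers stay Pending and kube-dns remains the missing component. When reading such a log, the usual next step is to ask why those pods are stuck (a sketch; the label selectors are the standard upstream ones and assumed here):

    $ kubectl -n kube-system describe pod -l k8s-app=calico-node
    $ kubectl -n kube-system describe pod -l k8s-app=kube-dns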
I1031 19:43:34.235879 447403 system_pods.go:86] 9 kube-system pods found
I1031 19:43:34.235914 447403 system_pods.go:89] "calico-kube-controllers-7df895d496-qjwrq" [f01e4905-2aaf-428f-ae27-5025f3499c56] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I1031 19:43:34.235927 447403 system_pods.go:89] "calico-node-72zqs" [8b3ac730-55ff-4d82-9f15-d56e45bf8acb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I1031 19:43:34.235939 447403 system_pods.go:89] "coredns-565d847f94-snhrc" [4dc3edee-365d-4015-af2f-8bf99f6658a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 19:43:34.235946 447403 system_pods.go:89] "etcd-calico-192756" [fe172e12-403f-454a-a351-68765d0b8f23] Running
I1031 19:43:34.235954 447403 system_pods.go:89] "kube-apiserver-calico-192756" [a7301c74-c9b8-41ca-b9cc-94ef6febfc7b] Running
I1031 19:43:34.235963 447403 system_pods.go:89] "kube-controller-manager-calico-192756" [39002219-93ec-41c8-8d04-30a4f0fd1dcc] Running
I1031 19:43:34.235974 447403 system_pods.go:89] "kube-proxy-cslqd" [d5ae8e97-621b-4d33-9bc9-a72fc3d76641] Running
I1031 19:43:34.235981 447403 system_pods.go:89] "kube-scheduler-calico-192756" [b85ca253-62cb-46f2-a745-82ce262ea261] Running
I1031 19:43:34.235995 447403 system_pods.go:89] "storage-provisioner" [d40a462a-f3c2-4164-b12b-68a9f9201e39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 19:43:34.236017 447403 retry.go:31] will retry after 381.329545ms: missing components: kube-dns
I1031 19:43:34.626267 447403 system_pods.go:86] 9 kube-system pods found
I1031 19:43:34.626311 447403 system_pods.go:89] "calico-kube-controllers-7df895d496-qjwrq" [f01e4905-2aaf-428f-ae27-5025f3499c56] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I1031 19:43:34.626324 447403 system_pods.go:89] "calico-node-72zqs" [8b3ac730-55ff-4d82-9f15-d56e45bf8acb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I1031 19:43:34.626337 447403 system_pods.go:89] "coredns-565d847f94-snhrc" [4dc3edee-365d-4015-af2f-8bf99f6658a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 19:43:34.626345 447403 system_pods.go:89] "etcd-calico-192756" [fe172e12-403f-454a-a351-68765d0b8f23] Running
I1031 19:43:34.626355 447403 system_pods.go:89] "kube-apiserver-calico-192756" [a7301c74-c9b8-41ca-b9cc-94ef6febfc7b] Running
I1031 19:43:34.626367 447403 system_pods.go:89] "kube-controller-manager-calico-192756" [39002219-93ec-41c8-8d04-30a4f0fd1dcc] Running
I1031 19:43:34.626374 447403 system_pods.go:89] "kube-proxy-cslqd" [d5ae8e97-621b-4d33-9bc9-a72fc3d76641] Running
I1031 19:43:34.626388 447403 system_pods.go:89] "kube-scheduler-calico-192756" [b85ca253-62cb-46f2-a745-82ce262ea261] Running
I1031 19:43:34.626405 447403 system_pods.go:89] "storage-provisioner" [d40a462a-f3c2-4164-b12b-68a9f9201e39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 19:43:34.626424 447403 retry.go:31] will retry after 422.765636ms: missing components: kube-dns
I1031 19:43:35.057593 447403 system_pods.go:86] 9 kube-system pods found
I1031 19:43:35.057628 447403 system_pods.go:89] "calico-kube-controllers-7df895d496-qjwrq" [f01e4905-2aaf-428f-ae27-5025f3499c56] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I1031 19:43:35.057638 447403 system_pods.go:89] "calico-node-72zqs" [8b3ac730-55ff-4d82-9f15-d56e45bf8acb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I1031 19:43:35.057646 447403 system_pods.go:89] "coredns-565d847f94-snhrc" [4dc3edee-365d-4015-af2f-8bf99f6658a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 19:43:35.057650 447403 system_pods.go:89] "etcd-calico-192756" [fe172e12-403f-454a-a351-68765d0b8f23] Running
I1031 19:43:35.057655 447403 system_pods.go:89] "kube-apiserver-calico-192756" [a7301c74-c9b8-41ca-b9cc-94ef6febfc7b] Running
I1031 19:43:35.057659 447403 system_pods.go:89] "kube-controller-manager-calico-192756" [39002219-93ec-41c8-8d04-30a4f0fd1dcc] Running
I1031 19:43:35.057664 447403 system_pods.go:89] "kube-proxy-cslqd" [d5ae8e97-621b-4d33-9bc9-a72fc3d76641] Running
I1031 19:43:35.057669 447403 system_pods.go:89] "kube-scheduler-calico-192756" [b85ca253-62cb-46f2-a745-82ce262ea261] Running
I1031 19:43:35.057677 447403 system_pods.go:89] "storage-provisioner" [d40a462a-f3c2-4164-b12b-68a9f9201e39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 19:43:35.057693 447403 retry.go:31] will retry after 473.074753ms: missing components: kube-dns
I1031 19:43:35.538955 447403 system_pods.go:86] 9 kube-system pods found
I1031 19:43:35.538988 447403 system_pods.go:89] "calico-kube-controllers-7df895d496-qjwrq" [f01e4905-2aaf-428f-ae27-5025f3499c56] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I1031 19:43:35.538997 447403 system_pods.go:89] "calico-node-72zqs" [8b3ac730-55ff-4d82-9f15-d56e45bf8acb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I1031 19:43:35.539005 447403 system_pods.go:89] "coredns-565d847f94-snhrc" [4dc3edee-365d-4015-af2f-8bf99f6658a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 19:43:35.539009 447403 system_pods.go:89] "etcd-calico-192756" [fe172e12-403f-454a-a351-68765d0b8f23] Running
I1031 19:43:35.539014 447403 system_pods.go:89] "kube-apiserver-calico-192756" [a7301c74-c9b8-41ca-b9cc-94ef6febfc7b] Running
I1031 19:43:35.539019 447403 system_pods.go:89] "kube-controller-manager-calico-192756" [39002219-93ec-41c8-8d04-30a4f0fd1dcc] Running
I1031 19:43:35.539024 447403 system_pods.go:89] "kube-proxy-cslqd" [d5ae8e97-621b-4d33-9bc9-a72fc3d76641] Running
I1031 19:43:35.539028 447403 system_pods.go:89] "kube-scheduler-calico-192756" [b85ca253-62cb-46f2-a745-82ce262ea261] Running
I1031 19:43:35.539039 447403 system_pods.go:89] "storage-provisioner" [d40a462a-f3c2-4164-b12b-68a9f9201e39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 19:43:35.539053 447403 retry.go:31] will retry after 587.352751ms: missing components: kube-dns
I1031 19:43:36.132684 447403 system_pods.go:86] 9 kube-system pods found
I1031 19:43:36.132719 447403 system_pods.go:89] "calico-kube-controllers-7df895d496-qjwrq" [f01e4905-2aaf-428f-ae27-5025f3499c56] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I1031 19:43:36.132730 447403 system_pods.go:89] "calico-node-72zqs" [8b3ac730-55ff-4d82-9f15-d56e45bf8acb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I1031 19:43:36.132739 447403 system_pods.go:89] "coredns-565d847f94-snhrc" [4dc3edee-365d-4015-af2f-8bf99f6658a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 19:43:36.132744 447403 system_pods.go:89] "etcd-calico-192756" [fe172e12-403f-454a-a351-68765d0b8f23] Running
I1031 19:43:36.132749 447403 system_pods.go:89] "kube-apiserver-calico-192756" [a7301c74-c9b8-41ca-b9cc-94ef6febfc7b] Running
I1031 19:43:36.132754 447403 system_pods.go:89] "kube-controller-manager-calico-192756" [39002219-93ec-41c8-8d04-30a4f0fd1dcc] Running
I1031 19:43:36.132763 447403 system_pods.go:89] "kube-proxy-cslqd" [d5ae8e97-621b-4d33-9bc9-a72fc3d76641] Running
I1031 19:43:36.132769 447403 system_pods.go:89] "kube-scheduler-calico-192756" [b85ca253-62cb-46f2-a745-82ce262ea261] Running
I1031 19:43:36.132774 447403 system_pods.go:89] "storage-provisioner" [d40a462a-f3c2-4164-b12b-68a9f9201e39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 19:43:36.132793 447403 retry.go:31] will retry after 834.206799ms: missing components: kube-dns
I1031 19:43:36.973735 447403 system_pods.go:86] 9 kube-system pods found
I1031 19:43:36.973769 447403 system_pods.go:89] "calico-kube-controllers-7df895d496-qjwrq" [f01e4905-2aaf-428f-ae27-5025f3499c56] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I1031 19:43:36.973783 447403 system_pods.go:89] "calico-node-72zqs" [8b3ac730-55ff-4d82-9f15-d56e45bf8acb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I1031 19:43:36.973795 447403 system_pods.go:89] "coredns-565d847f94-snhrc" [4dc3edee-365d-4015-af2f-8bf99f6658a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 19:43:36.973803 447403 system_pods.go:89] "etcd-calico-192756" [fe172e12-403f-454a-a351-68765d0b8f23] Running
I1031 19:43:36.973814 447403 system_pods.go:89] "kube-apiserver-calico-192756" [a7301c74-c9b8-41ca-b9cc-94ef6febfc7b] Running
I1031 19:43:36.973819 447403 system_pods.go:89] "kube-controller-manager-calico-192756" [39002219-93ec-41c8-8d04-30a4f0fd1dcc] Running
I1031 19:43:36.973828 447403 system_pods.go:89] "kube-proxy-cslqd" [d5ae8e97-621b-4d33-9bc9-a72fc3d76641] Running
I1031 19:43:36.973832 447403 system_pods.go:89] "kube-scheduler-calico-192756" [b85ca253-62cb-46f2-a745-82ce262ea261] Running
I1031 19:43:36.973843 447403 system_pods.go:89] "storage-provisioner" [d40a462a-f3c2-4164-b12b-68a9f9201e39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 19:43:36.973863 447403 retry.go:31] will retry after 746.553905ms: missing components: kube-dns
I1031 19:43:37.735103 447403 system_pods.go:86] 9 kube-system pods found
I1031 19:43:37.735147 447403 system_pods.go:89] "calico-kube-controllers-7df895d496-qjwrq" [f01e4905-2aaf-428f-ae27-5025f3499c56] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I1031 19:43:37.735163 447403 system_pods.go:89] "calico-node-72zqs" [8b3ac730-55ff-4d82-9f15-d56e45bf8acb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I1031 19:43:37.735176 447403 system_pods.go:89] "coredns-565d847f94-snhrc" [4dc3edee-365d-4015-af2f-8bf99f6658a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 19:43:37.735184 447403 system_pods.go:89] "etcd-calico-192756" [fe172e12-403f-454a-a351-68765d0b8f23] Running
I1031 19:43:37.735195 447403 system_pods.go:89] "kube-apiserver-calico-192756" [a7301c74-c9b8-41ca-b9cc-94ef6febfc7b] Running
I1031 19:43:37.735209 447403 system_pods.go:89] "kube-controller-manager-calico-192756" [39002219-93ec-41c8-8d04-30a4f0fd1dcc] Running
I1031 19:43:37.735224 447403 system_pods.go:89] "kube-proxy-cslqd" [d5ae8e97-621b-4d33-9bc9-a72fc3d76641] Running
I1031 19:43:37.735237 447403 system_pods.go:89] "kube-scheduler-calico-192756" [b85ca253-62cb-46f2-a745-82ce262ea261] Running
I1031 19:43:37.735247 447403 system_pods.go:89] "storage-provisioner" [d40a462a-f3c2-4164-b12b-68a9f9201e39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 19:43:37.735273 447403 retry.go:31] will retry after 987.362415ms: missing components: kube-dns
I1031 19:43:38.733883 447403 system_pods.go:86] 9 kube-system pods found
I1031 19:43:38.733926 447403 system_pods.go:89] "calico-kube-controllers-7df895d496-qjwrq" [f01e4905-2aaf-428f-ae27-5025f3499c56] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I1031 19:43:38.733940 447403 system_pods.go:89] "calico-node-72zqs" [8b3ac730-55ff-4d82-9f15-d56e45bf8acb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I1031 19:43:38.733952 447403 system_pods.go:89] "coredns-565d847f94-snhrc" [4dc3edee-365d-4015-af2f-8bf99f6658a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 19:43:38.733964 447403 system_pods.go:89] "etcd-calico-192756" [fe172e12-403f-454a-a351-68765d0b8f23] Running
I1031 19:43:38.733977 447403 system_pods.go:89] "kube-apiserver-calico-192756" [a7301c74-c9b8-41ca-b9cc-94ef6febfc7b] Running
I1031 19:43:38.733989 447403 system_pods.go:89] "kube-controller-manager-calico-192756" [39002219-93ec-41c8-8d04-30a4f0fd1dcc] Running
I1031 19:43:38.734000 447403 system_pods.go:89] "kube-proxy-cslqd" [d5ae8e97-621b-4d33-9bc9-a72fc3d76641] Running
I1031 19:43:38.734011 447403 system_pods.go:89] "kube-scheduler-calico-192756" [b85ca253-62cb-46f2-a745-82ce262ea261] Running
I1031 19:43:38.734024 447403 system_pods.go:89] "storage-provisioner" [d40a462a-f3c2-4164-b12b-68a9f9201e39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 19:43:38.734051 447403 retry.go:31] will retry after 1.189835008s: missing components: kube-dns
I1031 19:43:39.935064 447403 system_pods.go:86] 9 kube-system pods found
I1031 19:43:39.935106 447403 system_pods.go:89] "calico-kube-controllers-7df895d496-qjwrq" [f01e4905-2aaf-428f-ae27-5025f3499c56] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I1031 19:43:39.935119 447403 system_pods.go:89] "calico-node-72zqs" [8b3ac730-55ff-4d82-9f15-d56e45bf8acb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I1031 19:43:39.935130 447403 system_pods.go:89] "coredns-565d847f94-snhrc" [4dc3edee-365d-4015-af2f-8bf99f6658a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 19:43:39.935137 447403 system_pods.go:89] "etcd-calico-192756" [fe172e12-403f-454a-a351-68765d0b8f23] Running
I1031 19:43:39.935158 447403 system_pods.go:89] "kube-apiserver-calico-192756" [a7301c74-c9b8-41ca-b9cc-94ef6febfc7b] Running
I1031 19:43:39.935166 447403 system_pods.go:89] "kube-controller-manager-calico-192756" [39002219-93ec-41c8-8d04-30a4f0fd1dcc] Running
I1031 19:43:39.935178 447403 system_pods.go:89] "kube-proxy-cslqd" [d5ae8e97-621b-4d33-9bc9-a72fc3d76641] Running
I1031 19:43:39.935185 447403 system_pods.go:89] "kube-scheduler-calico-192756" [b85ca253-62cb-46f2-a745-82ce262ea261] Running
I1031 19:43:39.935199 447403 system_pods.go:89] "storage-provisioner" [d40a462a-f3c2-4164-b12b-68a9f9201e39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 19:43:39.935219 447403 retry.go:31] will retry after 1.677229867s: missing components: kube-dns
I1031 19:43:41.635019 447403 system_pods.go:86] 9 kube-system pods found
I1031 19:43:41.635062 447403 system_pods.go:89] "calico-kube-controllers-7df895d496-qjwrq" [f01e4905-2aaf-428f-ae27-5025f3499c56] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I1031 19:43:41.635076 447403 system_pods.go:89] "calico-node-72zqs" [8b3ac730-55ff-4d82-9f15-d56e45bf8acb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I1031 19:43:41.635091 447403 system_pods.go:89] "coredns-565d847f94-snhrc" [4dc3edee-365d-4015-af2f-8bf99f6658a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 19:43:41.635104 447403 system_pods.go:89] "etcd-calico-192756" [fe172e12-403f-454a-a351-68765d0b8f23] Running
I1031 19:43:41.635109 447403 system_pods.go:89] "kube-apiserver-calico-192756" [a7301c74-c9b8-41ca-b9cc-94ef6febfc7b] Running
I1031 19:43:41.635115 447403 system_pods.go:89] "kube-controller-manager-calico-192756" [39002219-93ec-41c8-8d04-30a4f0fd1dcc] Running
I1031 19:43:41.635129 447403 system_pods.go:89] "kube-proxy-cslqd" [d5ae8e97-621b-4d33-9bc9-a72fc3d76641] Running
I1031 19:43:41.635142 447403 system_pods.go:89] "kube-scheduler-calico-192756" [b85ca253-62cb-46f2-a745-82ce262ea261] Running
I1031 19:43:41.635155 447403 system_pods.go:89] "storage-provisioner" [d40a462a-f3c2-4164-b12b-68a9f9201e39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 19:43:41.635181 447403 retry.go:31] will retry after 2.346016261s: missing components: kube-dns
I1031 19:43:43.990078 447403 system_pods.go:86] 9 kube-system pods found
I1031 19:43:43.990127 447403 system_pods.go:89] "calico-kube-controllers-7df895d496-qjwrq" [f01e4905-2aaf-428f-ae27-5025f3499c56] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I1031 19:43:43.990141 447403 system_pods.go:89] "calico-node-72zqs" [8b3ac730-55ff-4d82-9f15-d56e45bf8acb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I1031 19:43:43.990153 447403 system_pods.go:89] "coredns-565d847f94-snhrc" [4dc3edee-365d-4015-af2f-8bf99f6658a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 19:43:43.990162 447403 system_pods.go:89] "etcd-calico-192756" [fe172e12-403f-454a-a351-68765d0b8f23] Running
I1031 19:43:43.990170 447403 system_pods.go:89] "kube-apiserver-calico-192756" [a7301c74-c9b8-41ca-b9cc-94ef6febfc7b] Running
I1031 19:43:43.990176 447403 system_pods.go:89] "kube-controller-manager-calico-192756" [39002219-93ec-41c8-8d04-30a4f0fd1dcc] Running
I1031 19:43:43.990184 447403 system_pods.go:89] "kube-proxy-cslqd" [d5ae8e97-621b-4d33-9bc9-a72fc3d76641] Running
I1031 19:43:43.990196 447403 system_pods.go:89] "kube-scheduler-calico-192756" [b85ca253-62cb-46f2-a745-82ce262ea261] Running
I1031 19:43:43.990205 447403 system_pods.go:89] "storage-provisioner" [d40a462a-f3c2-4164-b12b-68a9f9201e39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 19:43:43.990232 447403 retry.go:31] will retry after 3.36678925s: missing components: kube-dns
I1031 19:43:47.365456 447403 system_pods.go:86] 9 kube-system pods found
I1031 19:43:47.365497 447403 system_pods.go:89] "calico-kube-controllers-7df895d496-qjwrq" [f01e4905-2aaf-428f-ae27-5025f3499c56] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I1031 19:43:47.365511 447403 system_pods.go:89] "calico-node-72zqs" [8b3ac730-55ff-4d82-9f15-d56e45bf8acb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I1031 19:43:47.365522 447403 system_pods.go:89] "coredns-565d847f94-snhrc" [4dc3edee-365d-4015-af2f-8bf99f6658a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 19:43:47.365529 447403 system_pods.go:89] "etcd-calico-192756" [fe172e12-403f-454a-a351-68765d0b8f23] Running
I1031 19:43:47.365537 447403 system_pods.go:89] "kube-apiserver-calico-192756" [a7301c74-c9b8-41ca-b9cc-94ef6febfc7b] Running
I1031 19:43:47.365544 447403 system_pods.go:89] "kube-controller-manager-calico-192756" [39002219-93ec-41c8-8d04-30a4f0fd1dcc] Running
I1031 19:43:47.365560 447403 system_pods.go:89] "kube-proxy-cslqd" [d5ae8e97-621b-4d33-9bc9-a72fc3d76641] Running
I1031 19:43:47.365567 447403 system_pods.go:89] "kube-scheduler-calico-192756" [b85ca253-62cb-46f2-a745-82ce262ea261] Running
I1031 19:43:47.365576 447403 system_pods.go:89] "storage-provisioner" [d40a462a-f3c2-4164-b12b-68a9f9201e39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 19:43:47.365600 447403 retry.go:31] will retry after 3.11822781s: missing components: kube-dns
I1031 19:43:50.492830 447403 system_pods.go:86] 9 kube-system pods found
I1031 19:43:50.492875 447403 system_pods.go:89] "calico-kube-controllers-7df895d496-qjwrq" [f01e4905-2aaf-428f-ae27-5025f3499c56] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I1031 19:43:50.492889 447403 system_pods.go:89] "calico-node-72zqs" [8b3ac730-55ff-4d82-9f15-d56e45bf8acb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I1031 19:43:50.492902 447403 system_pods.go:89] "coredns-565d847f94-snhrc" [4dc3edee-365d-4015-af2f-8bf99f6658a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 19:43:50.492912 447403 system_pods.go:89] "etcd-calico-192756" [fe172e12-403f-454a-a351-68765d0b8f23] Running
I1031 19:43:50.492927 447403 system_pods.go:89] "kube-apiserver-calico-192756" [a7301c74-c9b8-41ca-b9cc-94ef6febfc7b] Running
I1031 19:43:50.492935 447403 system_pods.go:89] "kube-controller-manager-calico-192756" [39002219-93ec-41c8-8d04-30a4f0fd1dcc] Running
I1031 19:43:50.492950 447403 system_pods.go:89] "kube-proxy-cslqd" [d5ae8e97-621b-4d33-9bc9-a72fc3d76641] Running
I1031 19:43:50.492957 447403 system_pods.go:89] "kube-scheduler-calico-192756" [b85ca253-62cb-46f2-a745-82ce262ea261] Running
I1031 19:43:50.492973 447403 system_pods.go:89] "storage-provisioner" [d40a462a-f3c2-4164-b12b-68a9f9201e39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 19:43:50.492997 447403 retry.go:31] will retry after 4.276119362s: missing components: kube-dns
I1031 19:43:54.777699 447403 system_pods.go:86] 9 kube-system pods found
I1031 19:43:54.777736 447403 system_pods.go:89] "calico-kube-controllers-7df895d496-qjwrq" [f01e4905-2aaf-428f-ae27-5025f3499c56] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I1031 19:43:54.777745 447403 system_pods.go:89] "calico-node-72zqs" [8b3ac730-55ff-4d82-9f15-d56e45bf8acb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I1031 19:43:54.777755 447403 system_pods.go:89] "coredns-565d847f94-snhrc" [4dc3edee-365d-4015-af2f-8bf99f6658a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 19:43:54.777759 447403 system_pods.go:89] "etcd-calico-192756" [fe172e12-403f-454a-a351-68765d0b8f23] Running
I1031 19:43:54.777764 447403 system_pods.go:89] "kube-apiserver-calico-192756" [a7301c74-c9b8-41ca-b9cc-94ef6febfc7b] Running
I1031 19:43:54.777771 447403 system_pods.go:89] "kube-controller-manager-calico-192756" [39002219-93ec-41c8-8d04-30a4f0fd1dcc] Running
I1031 19:43:54.777777 447403 system_pods.go:89] "kube-proxy-cslqd" [d5ae8e97-621b-4d33-9bc9-a72fc3d76641] Running
I1031 19:43:54.777784 447403 system_pods.go:89] "kube-scheduler-calico-192756" [b85ca253-62cb-46f2-a745-82ce262ea261] Running
I1031 19:43:54.777796 447403 system_pods.go:89] "storage-provisioner" [d40a462a-f3c2-4164-b12b-68a9f9201e39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 19:43:54.777817 447403 retry.go:31] will retry after 5.167232101s: missing components: kube-dns
I1031 19:43:59.952504 447403 system_pods.go:86] 9 kube-system pods found
I1031 19:43:59.952535 447403 system_pods.go:89] "calico-kube-controllers-7df895d496-qjwrq" [f01e4905-2aaf-428f-ae27-5025f3499c56] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I1031 19:43:59.952544 447403 system_pods.go:89] "calico-node-72zqs" [8b3ac730-55ff-4d82-9f15-d56e45bf8acb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I1031 19:43:59.952552 447403 system_pods.go:89] "coredns-565d847f94-snhrc" [4dc3edee-365d-4015-af2f-8bf99f6658a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 19:43:59.952558 447403 system_pods.go:89] "etcd-calico-192756" [fe172e12-403f-454a-a351-68765d0b8f23] Running
I1031 19:43:59.952563 447403 system_pods.go:89] "kube-apiserver-calico-192756" [a7301c74-c9b8-41ca-b9cc-94ef6febfc7b] Running
I1031 19:43:59.952570 447403 system_pods.go:89] "kube-controller-manager-calico-192756" [39002219-93ec-41c8-8d04-30a4f0fd1dcc] Running
I1031 19:43:59.952576 447403 system_pods.go:89] "kube-proxy-cslqd" [d5ae8e97-621b-4d33-9bc9-a72fc3d76641] Running
I1031 19:43:59.952583 447403 system_pods.go:89] "kube-scheduler-calico-192756" [b85ca253-62cb-46f2-a745-82ce262ea261] Running
I1031 19:43:59.952634 447403 system_pods.go:89] "storage-provisioner" [d40a462a-f3c2-4164-b12b-68a9f9201e39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 19:43:59.952658 447403 retry.go:31] will retry after 6.994901864s: missing components: kube-dns
I1031 19:44:06.955803 447403 system_pods.go:86] 9 kube-system pods found
I1031 19:44:06.955842 447403 system_pods.go:89] "calico-kube-controllers-7df895d496-qjwrq" [f01e4905-2aaf-428f-ae27-5025f3499c56] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I1031 19:44:06.955855 447403 system_pods.go:89] "calico-node-72zqs" [8b3ac730-55ff-4d82-9f15-d56e45bf8acb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I1031 19:44:06.955866 447403 system_pods.go:89] "coredns-565d847f94-snhrc" [4dc3edee-365d-4015-af2f-8bf99f6658a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 19:44:06.955874 447403 system_pods.go:89] "etcd-calico-192756" [fe172e12-403f-454a-a351-68765d0b8f23] Running
I1031 19:44:06.955886 447403 system_pods.go:89] "kube-apiserver-calico-192756" [a7301c74-c9b8-41ca-b9cc-94ef6febfc7b] Running
I1031 19:44:06.955899 447403 system_pods.go:89] "kube-controller-manager-calico-192756" [39002219-93ec-41c8-8d04-30a4f0fd1dcc] Running
I1031 19:44:06.955916 447403 system_pods.go:89] "kube-proxy-cslqd" [d5ae8e97-621b-4d33-9bc9-a72fc3d76641] Running
I1031 19:44:06.955926 447403 system_pods.go:89] "kube-scheduler-calico-192756" [b85ca253-62cb-46f2-a745-82ce262ea261] Running
I1031 19:44:06.955937 447403 system_pods.go:89] "storage-provisioner" [d40a462a-f3c2-4164-b12b-68a9f9201e39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 19:44:06.955961 447403 retry.go:31] will retry after 7.91826225s: missing components: kube-dns
I1031 19:44:14.880928 447403 system_pods.go:86] 9 kube-system pods found
I1031 19:44:14.880962 447403 system_pods.go:89] "calico-kube-controllers-7df895d496-qjwrq" [f01e4905-2aaf-428f-ae27-5025f3499c56] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I1031 19:44:14.880971 447403 system_pods.go:89] "calico-node-72zqs" [8b3ac730-55ff-4d82-9f15-d56e45bf8acb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I1031 19:44:14.880980 447403 system_pods.go:89] "coredns-565d847f94-snhrc" [4dc3edee-365d-4015-af2f-8bf99f6658a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 19:44:14.880985 447403 system_pods.go:89] "etcd-calico-192756" [fe172e12-403f-454a-a351-68765d0b8f23] Running
I1031 19:44:14.880992 447403 system_pods.go:89] "kube-apiserver-calico-192756" [a7301c74-c9b8-41ca-b9cc-94ef6febfc7b] Running
I1031 19:44:14.880997 447403 system_pods.go:89] "kube-controller-manager-calico-192756" [39002219-93ec-41c8-8d04-30a4f0fd1dcc] Running
I1031 19:44:14.881002 447403 system_pods.go:89] "kube-proxy-cslqd" [d5ae8e97-621b-4d33-9bc9-a72fc3d76641] Running
I1031 19:44:14.881010 447403 system_pods.go:89] "kube-scheduler-calico-192756" [b85ca253-62cb-46f2-a745-82ce262ea261] Running
I1031 19:44:14.881015 447403 system_pods.go:89] "storage-provisioner" [d40a462a-f3c2-4164-b12b-68a9f9201e39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 19:44:14.881034 447403 retry.go:31] will retry after 9.953714808s: missing components: kube-dns
I1031 19:44:24.842651 447403 system_pods.go:86] 9 kube-system pods found
I1031 19:44:24.842688 447403 system_pods.go:89] "calico-kube-controllers-7df895d496-qjwrq" [f01e4905-2aaf-428f-ae27-5025f3499c56] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I1031 19:44:24.842697 447403 system_pods.go:89] "calico-node-72zqs" [8b3ac730-55ff-4d82-9f15-d56e45bf8acb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I1031 19:44:24.842705 447403 system_pods.go:89] "coredns-565d847f94-snhrc" [4dc3edee-365d-4015-af2f-8bf99f6658a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 19:44:24.842710 447403 system_pods.go:89] "etcd-calico-192756" [fe172e12-403f-454a-a351-68765d0b8f23] Running
I1031 19:44:24.842717 447403 system_pods.go:89] "kube-apiserver-calico-192756" [a7301c74-c9b8-41ca-b9cc-94ef6febfc7b] Running
I1031 19:44:24.842722 447403 system_pods.go:89] "kube-controller-manager-calico-192756" [39002219-93ec-41c8-8d04-30a4f0fd1dcc] Running
I1031 19:44:24.842727 447403 system_pods.go:89] "kube-proxy-cslqd" [d5ae8e97-621b-4d33-9bc9-a72fc3d76641] Running
I1031 19:44:24.842731 447403 system_pods.go:89] "kube-scheduler-calico-192756" [b85ca253-62cb-46f2-a745-82ce262ea261] Running
I1031 19:44:24.842740 447403 system_pods.go:89] "storage-provisioner" [d40a462a-f3c2-4164-b12b-68a9f9201e39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 19:44:24.842760 447403 retry.go:31] will retry after 15.120437328s: missing components: kube-dns
I1031 19:44:39.971837 447403 system_pods.go:86] 9 kube-system pods found
I1031 19:44:39.971878 447403 system_pods.go:89] "calico-kube-controllers-7df895d496-qjwrq" [f01e4905-2aaf-428f-ae27-5025f3499c56] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I1031 19:44:39.971891 447403 system_pods.go:89] "calico-node-72zqs" [8b3ac730-55ff-4d82-9f15-d56e45bf8acb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I1031 19:44:39.971903 447403 system_pods.go:89] "coredns-565d847f94-snhrc" [4dc3edee-365d-4015-af2f-8bf99f6658a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 19:44:39.971910 447403 system_pods.go:89] "etcd-calico-192756" [fe172e12-403f-454a-a351-68765d0b8f23] Running
I1031 19:44:39.971917 447403 system_pods.go:89] "kube-apiserver-calico-192756" [a7301c74-c9b8-41ca-b9cc-94ef6febfc7b] Running
I1031 19:44:39.971924 447403 system_pods.go:89] "kube-controller-manager-calico-192756" [39002219-93ec-41c8-8d04-30a4f0fd1dcc] Running
I1031 19:44:39.971935 447403 system_pods.go:89] "kube-proxy-cslqd" [d5ae8e97-621b-4d33-9bc9-a72fc3d76641] Running
I1031 19:44:39.971951 447403 system_pods.go:89] "kube-scheduler-calico-192756" [b85ca253-62cb-46f2-a745-82ce262ea261] Running
I1031 19:44:39.971960 447403 system_pods.go:89] "storage-provisioner" [d40a462a-f3c2-4164-b12b-68a9f9201e39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 19:44:39.971980 447403 retry.go:31] will retry after 14.90607158s: missing components: kube-dns
I1031 19:44:54.890529 447403 system_pods.go:86] 9 kube-system pods found
I1031 19:44:54.890565 447403 system_pods.go:89] "calico-kube-controllers-7df895d496-qjwrq" [f01e4905-2aaf-428f-ae27-5025f3499c56] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I1031 19:44:54.890574 447403 system_pods.go:89] "calico-node-72zqs" [8b3ac730-55ff-4d82-9f15-d56e45bf8acb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I1031 19:44:54.890582 447403 system_pods.go:89] "coredns-565d847f94-snhrc" [4dc3edee-365d-4015-af2f-8bf99f6658a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 19:44:54.890587 447403 system_pods.go:89] "etcd-calico-192756" [fe172e12-403f-454a-a351-68765d0b8f23] Running
I1031 19:44:54.890606 447403 system_pods.go:89] "kube-apiserver-calico-192756" [a7301c74-c9b8-41ca-b9cc-94ef6febfc7b] Running
I1031 19:44:54.890613 447403 system_pods.go:89] "kube-controller-manager-calico-192756" [39002219-93ec-41c8-8d04-30a4f0fd1dcc] Running
I1031 19:44:54.890618 447403 system_pods.go:89] "kube-proxy-cslqd" [d5ae8e97-621b-4d33-9bc9-a72fc3d76641] Running
I1031 19:44:54.890626 447403 system_pods.go:89] "kube-scheduler-calico-192756" [b85ca253-62cb-46f2-a745-82ce262ea261] Running
I1031 19:44:54.890632 447403 system_pods.go:89] "storage-provisioner" [d40a462a-f3c2-4164-b12b-68a9f9201e39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 19:44:54.890650 447403 retry.go:31] will retry after 18.465989061s: missing components: kube-dns
I1031 19:45:13.365045 447403 system_pods.go:86] 9 kube-system pods found
I1031 19:45:13.365081 447403 system_pods.go:89] "calico-kube-controllers-7df895d496-qjwrq" [f01e4905-2aaf-428f-ae27-5025f3499c56] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I1031 19:45:13.365090 447403 system_pods.go:89] "calico-node-72zqs" [8b3ac730-55ff-4d82-9f15-d56e45bf8acb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I1031 19:45:13.365098 447403 system_pods.go:89] "coredns-565d847f94-snhrc" [4dc3edee-365d-4015-af2f-8bf99f6658a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 19:45:13.365103 447403 system_pods.go:89] "etcd-calico-192756" [fe172e12-403f-454a-a351-68765d0b8f23] Running
I1031 19:45:13.365111 447403 system_pods.go:89] "kube-apiserver-calico-192756" [a7301c74-c9b8-41ca-b9cc-94ef6febfc7b] Running
I1031 19:45:13.365118 447403 system_pods.go:89] "kube-controller-manager-calico-192756" [39002219-93ec-41c8-8d04-30a4f0fd1dcc] Running
I1031 19:45:13.365125 447403 system_pods.go:89] "kube-proxy-cslqd" [d5ae8e97-621b-4d33-9bc9-a72fc3d76641] Running
I1031 19:45:13.365135 447403 system_pods.go:89] "kube-scheduler-calico-192756" [b85ca253-62cb-46f2-a745-82ce262ea261] Running
I1031 19:45:13.365142 447403 system_pods.go:89] "storage-provisioner" [d40a462a-f3c2-4164-b12b-68a9f9201e39] Running
I1031 19:45:13.365164 447403 retry.go:31] will retry after 25.219510332s: missing components: kube-dns
I1031 19:45:38.593602 447403 system_pods.go:86] 9 kube-system pods found
I1031 19:45:38.593650 447403 system_pods.go:89] "calico-kube-controllers-7df895d496-qjwrq" [f01e4905-2aaf-428f-ae27-5025f3499c56] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I1031 19:45:38.593664 447403 system_pods.go:89] "calico-node-72zqs" [8b3ac730-55ff-4d82-9f15-d56e45bf8acb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I1031 19:45:38.593679 447403 system_pods.go:89] "coredns-565d847f94-snhrc" [4dc3edee-365d-4015-af2f-8bf99f6658a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 19:45:38.593685 447403 system_pods.go:89] "etcd-calico-192756" [fe172e12-403f-454a-a351-68765d0b8f23] Running
I1031 19:45:38.593705 447403 system_pods.go:89] "kube-apiserver-calico-192756" [a7301c74-c9b8-41ca-b9cc-94ef6febfc7b] Running
I1031 19:45:38.593712 447403 system_pods.go:89] "kube-controller-manager-calico-192756" [39002219-93ec-41c8-8d04-30a4f0fd1dcc] Running
I1031 19:45:38.593725 447403 system_pods.go:89] "kube-proxy-cslqd" [d5ae8e97-621b-4d33-9bc9-a72fc3d76641] Running
I1031 19:45:38.593732 447403 system_pods.go:89] "kube-scheduler-calico-192756" [b85ca253-62cb-46f2-a745-82ce262ea261] Running
I1031 19:45:38.593745 447403 system_pods.go:89] "storage-provisioner" [d40a462a-f3c2-4164-b12b-68a9f9201e39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 19:45:38.593770 447403 retry.go:31] will retry after 35.078569648s: missing components: kube-dns
I1031 19:46:13.682169 447403 system_pods.go:86] 9 kube-system pods found
I1031 19:46:13.682218 447403 system_pods.go:89] "calico-kube-controllers-7df895d496-qjwrq" [f01e4905-2aaf-428f-ae27-5025f3499c56] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I1031 19:46:13.682243 447403 system_pods.go:89] "calico-node-72zqs" [8b3ac730-55ff-4d82-9f15-d56e45bf8acb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I1031 19:46:13.682255 447403 system_pods.go:89] "coredns-565d847f94-snhrc" [4dc3edee-365d-4015-af2f-8bf99f6658a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 19:46:13.682266 447403 system_pods.go:89] "etcd-calico-192756" [fe172e12-403f-454a-a351-68765d0b8f23] Running
I1031 19:46:13.682275 447403 system_pods.go:89] "kube-apiserver-calico-192756" [a7301c74-c9b8-41ca-b9cc-94ef6febfc7b] Running
I1031 19:46:13.682285 447403 system_pods.go:89] "kube-controller-manager-calico-192756" [39002219-93ec-41c8-8d04-30a4f0fd1dcc] Running
I1031 19:46:13.682296 447403 system_pods.go:89] "kube-proxy-cslqd" [d5ae8e97-621b-4d33-9bc9-a72fc3d76641] Running
I1031 19:46:13.682308 447403 system_pods.go:89] "kube-scheduler-calico-192756" [b85ca253-62cb-46f2-a745-82ce262ea261] Running
I1031 19:46:13.682316 447403 system_pods.go:89] "storage-provisioner" [d40a462a-f3c2-4164-b12b-68a9f9201e39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 19:46:13.682342 447403 retry.go:31] will retry after 50.027701973s: missing components: kube-dns
I1031 19:47:03.733717 447403 system_pods.go:86] 9 kube-system pods found
I1031 19:47:03.733771 447403 system_pods.go:89] "calico-kube-controllers-7df895d496-qjwrq" [f01e4905-2aaf-428f-ae27-5025f3499c56] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I1031 19:47:03.733788 447403 system_pods.go:89] "calico-node-72zqs" [8b3ac730-55ff-4d82-9f15-d56e45bf8acb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I1031 19:47:03.733799 447403 system_pods.go:89] "coredns-565d847f94-snhrc" [4dc3edee-365d-4015-af2f-8bf99f6658a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 19:47:03.733806 447403 system_pods.go:89] "etcd-calico-192756" [fe172e12-403f-454a-a351-68765d0b8f23] Running
I1031 19:47:03.733816 447403 system_pods.go:89] "kube-apiserver-calico-192756" [a7301c74-c9b8-41ca-b9cc-94ef6febfc7b] Running
I1031 19:47:03.733836 447403 system_pods.go:89] "kube-controller-manager-calico-192756" [39002219-93ec-41c8-8d04-30a4f0fd1dcc] Running
I1031 19:47:03.733845 447403 system_pods.go:89] "kube-proxy-cslqd" [d5ae8e97-621b-4d33-9bc9-a72fc3d76641] Running
I1031 19:47:03.733851 447403 system_pods.go:89] "kube-scheduler-calico-192756" [b85ca253-62cb-46f2-a745-82ce262ea261] Running
I1031 19:47:03.733859 447403 system_pods.go:89] "storage-provisioner" [d40a462a-f3c2-4164-b12b-68a9f9201e39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 19:47:03.733885 447403 retry.go:31] will retry after 47.463338706s: missing components: kube-dns
I1031 19:47:51.234381 447403 system_pods.go:86] 9 kube-system pods found
I1031 19:47:51.234426 447403 system_pods.go:89] "calico-kube-controllers-7df895d496-qjwrq" [f01e4905-2aaf-428f-ae27-5025f3499c56] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
I1031 19:47:51.234442 447403 system_pods.go:89] "calico-node-72zqs" [8b3ac730-55ff-4d82-9f15-d56e45bf8acb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
I1031 19:47:51.234455 447403 system_pods.go:89] "coredns-565d847f94-snhrc" [4dc3edee-365d-4015-af2f-8bf99f6658a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 19:47:51.234462 447403 system_pods.go:89] "etcd-calico-192756" [fe172e12-403f-454a-a351-68765d0b8f23] Running
I1031 19:47:51.234469 447403 system_pods.go:89] "kube-apiserver-calico-192756" [a7301c74-c9b8-41ca-b9cc-94ef6febfc7b] Running
I1031 19:47:51.234476 447403 system_pods.go:89] "kube-controller-manager-calico-192756" [39002219-93ec-41c8-8d04-30a4f0fd1dcc] Running
I1031 19:47:51.234484 447403 system_pods.go:89] "kube-proxy-cslqd" [d5ae8e97-621b-4d33-9bc9-a72fc3d76641] Running
I1031 19:47:51.234498 447403 system_pods.go:89] "kube-scheduler-calico-192756" [b85ca253-62cb-46f2-a745-82ce262ea261] Running
I1031 19:47:51.234507 447403 system_pods.go:89] "storage-provisioner" [d40a462a-f3c2-4164-b12b-68a9f9201e39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 19:47:51.237152 447403 out.go:177]
W1031 19:47:51.238636 447403 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
W1031 19:47:51.238661 447403 out.go:239] *
W1031 19:47:51.239679 447403 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1031 19:47:51.241150 447403 out.go:177]
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (552.91s)