=== RUN TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run: out/minikube-linux-amd64 start -p calico-20220531174030-6903 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker --container-runtime=containerd
E0531 17:44:25.073223 6903 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/addons-20220531171251-6903/client.crt: no such file or directory
=== CONT TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-20220531174030-6903 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker --container-runtime=containerd: exit status 80 (8m56.32472053s)
-- stdout --
* [calico-20220531174030-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=14079
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Using the docker driver based on user configuration
* Using Docker driver with the root privilege
* Starting control plane node calico-20220531174030-6903 in cluster calico-20220531174030-6903
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2048MB) ...
* Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring Calico (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
-- /stdout --
** stderr **
I0531 17:44:22.354146 191945 out.go:296] Setting OutFile to fd 1 ...
I0531 17:44:22.354260 191945 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0531 17:44:22.354266 191945 out.go:309] Setting ErrFile to fd 2...
I0531 17:44:22.354272 191945 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0531 17:44:22.354417 191945 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/bin
I0531 17:44:22.354807 191945 out.go:303] Setting JSON to false
I0531 17:44:22.357221 191945 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5213,"bootTime":1654013849,"procs":1400,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0531 17:44:22.357346 191945 start.go:125] virtualization: kvm guest
I0531 17:44:22.360057 191945 out.go:177] * [calico-20220531174030-6903] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
I0531 17:44:22.361667 191945 out.go:177] - MINIKUBE_LOCATION=14079
I0531 17:44:22.361625 191945 notify.go:193] Checking for updates...
I0531 17:44:22.364476 191945 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0531 17:44:22.365903 191945 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
I0531 17:44:22.367368 191945 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube
I0531 17:44:22.368667 191945 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0531 17:44:22.370346 191945 config.go:178] Loaded profile config "cert-expiration-20220531174046-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
I0531 17:44:22.370453 191945 config.go:178] Loaded profile config "cilium-20220531174030-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
I0531 17:44:22.370547 191945 config.go:178] Loaded profile config "kindnet-20220531174029-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
I0531 17:44:22.370606 191945 driver.go:358] Setting default libvirt URI to qemu:///system
I0531 17:44:22.414347 191945 docker.go:137] docker version: linux-20.10.16
I0531 17:44:22.414439 191945 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0531 17:44:22.593205 191945 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-31 17:44:22.466366258 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0531 17:44:22.593308 191945 docker.go:254] overlay module found
I0531 17:44:22.596062 191945 out.go:177] * Using the docker driver based on user configuration
I0531 17:44:22.597362 191945 start.go:284] selected driver: docker
I0531 17:44:22.597373 191945 start.go:806] validating driver "docker" against <nil>
I0531 17:44:22.597389 191945 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0531 17:44:22.598319 191945 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0531 17:44:22.754722 191945 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:63 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-31 17:44:22.64527168 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0531 17:44:22.754891 191945 start_flags.go:292] no existing cluster config was found, will generate one from the flags
I0531 17:44:22.755103 191945 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0531 17:44:22.757000 191945 out.go:177] * Using Docker driver with the root privilege
I0531 17:44:22.758522 191945 cni.go:95] Creating CNI manager for "calico"
I0531 17:44:22.758549 191945 start_flags.go:301] Found "Calico" CNI - setting NetworkPlugin=cni
I0531 17:44:22.758565 191945 start_flags.go:306] config:
{Name:calico-20220531174030-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:calico-20220531174030-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0531 17:44:22.761373 191945 out.go:177] * Starting control plane node calico-20220531174030-6903 in cluster calico-20220531174030-6903
I0531 17:44:22.762783 191945 cache.go:120] Beginning downloading kic base image for docker with containerd
I0531 17:44:22.764120 191945 out.go:177] * Pulling base image ...
I0531 17:44:22.765614 191945 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
I0531 17:44:22.765656 191945 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
I0531 17:44:22.765669 191945 cache.go:57] Caching tarball of preloaded images
I0531 17:44:22.765715 191945 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
I0531 17:44:22.765940 191945 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0531 17:44:22.765962 191945 cache.go:60] Finished verifying existence of preloaded tar for v1.23.6 on containerd
I0531 17:44:22.766102 191945 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/config.json ...
I0531 17:44:22.766133 191945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/config.json: {Name:mkf8a845c9f4ef689c7f45ebda102574a9d56868 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0531 17:44:22.818206 191945 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
I0531 17:44:22.818242 191945 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
I0531 17:44:22.818258 191945 cache.go:206] Successfully downloaded all kic artifacts
I0531 17:44:22.818302 191945 start.go:352] acquiring machines lock for calico-20220531174030-6903: {Name:mk35e713576d28740afd136b293c99fe6d1e5ac3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0531 17:44:22.818418 191945 start.go:356] acquired machines lock for "calico-20220531174030-6903" in 99.592µs
I0531 17:44:22.818439 191945 start.go:91] Provisioning new machine with config: &{Name:calico-20220531174030-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:calico-20220531174030-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0531 17:44:22.818545 191945 start.go:131] createHost starting for "" (driver="docker")
I0531 17:44:22.822025 191945 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
I0531 17:44:22.822324 191945 start.go:165] libmachine.API.Create for "calico-20220531174030-6903" (driver="docker")
I0531 17:44:22.822360 191945 client.go:168] LocalClient.Create starting
I0531 17:44:22.822465 191945 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem
I0531 17:44:22.822503 191945 main.go:134] libmachine: Decoding PEM data...
I0531 17:44:22.822527 191945 main.go:134] libmachine: Parsing certificate...
I0531 17:44:22.822611 191945 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem
I0531 17:44:22.822636 191945 main.go:134] libmachine: Decoding PEM data...
I0531 17:44:22.822654 191945 main.go:134] libmachine: Parsing certificate...
I0531 17:44:22.823071 191945 cli_runner.go:164] Run: docker network inspect calico-20220531174030-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0531 17:44:22.870384 191945 cli_runner.go:211] docker network inspect calico-20220531174030-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0531 17:44:22.870462 191945 network_create.go:272] running [docker network inspect calico-20220531174030-6903] to gather additional debugging logs...
I0531 17:44:22.870493 191945 cli_runner.go:164] Run: docker network inspect calico-20220531174030-6903
W0531 17:44:22.903992 191945 cli_runner.go:211] docker network inspect calico-20220531174030-6903 returned with exit code 1
I0531 17:44:22.904023 191945 network_create.go:275] error running [docker network inspect calico-20220531174030-6903]: docker network inspect calico-20220531174030-6903: exit status 1
stdout:
[]
stderr:
Error: No such network: calico-20220531174030-6903
I0531 17:44:22.904040 191945 network_create.go:277] output of [docker network inspect calico-20220531174030-6903]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: calico-20220531174030-6903
** /stderr **
I0531 17:44:22.904084 191945 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0531 17:44:22.963099 191945 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-35512cb7416d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:9e:3b:63:ba}}
I0531 17:44:22.963756 191945 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-1a877c65b8bc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:e6:36:74:9e}}
I0531 17:44:22.964495 191945 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc0005185b0] misses:0}
I0531 17:44:22.964544 191945 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0531 17:44:22.964560 191945 network_create.go:115] attempt to create docker network calico-20220531174030-6903 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I0531 17:44:22.964600 191945 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220531174030-6903
I0531 17:44:23.046115 191945 network_create.go:99] docker network calico-20220531174030-6903 192.168.67.0/24 created
I0531 17:44:23.046160 191945 kic.go:106] calculated static IP "192.168.67.2" for the "calico-20220531174030-6903" container
I0531 17:44:23.046233 191945 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0531 17:44:23.082768 191945 cli_runner.go:164] Run: docker volume create calico-20220531174030-6903 --label name.minikube.sigs.k8s.io=calico-20220531174030-6903 --label created_by.minikube.sigs.k8s.io=true
I0531 17:44:23.115605 191945 oci.go:103] Successfully created a docker volume calico-20220531174030-6903
I0531 17:44:23.115697 191945 cli_runner.go:164] Run: docker run --rm --name calico-20220531174030-6903-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220531174030-6903 --entrypoint /usr/bin/test -v calico-20220531174030-6903:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib
I0531 17:44:23.770962 191945 oci.go:107] Successfully prepared a docker volume calico-20220531174030-6903
I0531 17:44:23.771016 191945 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
I0531 17:44:23.771037 191945 kic.go:179] Starting extracting preloaded images to volume ...
I0531 17:44:23.771105 191945 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220531174030-6903:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir
I0531 17:44:31.429701 191945 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220531174030-6903:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir: (7.658377533s)
I0531 17:44:31.429751 191945 kic.go:188] duration metric: took 7.658710 seconds to extract preloaded images to volume
W0531 17:44:31.460376 191945 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0531 17:44:31.460551 191945 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0531 17:44:31.580122 191945 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220531174030-6903 --name calico-20220531174030-6903 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220531174030-6903 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220531174030-6903 --network calico-20220531174030-6903 --ip 192.168.67.2 --volume calico-20220531174030-6903:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418
I0531 17:44:32.000287 191945 cli_runner.go:164] Run: docker container inspect calico-20220531174030-6903 --format={{.State.Running}}
I0531 17:44:32.033785 191945 cli_runner.go:164] Run: docker container inspect calico-20220531174030-6903 --format={{.State.Status}}
I0531 17:44:32.075810 191945 cli_runner.go:164] Run: docker exec calico-20220531174030-6903 stat /var/lib/dpkg/alternatives/iptables
I0531 17:44:32.176825 191945 oci.go:247] the created container "calico-20220531174030-6903" has a running status.
I0531 17:44:32.176855 191945 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/calico-20220531174030-6903/id_rsa...
I0531 17:44:32.372256 191945 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/calico-20220531174030-6903/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0531 17:44:32.483001 191945 cli_runner.go:164] Run: docker container inspect calico-20220531174030-6903 --format={{.State.Status}}
I0531 17:44:32.530410 191945 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0531 17:44:32.530433 191945 kic_runner.go:114] Args: [docker exec --privileged calico-20220531174030-6903 chown docker:docker /home/docker/.ssh/authorized_keys]
I0531 17:44:32.629823 191945 cli_runner.go:164] Run: docker container inspect calico-20220531174030-6903 --format={{.State.Status}}
I0531 17:44:32.665893 191945 machine.go:88] provisioning docker machine ...
I0531 17:44:32.665938 191945 ubuntu.go:169] provisioning hostname "calico-20220531174030-6903"
I0531 17:44:32.665986 191945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531174030-6903
I0531 17:44:32.694790 191945 main.go:134] libmachine: Using SSH client type: native
I0531 17:44:32.694948 191945 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil> [] 0s} 127.0.0.1 49382 <nil> <nil>}
I0531 17:44:32.694964 191945 main.go:134] libmachine: About to run SSH command:
sudo hostname calico-20220531174030-6903 && echo "calico-20220531174030-6903" | sudo tee /etc/hostname
I0531 17:44:32.829913 191945 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-20220531174030-6903
I0531 17:44:32.829999 191945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531174030-6903
I0531 17:44:32.872617 191945 main.go:134] libmachine: Using SSH client type: native
I0531 17:44:32.872781 191945 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil> [] 0s} 127.0.0.1 49382 <nil> <nil>}
I0531 17:44:32.872803 191945 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\scalico-20220531174030-6903' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220531174030-6903/g' /etc/hosts;
else
echo '127.0.1.1 calico-20220531174030-6903' | sudo tee -a /etc/hosts;
fi
fi
I0531 17:44:32.990834 191945 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0531 17:44:32.990867 191945 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube}
I0531 17:44:32.990893 191945 ubuntu.go:177] setting up certificates
I0531 17:44:32.990903 191945 provision.go:83] configureAuth start
I0531 17:44:32.990958 191945 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220531174030-6903
I0531 17:44:33.030021 191945 provision.go:138] copyHostCerts
I0531 17:44:33.030089 191945 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem, removing ...
I0531 17:44:33.030098 191945 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem
I0531 17:44:33.030158 191945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.pem (1078 bytes)
I0531 17:44:33.030258 191945 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem, removing ...
I0531 17:44:33.030267 191945 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem
I0531 17:44:33.030299 191945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/cert.pem (1123 bytes)
I0531 17:44:33.030402 191945 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem, removing ...
I0531 17:44:33.030410 191945 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem
I0531 17:44:33.030446 191945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/key.pem (1679 bytes)
I0531 17:44:33.030515 191945 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem org=jenkins.calico-20220531174030-6903 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220531174030-6903]
I0531 17:44:33.118795 191945 provision.go:172] copyRemoteCerts
I0531 17:44:33.118849 191945 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0531 17:44:33.118877 191945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531174030-6903
I0531 17:44:33.150867 191945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/calico-20220531174030-6903/id_rsa Username:docker}
I0531 17:44:33.234680 191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
I0531 17:44:33.253701 191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0531 17:44:33.270858 191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0531 17:44:33.287300 191945 provision.go:86] duration metric: configureAuth took 296.381936ms
I0531 17:44:33.287324 191945 ubuntu.go:193] setting minikube options for container-runtime
I0531 17:44:33.287477 191945 config.go:178] Loaded profile config "calico-20220531174030-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
I0531 17:44:33.287493 191945 machine.go:91] provisioned docker machine in 621.576994ms
I0531 17:44:33.287500 191945 client.go:171] LocalClient.Create took 10.46512974s
I0531 17:44:33.287525 191945 start.go:173] duration metric: libmachine.API.Create for "calico-20220531174030-6903" took 10.465198487s
I0531 17:44:33.287538 191945 start.go:306] post-start starting for "calico-20220531174030-6903" (driver="docker")
I0531 17:44:33.287545 191945 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0531 17:44:33.287588 191945 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0531 17:44:33.287620 191945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531174030-6903
I0531 17:44:33.320073 191945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/calico-20220531174030-6903/id_rsa Username:docker}
I0531 17:44:33.402855 191945 ssh_runner.go:195] Run: cat /etc/os-release
I0531 17:44:33.406367 191945 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0531 17:44:33.406394 191945 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0531 17:44:33.406404 191945 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0531 17:44:33.406410 191945 info.go:137] Remote host: Ubuntu 20.04.4 LTS
I0531 17:44:33.406421 191945 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/addons for local assets ...
I0531 17:44:33.406465 191945 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files for local assets ...
I0531 17:44:33.406550 191945 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem -> 69032.pem in /etc/ssl/certs
I0531 17:44:33.406654 191945 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0531 17:44:33.414799 191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /etc/ssl/certs/69032.pem (1708 bytes)
I0531 17:44:33.436598 191945 start.go:309] post-start completed in 149.043256ms
I0531 17:44:33.436986 191945 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220531174030-6903
I0531 17:44:33.470983 191945 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/config.json ...
I0531 17:44:33.471296 191945 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0531 17:44:33.471340 191945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531174030-6903
I0531 17:44:33.500091 191945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/calico-20220531174030-6903/id_rsa Username:docker}
I0531 17:44:33.587261 191945 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0531 17:44:33.590846 191945 start.go:134] duration metric: createHost completed in 10.772290493s
I0531 17:44:33.590868 191945 start.go:81] releasing machines lock for "calico-20220531174030-6903", held for 10.772440336s
I0531 17:44:33.590940 191945 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220531174030-6903
I0531 17:44:33.626802 191945 ssh_runner.go:195] Run: systemctl --version
I0531 17:44:33.626849 191945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531174030-6903
I0531 17:44:33.626912 191945 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0531 17:44:33.626975 191945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531174030-6903
I0531 17:44:33.665227 191945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/calico-20220531174030-6903/id_rsa Username:docker}
I0531 17:44:33.665649 191945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/calico-20220531174030-6903/id_rsa Username:docker}
I0531 17:44:33.760669 191945 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0531 17:44:33.770346 191945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0531 17:44:33.779045 191945 docker.go:187] disabling docker service ...
I0531 17:44:33.779093 191945 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0531 17:44:33.794960 191945 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0531 17:44:33.803534 191945 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0531 17:44:33.891747 191945 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0531 17:44:33.970396 191945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0531 17:44:33.979700 191945 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0531 17:44:33.992273 191945 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0LmQiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
I0531 17:44:34.004857 191945 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0531 17:44:34.010770 191945 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0531 17:44:34.016785 191945 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0531 17:44:34.089901 191945 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0531 17:44:34.151540 191945 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
I0531 17:44:34.151603 191945 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0531 17:44:34.155023 191945 start.go:468] Will wait 60s for crictl version
I0531 17:44:34.155086 191945 ssh_runner.go:195] Run: sudo crictl version
I0531 17:44:34.180956 191945 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
stdout:
stderr:
time="2022-05-31T17:44:34Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
I0531 17:44:45.230326 191945 ssh_runner.go:195] Run: sudo crictl version
I0531 17:44:45.253990 191945 start.go:477] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.6.4
RuntimeApiVersion: v1alpha2
I0531 17:44:45.254056 191945 ssh_runner.go:195] Run: containerd --version
I0531 17:44:45.284157 191945 ssh_runner.go:195] Run: containerd --version
I0531 17:44:45.491293 191945 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
I0531 17:44:45.691954 191945 cli_runner.go:164] Run: docker network inspect calico-20220531174030-6903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0531 17:44:45.723579 191945 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
I0531 17:44:45.726893 191945 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0531 17:44:45.773556 191945 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
I0531 17:44:45.773625 191945 ssh_runner.go:195] Run: sudo crictl images --output json
I0531 17:44:45.799858 191945 containerd.go:607] all images are preloaded for containerd runtime.
I0531 17:44:45.799882 191945 containerd.go:521] Images already preloaded, skipping extraction
I0531 17:44:45.799933 191945 ssh_runner.go:195] Run: sudo crictl images --output json
I0531 17:44:45.823114 191945 containerd.go:607] all images are preloaded for containerd runtime.
I0531 17:44:45.823135 191945 cache_images.go:84] Images are preloaded, skipping loading
I0531 17:44:45.823215 191945 ssh_runner.go:195] Run: sudo crictl info
I0531 17:44:45.845103 191945 cni.go:95] Creating CNI manager for "calico"
I0531 17:44:45.845127 191945 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0531 17:44:45.845138 191945 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20220531174030-6903 NodeName:calico-20220531174030-6903 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0531 17:44:45.845281 191945 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.67.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "calico-20220531174030-6903"
kubeletExtraArgs:
node-ip: 192.168.67.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.23.6
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0531 17:44:45.845355 191945 kubeadm.go:961] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=calico-20220531174030-6903 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.23.6 ClusterName:calico-20220531174030-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
I0531 17:44:45.845395 191945 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
I0531 17:44:45.851958 191945 binaries.go:44] Found k8s binaries, skipping transfer
I0531 17:44:45.852010 191945 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0531 17:44:45.858326 191945 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (540 bytes)
I0531 17:44:45.875725 191945 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0531 17:44:45.888457 191945 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2055 bytes)
I0531 17:44:45.900030 191945 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I0531 17:44:45.902650 191945 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0531 17:44:45.969978 191945 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903 for IP: 192.168.67.2
I0531 17:44:45.970099 191945 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key
I0531 17:44:45.970154 191945 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key
I0531 17:44:45.970221 191945 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/client.key
I0531 17:44:45.970240 191945 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/client.crt with IP's: []
I0531 17:44:46.605212 191945 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/client.crt ...
I0531 17:44:46.605248 191945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/client.crt: {Name:mkc5097615ef999d9450cf2656949863c65dc5c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0531 17:44:46.605458 191945 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/client.key ...
I0531 17:44:46.605497 191945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/client.key: {Name:mk53beb1200de52df78bb8197e9ae092f5d8a8cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0531 17:44:46.605640 191945 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/apiserver.key.c7fa3a9e
I0531 17:44:46.605661 191945 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0531 17:44:46.880556 191945 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/apiserver.crt.c7fa3a9e ...
I0531 17:44:46.880596 191945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/apiserver.crt.c7fa3a9e: {Name:mke02f6794e2b012b33c1991ccb19b8dd6fec7be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0531 17:44:46.880804 191945 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/apiserver.key.c7fa3a9e ...
I0531 17:44:46.880828 191945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/apiserver.key.c7fa3a9e: {Name:mk05d8a5f1afd92af8b07b6630ac9898a2b66750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0531 17:44:46.880957 191945 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/apiserver.crt
I0531 17:44:46.881025 191945 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/apiserver.key
I0531 17:44:46.881075 191945 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/proxy-client.key
I0531 17:44:46.881090 191945 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/proxy-client.crt with IP's: []
I0531 17:44:47.159721 191945 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/proxy-client.crt ...
I0531 17:44:47.159746 191945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/proxy-client.crt: {Name:mk47149c8fd427bd098ee8c80bdf8489ada06105 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0531 17:44:47.159905 191945 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/proxy-client.key ...
I0531 17:44:47.159917 191945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/proxy-client.key: {Name:mkbec5b9b21d4f401dd90b7d689951c475a7e3af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0531 17:44:47.160068 191945 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem (1338 bytes)
W0531 17:44:47.160102 191945 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903_empty.pem, impossibly tiny 0 bytes
I0531 17:44:47.160113 191945 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca-key.pem (1675 bytes)
I0531 17:44:47.160138 191945 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/ca.pem (1078 bytes)
I0531 17:44:47.160166 191945 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/cert.pem (1123 bytes)
I0531 17:44:47.160188 191945 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/key.pem (1679 bytes)
I0531 17:44:47.160223 191945 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem (1708 bytes)
I0531 17:44:47.160747 191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0531 17:44:47.225589 191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0531 17:44:47.246657 191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0531 17:44:47.282322 191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/profiles/calico-20220531174030-6903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0531 17:44:47.299593 191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0531 17:44:47.317655 191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0531 17:44:47.335715 191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0531 17:44:47.450769 191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0531 17:44:47.569939 191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0531 17:44:47.587183 191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/certs/6903.pem --> /usr/share/ca-certificates/6903.pem (1338 bytes)
I0531 17:44:47.603541 191945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/files/etc/ssl/certs/69032.pem --> /usr/share/ca-certificates/69032.pem (1708 bytes)
I0531 17:44:47.619803 191945 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
I0531 17:44:47.631602 191945 ssh_runner.go:195] Run: openssl version
I0531 17:44:47.636108 191945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0531 17:44:47.642735 191945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0531 17:44:47.645568 191945 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:13 /usr/share/ca-certificates/minikubeCA.pem
I0531 17:44:47.645604 191945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0531 17:44:47.650018 191945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0531 17:44:47.657148 191945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6903.pem && ln -fs /usr/share/ca-certificates/6903.pem /etc/ssl/certs/6903.pem"
I0531 17:44:47.663941 191945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6903.pem
I0531 17:44:47.666681 191945 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:17 /usr/share/ca-certificates/6903.pem
I0531 17:44:47.666722 191945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6903.pem
I0531 17:44:47.671288 191945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6903.pem /etc/ssl/certs/51391683.0"
I0531 17:44:47.677962 191945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69032.pem && ln -fs /usr/share/ca-certificates/69032.pem /etc/ssl/certs/69032.pem"
I0531 17:44:47.684807 191945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69032.pem
I0531 17:44:47.687610 191945 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:17 /usr/share/ca-certificates/69032.pem
I0531 17:44:47.687650 191945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69032.pem
I0531 17:44:47.692218 191945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69032.pem /etc/ssl/certs/3ec20f2e.0"
I0531 17:44:47.699197 191945 kubeadm.go:395] StartCluster: {Name:calico-20220531174030-6903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:calico-20220531174030-6903 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0531 17:44:47.699283 191945 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0531 17:44:47.699312 191945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0531 17:44:47.723918 191945 cri.go:87] found id: ""
I0531 17:44:47.723965 191945 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0531 17:44:47.730813 191945 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0531 17:44:47.738631 191945 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0531 17:44:47.738680 191945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0531 17:44:47.745010 191945 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0531 17:44:47.745044 191945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0531 17:44:48.069508 191945 out.go:204] - Generating certificates and keys ...
I0531 17:44:51.103858 191945 out.go:204] - Booting up control plane ...
I0531 17:45:03.153008 191945 out.go:204] - Configuring RBAC rules ...
I0531 17:45:03.567301 191945 cni.go:95] Creating CNI manager for "calico"
I0531 17:45:03.569348 191945 out.go:177] * Configuring Calico (Container Networking Interface) ...
I0531 17:45:03.570760 191945 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
I0531 17:45:03.570785 191945 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202049 bytes)
I0531 17:45:03.588499 191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0531 17:45:05.707761 191945 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (2.119199621s)
I0531 17:45:05.707831 191945 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0531 17:45:05.707916 191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0531 17:45:05.707925 191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=calico-20220531174030-6903 minikube.k8s.io/updated_at=2022_05_31T17_45_05_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0531 17:45:05.716640 191945 ops.go:34] apiserver oom_adj: -16
I0531 17:45:05.810294 191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0531 17:45:06.374615 191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0531 17:45:06.874730 191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0531 17:45:07.374505 191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0531 17:45:07.874926 191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0531 17:45:08.374114 191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0531 17:45:08.874631 191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0531 17:45:09.374861 191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0531 17:45:09.874161 191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0531 17:45:10.374874 191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0531 17:45:10.875022 191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0531 17:45:11.374069 191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0531 17:45:11.874299 191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0531 17:45:12.374731 191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0531 17:45:12.874431 191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0531 17:45:13.374193 191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0531 17:45:13.874298 191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0531 17:45:14.374091 191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0531 17:45:14.874657 191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0531 17:45:15.374294 191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0531 17:45:15.874195 191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0531 17:45:16.374091 191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0531 17:45:16.875070 191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0531 17:45:17.374644 191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0531 17:45:17.874891 191945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0531 17:45:17.945566 191945 kubeadm.go:1045] duration metric: took 12.237702399s to wait for elevateKubeSystemPrivileges.
I0531 17:45:17.945598 191945 kubeadm.go:397] StartCluster complete in 30.246407438s
I0531 17:45:17.945620 191945 settings.go:142] acquiring lock: {Name:mk0d3e0d203b63f9bb5d393308cc5097ea57e33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0531 17:45:17.945717 191945 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig
I0531 17:45:17.946631 191945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/kubeconfig: {Name:mk543aa0e1f519d6e02774aab3a4da7ddc0c2230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0531 17:45:18.462609 191945 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220531174030-6903" rescaled to 1
I0531 17:45:18.462658 191945 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0531 17:45:18.462670 191945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0531 17:45:18.465244 191945 out.go:177] * Verifying Kubernetes components...
I0531 17:45:18.462761 191945 addons.go:415] enableAddons start: toEnable=map[], additional=[]
I0531 17:45:18.462922 191945 config.go:178] Loaded profile config "calico-20220531174030-6903": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
I0531 17:45:18.466777 191945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0531 17:45:18.466801 191945 addons.go:65] Setting storage-provisioner=true in profile "calico-20220531174030-6903"
I0531 17:45:18.466819 191945 addons.go:65] Setting default-storageclass=true in profile "calico-20220531174030-6903"
I0531 17:45:18.466826 191945 addons.go:153] Setting addon storage-provisioner=true in "calico-20220531174030-6903"
W0531 17:45:18.466833 191945 addons.go:165] addon storage-provisioner should already be in state true
I0531 17:45:18.466839 191945 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220531174030-6903"
I0531 17:45:18.466883 191945 host.go:66] Checking if "calico-20220531174030-6903" exists ...
I0531 17:45:18.467269 191945 cli_runner.go:164] Run: docker container inspect calico-20220531174030-6903 --format={{.State.Status}}
I0531 17:45:18.467455 191945 cli_runner.go:164] Run: docker container inspect calico-20220531174030-6903 --format={{.State.Status}}
I0531 17:45:18.515420 191945 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0531 17:45:18.516906 191945 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0531 17:45:18.516928 191945 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0531 17:45:18.516979 191945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531174030-6903
I0531 17:45:18.517318 191945 addons.go:153] Setting addon default-storageclass=true in "calico-20220531174030-6903"
W0531 17:45:18.517338 191945 addons.go:165] addon default-storageclass should already be in state true
I0531 17:45:18.517365 191945 host.go:66] Checking if "calico-20220531174030-6903" exists ...
I0531 17:45:18.517870 191945 cli_runner.go:164] Run: docker container inspect calico-20220531174030-6903 --format={{.State.Status}}
I0531 17:45:18.552388 191945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.67.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0531 17:45:18.553698 191945 node_ready.go:35] waiting up to 5m0s for node "calico-20220531174030-6903" to be "Ready" ...
I0531 17:45:18.557538 191945 node_ready.go:49] node "calico-20220531174030-6903" has status "Ready":"True"
I0531 17:45:18.557559 191945 node_ready.go:38] duration metric: took 3.832995ms waiting for node "calico-20220531174030-6903" to be "Ready" ...
I0531 17:45:18.557569 191945 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0531 17:45:18.562448 191945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/calico-20220531174030-6903/id_rsa Username:docker}
I0531 17:45:18.568738 191945 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace to be "Ready" ...
I0531 17:45:18.574270 191945 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
I0531 17:45:18.574289 191945 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0531 17:45:18.574338 191945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531174030-6903
I0531 17:45:18.618103 191945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14079-3520-bc7278193255a66f30064dc56185dbbc87656da8/.minikube/machines/calico-20220531174030-6903/id_rsa Username:docker}
I0531 17:45:18.731087 191945 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0531 17:45:18.825046 191945 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0531 17:45:19.810559 191945 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.67.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.258134911s)
I0531 17:45:19.810622 191945 start.go:806] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
I0531 17:45:19.848626 191945 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.117501518s)
I0531 17:45:19.848648 191945 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.023563342s)
I0531 17:45:19.850718 191945 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0531 17:45:19.852200 191945 addons.go:417] enableAddons completed in 1.389440529s
I0531 17:45:20.582901 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:45:22.583432 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:45:24.602429 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:45:27.084878 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:45:29.583726 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:45:31.583899 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:45:33.584750 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:45:36.106316 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:45:38.583210 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:45:41.082979 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:45:43.113413 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:45:45.582809 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:45:47.583911 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:45:49.587161 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:45:52.083635 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:45:54.083775 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:45:56.085548 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:45:58.582810 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:46:01.083232 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:46:03.582938 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:46:05.584129 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:46:08.083014 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:46:10.583133 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:46:12.583739 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:46:15.083603 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:46:17.583224 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:46:19.583253 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:46:22.082892 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:46:24.083116 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:46:26.083573 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:46:28.083613 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:46:30.582588 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:46:32.583008 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:46:34.583279 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:46:36.583522 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:46:38.585243 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:46:41.082825 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:46:43.082945 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:46:45.582967 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:46:47.583386 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:46:50.082327 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:46:52.082813 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:46:54.082883 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:46:56.583127 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:46:58.583221 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:47:01.082598 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:47:03.082696 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:47:05.082725 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:47:07.082912 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:47:09.582618 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:47:11.583326 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:47:13.583375 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:47:16.083457 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:47:18.583503 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:47:21.083316 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:47:23.083531 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:47:25.083582 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:47:27.583027 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:47:29.583132 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:47:32.083360 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:47:34.084807 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:47:36.583585 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:47:39.082977 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:47:41.583547 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:47:44.083010 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:47:46.583573 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:47:49.085253 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:47:51.582961 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:47:54.083245 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:47:56.583389 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:47:59.084014 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:48:01.582977 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:48:03.583662 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:48:05.583842 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:48:08.082954 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:48:10.083667 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:48:12.582524 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:48:14.582599 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:48:16.582700 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:48:18.583535 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:48:21.083270 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:48:23.083662 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:48:25.583474 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:48:27.583937 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:48:30.082350 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:48:32.082793 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:48:34.582514 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:48:36.583228 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:48:38.583401 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:48:41.083185 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:48:43.083246 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:48:45.583014 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:48:48.082931 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:48:50.582764 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:48:52.583042 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:48:55.082803 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:48:57.083015 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:48:59.083905 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:49:01.582839 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:49:03.583046 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:49:06.083408 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:49:08.583769 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:49:11.082575 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:49:13.582836 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:49:16.083132 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:49:18.083443 191945 pod_ready.go:102] pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace has status "Ready":"False"
I0531 17:49:18.586657 191945 pod_ready.go:81] duration metric: took 4m0.017849669s waiting for pod "calico-kube-controllers-8594699699-h65gz" in "kube-system" namespace to be "Ready" ...
E0531 17:49:18.586682 191945 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
I0531 17:49:18.586690 191945 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-49qlm" in "kube-system" namespace to be "Ready" ...
I0531 17:49:20.597662 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:49:22.597812 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:49:25.097004 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:49:27.098107 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:49:29.597262 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:49:31.597657 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:49:33.600025 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:49:36.097452 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:49:38.597030 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:49:40.597784 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:49:43.097372 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:49:45.097482 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:49:47.097511 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:49:49.597483 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:49:51.598479 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:49:54.097256 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:49:56.098728 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:49:58.597619 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:50:01.096983 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:50:03.097896 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:50:05.598139 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:50:07.598205 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:50:10.097686 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:50:12.597355 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:50:15.098066 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:50:17.599280 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:50:20.098147 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:50:22.597586 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:50:24.598028 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:50:27.097616 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:50:29.098059 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:50:31.597426 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:50:34.097095 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:50:36.598046 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:50:39.098179 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:50:41.597200 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:50:43.597754 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:50:45.599936 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:50:48.097343 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:50:50.097699 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:50:52.597760 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:50:54.598142 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:50:57.098450 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:50:59.597440 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:51:02.097680 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:51:04.097711 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:51:06.598215 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:51:09.097830 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:51:11.098299 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:51:13.597754 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:51:15.597856 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:51:18.097580 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:51:20.097709 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:51:22.597788 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:51:25.097876 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:51:27.097933 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:51:29.597703 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:51:31.597939 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:51:34.097057 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:51:36.098091 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:51:38.099257 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:51:40.597798 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:51:42.598296 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:51:45.097903 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:51:47.597361 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:51:49.597897 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:51:51.597941 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:51:54.097538 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:51:56.597683 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:51:58.597729 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:52:01.097659 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:52:03.597384 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:52:05.598050 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:52:08.097350 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:52:10.097788 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:52:12.597579 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:52:15.096815 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:52:17.097812 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:52:19.597872 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:52:22.098274 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:52:24.597635 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:52:27.097550 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:52:29.598258 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:52:32.097513 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:52:34.097756 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:52:36.598024 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:52:39.097134 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:52:41.097648 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:52:43.597579 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:52:46.098458 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:52:48.597693 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:52:50.597817 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:52:53.097571 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:52:55.598122 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:52:58.097671 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:53:00.596848 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:53:02.597021 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:53:04.597970 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:53:06.598291 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:53:09.097703 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:53:11.597830 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:53:14.098422 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:53:16.597217 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:53:18.598282 191945 pod_ready.go:102] pod "calico-node-49qlm" in "kube-system" namespace has status "Ready":"False"
I0531 17:53:18.602992 191945 pod_ready.go:81] duration metric: took 4m0.016291869s waiting for pod "calico-node-49qlm" in "kube-system" namespace to be "Ready" ...
E0531 17:53:18.603015 191945 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
I0531 17:53:18.603032 191945 pod_ready.go:38] duration metric: took 8m0.045451384s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0531 17:53:18.605621 191945 out.go:177]
W0531 17:53:18.607399 191945 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
W0531 17:53:18.607422 191945 out.go:239] *
W0531 17:53:18.608394 191945 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0531 17:53:18.609565 191945 out.go:177]
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (536.35s)