=== RUN TestOffline
=== PAUSE TestOffline
=== CONT TestOffline
aab_offline_test.go:55: (dbg) Run: out/minikube-linux-amd64 start -p offline-containerd-20220516230448-297512 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker --container-runtime=containerd
=== CONT TestOffline
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p offline-containerd-20220516230448-297512 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker --container-runtime=containerd: exit status 80 (4m56.391088945s)
-- stdout --
* [offline-containerd-20220516230448-297512] minikube v1.26.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=12739
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Using the docker driver based on user configuration
* Using Docker driver with the root privilege
* Starting control plane node offline-containerd-20220516230448-297512 in cluster offline-containerd-20220516230448-297512
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2048MB) ...
* Found network options:
- HTTP_PROXY=172.16.1.1:1
* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
* Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
- env HTTP_PROXY=172.16.1.1:1
- kubelet.cni-conf-dir=/etc/cni/net.mk
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
-- /stdout --
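The stdout above shows the test pointing `HTTP_PROXY` at `172.16.1.1:1`, an address that accepts no connections, so image pulls are forced through a dead proxy to simulate an offline host. A quick reachability probe illustrates why nothing gets through it; `probe` is a helper invented for this sketch, not part of minikube:

```shell
# probe HOST PORT: report whether a TCP connection can be opened.
# Uses bash's /dev/tcp redirection; requires bash and coreutils `timeout`.
probe() {
  timeout 2 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null \
    && echo reachable \
    || echo unreachable
}

# The proxy address used by the test (expected to be dead):
probe 172.16.1.1 1
```

Any pull routed through an address that fails this probe will hang until its own timeout fires, which is what drives the eventual `exit status 80` after ~5 minutes.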
** stderr **
I0516 23:04:48.630136 422378 out.go:296] Setting OutFile to fd 1 ...
I0516 23:04:48.630272 422378 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0516 23:04:48.630284 422378 out.go:309] Setting ErrFile to fd 2...
I0516 23:04:48.630291 422378 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0516 23:04:48.630388 422378 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/bin
I0516 23:04:48.630656 422378 out.go:303] Setting JSON to false
I0516 23:04:48.631805 422378 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":13639,"bootTime":1652728650,"procs":429,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0516 23:04:48.631879 422378 start.go:125] virtualization: kvm guest
I0516 23:04:48.633808 422378 out.go:177] * [offline-containerd-20220516230448-297512] minikube v1.26.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
I0516 23:04:48.637429 422378 out.go:177] - MINIKUBE_LOCATION=12739
I0516 23:04:48.635861 422378 notify.go:193] Checking for updates...
I0516 23:04:48.639895 422378 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0516 23:04:48.642007 422378 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/kubeconfig
I0516 23:04:48.654973 422378 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube
I0516 23:04:48.656511 422378 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0516 23:04:48.657930 422378 driver.go:358] Setting default libvirt URI to qemu:///system
I0516 23:04:48.704703 422378 docker.go:137] docker version: linux-20.10.16
I0516 23:04:48.704846 422378 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0516 23:04:48.830295 422378 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:57 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:42 SystemTime:2022-05-16 23:04:48.736957268 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0516 23:04:48.830421 422378 docker.go:254] overlay module found
I0516 23:04:48.832213 422378 out.go:177] * Using the docker driver based on user configuration
I0516 23:04:48.833390 422378 start.go:284] selected driver: docker
I0516 23:04:48.833412 422378 start.go:806] validating driver "docker" against <nil>
I0516 23:04:48.833430 422378 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0516 23:04:48.834244 422378 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0516 23:04:48.959355 422378 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:57 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:39 SystemTime:2022-05-16 23:04:48.867850106 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0516 23:04:48.959459 422378 start_flags.go:292] no existing cluster config was found, will generate one from the flags
I0516 23:04:48.959708 422378 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0516 23:04:48.960996 422378 out.go:177] * Using Docker driver with the root privilege
I0516 23:04:48.962152 422378 cni.go:95] Creating CNI manager for ""
I0516 23:04:48.962180 422378 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I0516 23:04:48.962200 422378 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I0516 23:04:48.962212 422378 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I0516 23:04:48.962218 422378 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
I0516 23:04:48.962246 422378 start_flags.go:306] config:
{Name:offline-containerd-20220516230448-297512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:offline-containerd-20220516230448-297512 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0516 23:04:48.964123 422378 out.go:177] * Starting control plane node offline-containerd-20220516230448-297512 in cluster offline-containerd-20220516230448-297512
I0516 23:04:48.965355 422378 cache.go:120] Beginning downloading kic base image for docker with containerd
I0516 23:04:48.966690 422378 out.go:177] * Pulling base image ...
I0516 23:04:48.968226 422378 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
I0516 23:04:48.968276 422378 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
I0516 23:04:48.968300 422378 cache.go:57] Caching tarball of preloaded images
I0516 23:04:48.968319 422378 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
I0516 23:04:48.968517 422378 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0516 23:04:48.968536 422378 cache.go:60] Finished verifying existence of preloaded tar for v1.23.6 on containerd
I0516 23:04:48.968891 422378 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/config.json ...
I0516 23:04:48.968920 422378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/config.json: {Name:mkef5d403d99053cbab09d82df581cafcd1abeea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0516 23:04:49.020084 422378 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon, skipping pull
I0516 23:04:49.020120 422378 cache.go:141] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in daemon, skipping load
I0516 23:04:49.020138 422378 cache.go:206] Successfully downloaded all kic artifacts
I0516 23:04:49.020192 422378 start.go:352] acquiring machines lock for offline-containerd-20220516230448-297512: {Name:mk54392beb1f48049845ec86557088949fde186f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0516 23:04:49.020341 422378 start.go:356] acquired machines lock for "offline-containerd-20220516230448-297512" in 121.752µs
I0516 23:04:49.020372 422378 start.go:91] Provisioning new machine with config: &{Name:offline-containerd-20220516230448-297512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:offline-containerd-20220516230448-297512 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0516 23:04:49.020502 422378 start.go:131] createHost starting for "" (driver="docker")
I0516 23:04:49.022363 422378 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
I0516 23:04:49.022577 422378 start.go:165] libmachine.API.Create for "offline-containerd-20220516230448-297512" (driver="docker")
I0516 23:04:49.022608 422378 client.go:168] LocalClient.Create starting
I0516 23:04:49.022695 422378 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/ca.pem
I0516 23:04:49.022740 422378 main.go:134] libmachine: Decoding PEM data...
I0516 23:04:49.022766 422378 main.go:134] libmachine: Parsing certificate...
I0516 23:04:49.022855 422378 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/cert.pem
I0516 23:04:49.022884 422378 main.go:134] libmachine: Decoding PEM data...
I0516 23:04:49.022902 422378 main.go:134] libmachine: Parsing certificate...
I0516 23:04:49.023311 422378 cli_runner.go:164] Run: docker network inspect offline-containerd-20220516230448-297512 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0516 23:04:49.058078 422378 cli_runner.go:211] docker network inspect offline-containerd-20220516230448-297512 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0516 23:04:49.058162 422378 network_create.go:272] running [docker network inspect offline-containerd-20220516230448-297512] to gather additional debugging logs...
I0516 23:04:49.058193 422378 cli_runner.go:164] Run: docker network inspect offline-containerd-20220516230448-297512
W0516 23:04:49.093717 422378 cli_runner.go:211] docker network inspect offline-containerd-20220516230448-297512 returned with exit code 1
I0516 23:04:49.093750 422378 network_create.go:275] error running [docker network inspect offline-containerd-20220516230448-297512]: docker network inspect offline-containerd-20220516230448-297512: exit status 1
stdout:
[]
stderr:
Error: No such network: offline-containerd-20220516230448-297512
I0516 23:04:49.093777 422378 network_create.go:277] output of [docker network inspect offline-containerd-20220516230448-297512]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: offline-containerd-20220516230448-297512
** /stderr **
I0516 23:04:49.093831 422378 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0516 23:04:49.131458 422378 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000133038] misses:0}
I0516 23:04:49.131515 422378 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0516 23:04:49.131537 422378 network_create.go:115] attempt to create docker network offline-containerd-20220516230448-297512 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0516 23:04:49.131592 422378 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-containerd-20220516230448-297512
W0516 23:04:49.167215 422378 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-containerd-20220516230448-297512 returned with exit code 1
W0516 23:04:49.167297 422378 network_create.go:107] failed to create docker network offline-containerd-20220516230448-297512 192.168.49.0/24, will retry: subnet is taken
I0516 23:04:49.167840 422378 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-5e23b5e0c3ba IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:8f:7d:26:ce}}
I0516 23:04:49.168344 422378 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000133038 192.168.58.0:0xc0004aa2d8] misses:0}
I0516 23:04:49.168387 422378 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0516 23:04:49.168406 422378 network_create.go:115] attempt to create docker network offline-containerd-20220516230448-297512 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0516 23:04:49.168470 422378 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-containerd-20220516230448-297512
I0516 23:04:49.253558 422378 network_create.go:99] docker network offline-containerd-20220516230448-297512 192.168.58.0/24 created
I0516 23:04:49.253601 422378 kic.go:106] calculated static IP "192.168.58.2" for the "offline-containerd-20220516230448-297512" container
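The lines above show the network-selection pattern in action: `192.168.49.0/24` is already taken by another bridge, so the search steps to `192.168.58.0/24`, takes `.1` as the gateway, and assigns `.2` to the node. A simplified shell sketch of that fallback; the `+9` step and the single-entry "taken" list are assumptions inferred from this log, not minikube's actual implementation:

```shell
# Candidate subnets start at 192.168.49.0/24; when one is taken, step the
# third octet and retry. Here "taken" is a hard-coded stand-in for the real
# check against existing docker networks and host routes.
taken="192.168.49.0/24"
third=49
subnet="192.168.$third.0/24"
while [ "$subnet" = "$taken" ]; do
  third=$(( third + 9 ))      # step size inferred from the 49 -> 58 jump above
  subnet="192.168.$third.0/24"
done
echo "subnet:    $subnet"
echo "gateway:   192.168.$third.1"
echo "static IP: 192.168.$third.2"
```

Run against the state in this log, the sketch lands on `192.168.58.0/24` with gateway `.1` and node IP `.2`, matching the `docker network create` and `kic.go` lines above.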
I0516 23:04:49.253659 422378 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0516 23:04:49.293993 422378 cli_runner.go:164] Run: docker volume create offline-containerd-20220516230448-297512 --label name.minikube.sigs.k8s.io=offline-containerd-20220516230448-297512 --label created_by.minikube.sigs.k8s.io=true
I0516 23:04:49.335186 422378 oci.go:103] Successfully created a docker volume offline-containerd-20220516230448-297512
I0516 23:04:49.335295 422378 cli_runner.go:164] Run: docker run --rm --name offline-containerd-20220516230448-297512-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-containerd-20220516230448-297512 --entrypoint /usr/bin/test -v offline-containerd-20220516230448-297512:/var gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c -d /var/lib
I0516 23:04:50.027821 422378 oci.go:107] Successfully prepared a docker volume offline-containerd-20220516230448-297512
I0516 23:04:50.027883 422378 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
I0516 23:04:50.027924 422378 kic.go:179] Starting extracting preloaded images to volume ...
I0516 23:04:50.027987 422378 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-containerd-20220516230448-297512:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c -I lz4 -xf /preloaded.tar -C /extractDir
I0516 23:05:11.449228 422378 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-containerd-20220516230448-297512:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c -I lz4 -xf /preloaded.tar -C /extractDir: (21.42100768s)
I0516 23:05:11.449333 422378 kic.go:188] duration metric: took 21.421403 seconds to extract preloaded images to volume
W0516 23:05:11.449507 422378 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0516 23:05:11.449667 422378 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0516 23:05:11.569959 422378 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname offline-containerd-20220516230448-297512 --name offline-containerd-20220516230448-297512 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-containerd-20220516230448-297512 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=offline-containerd-20220516230448-297512 --network offline-containerd-20220516230448-297512 --ip 192.168.58.2 --volume offline-containerd-20220516230448-297512:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c
I0516 23:05:12.240942 422378 cli_runner.go:164] Run: docker container inspect offline-containerd-20220516230448-297512 --format={{.State.Running}}
I0516 23:05:12.285082 422378 cli_runner.go:164] Run: docker container inspect offline-containerd-20220516230448-297512 --format={{.State.Status}}
I0516 23:05:12.314243 422378 cli_runner.go:164] Run: docker exec offline-containerd-20220516230448-297512 stat /var/lib/dpkg/alternatives/iptables
I0516 23:05:12.387753 422378 oci.go:144] the created container "offline-containerd-20220516230448-297512" has a running status.
I0516 23:05:12.387790 422378 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/offline-containerd-20220516230448-297512/id_rsa...
I0516 23:05:12.532570 422378 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/offline-containerd-20220516230448-297512/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0516 23:05:12.663224 422378 cli_runner.go:164] Run: docker container inspect offline-containerd-20220516230448-297512 --format={{.State.Status}}
I0516 23:05:12.717501 422378 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0516 23:05:12.717525 422378 kic_runner.go:114] Args: [docker exec --privileged offline-containerd-20220516230448-297512 chown docker:docker /home/docker/.ssh/authorized_keys]
I0516 23:05:12.823656 422378 cli_runner.go:164] Run: docker container inspect offline-containerd-20220516230448-297512 --format={{.State.Status}}
I0516 23:05:12.881588 422378 machine.go:88] provisioning docker machine ...
I0516 23:05:12.881631 422378 ubuntu.go:169] provisioning hostname "offline-containerd-20220516230448-297512"
I0516 23:05:12.881699 422378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220516230448-297512
I0516 23:05:12.952180 422378 main.go:134] libmachine: Using SSH client type: native
I0516 23:05:12.952474 422378 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil> [] 0s} 127.0.0.1 49568 <nil> <nil>}
I0516 23:05:12.952509 422378 main.go:134] libmachine: About to run SSH command:
sudo hostname offline-containerd-20220516230448-297512 && echo "offline-containerd-20220516230448-297512" | sudo tee /etc/hostname
I0516 23:05:13.134171 422378 main.go:134] libmachine: SSH cmd err, output: <nil>: offline-containerd-20220516230448-297512
I0516 23:05:13.134382 422378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220516230448-297512
I0516 23:05:13.217206 422378 main.go:134] libmachine: Using SSH client type: native
I0516 23:05:13.217422 422378 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil> [] 0s} 127.0.0.1 49568 <nil> <nil>}
I0516 23:05:13.217450 422378 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\soffline-containerd-20220516230448-297512' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 offline-containerd-20220516230448-297512/g' /etc/hosts;
else
echo '127.0.1.1 offline-containerd-20220516230448-297512' | sudo tee -a /etc/hosts;
fi
fi
I0516 23:05:13.414308 422378 main.go:134] libmachine: SSH cmd err, output: <nil>:
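The SSH command above edits `/etc/hosts` inside the guest so the container resolves its own hostname. The same logic can be exercised against a scratch file without a container; `NAME` and the seeded file contents here are illustrative:

```shell
NAME=offline-demo
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

# If no entry for NAME exists, either rewrite the 127.0.1.1 line in place
# or append a fresh one (mirrors the branch structure in the log).
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```

Running this twice leaves the file unchanged the second time, which is why minikube can safely re-run the command on every provision.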
I0516 23:05:13.414345 422378 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube}
I0516 23:05:13.414377 422378 ubuntu.go:177] setting up certificates
I0516 23:05:13.414392 422378 provision.go:83] configureAuth start
I0516 23:05:13.414457 422378 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" offline-containerd-20220516230448-297512
I0516 23:05:13.490140 422378 provision.go:138] copyHostCerts
I0516 23:05:13.490204 422378 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/ca.pem, removing ...
I0516 23:05:13.490217 422378 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/ca.pem
I0516 23:05:13.490278 422378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/ca.pem (1082 bytes)
I0516 23:05:13.490414 422378 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cert.pem, removing ...
I0516 23:05:13.490424 422378 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cert.pem
I0516 23:05:13.490460 422378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cert.pem (1123 bytes)
I0516 23:05:13.490555 422378 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/key.pem, removing ...
I0516 23:05:13.490563 422378 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/key.pem
I0516 23:05:13.490596 422378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/key.pem (1679 bytes)
I0516 23:05:13.490680 422378 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/ca-key.pem org=jenkins.offline-containerd-20220516230448-297512 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube offline-containerd-20220516230448-297512]
I0516 23:05:13.766239 422378 provision.go:172] copyRemoteCerts
I0516 23:05:13.766293 422378 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0516 23:05:13.766400 422378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220516230448-297512
I0516 23:05:13.796678 422378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49568 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/offline-containerd-20220516230448-297512/id_rsa Username:docker}
I0516 23:05:13.888756 422378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0516 23:05:13.906265 422378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
I0516 23:05:13.923452 422378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0516 23:05:13.939725 422378 provision.go:86] duration metric: configureAuth took 525.322122ms
I0516 23:05:13.939748 422378 ubuntu.go:193] setting minikube options for container-runtime
I0516 23:05:13.939964 422378 config.go:178] Loaded profile config "offline-containerd-20220516230448-297512": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
I0516 23:05:13.939982 422378 machine.go:91] provisioned docker machine in 1.058368101s
I0516 23:05:13.939991 422378 client.go:171] LocalClient.Create took 24.917371161s
I0516 23:05:13.940014 422378 start.go:173] duration metric: libmachine.API.Create for "offline-containerd-20220516230448-297512" took 24.917432708s
I0516 23:05:13.940027 422378 start.go:306] post-start starting for "offline-containerd-20220516230448-297512" (driver="docker")
I0516 23:05:13.940041 422378 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0516 23:05:13.940092 422378 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0516 23:05:13.940136 422378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220516230448-297512
I0516 23:05:13.969774 422378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49568 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/offline-containerd-20220516230448-297512/id_rsa Username:docker}
I0516 23:05:14.065382 422378 ssh_runner.go:195] Run: cat /etc/os-release
I0516 23:05:14.068179 422378 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0516 23:05:14.068206 422378 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0516 23:05:14.068221 422378 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0516 23:05:14.068228 422378 info.go:137] Remote host: Ubuntu 20.04.4 LTS
I0516 23:05:14.068239 422378 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/addons for local assets ...
I0516 23:05:14.068299 422378 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/files for local assets ...
I0516 23:05:14.068412 422378 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/files/etc/ssl/certs/2975122.pem -> 2975122.pem in /etc/ssl/certs
I0516 23:05:14.068511 422378 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0516 23:05:14.075232 422378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/files/etc/ssl/certs/2975122.pem --> /etc/ssl/certs/2975122.pem (1708 bytes)
I0516 23:05:14.091984 422378 start.go:309] post-start completed in 151.906239ms
I0516 23:05:14.092340 422378 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" offline-containerd-20220516230448-297512
I0516 23:05:14.130306 422378 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/config.json ...
I0516 23:05:14.130592 422378 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0516 23:05:14.130637 422378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220516230448-297512
I0516 23:05:14.160527 422378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49568 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/offline-containerd-20220516230448-297512/id_rsa Username:docker}
I0516 23:05:14.249218 422378 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0516 23:05:14.253160 422378 start.go:134] duration metric: createHost completed in 25.232640542s
I0516 23:05:14.253180 422378 start.go:81] releasing machines lock for "offline-containerd-20220516230448-297512", held for 25.232821107s
I0516 23:05:14.253260 422378 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" offline-containerd-20220516230448-297512
I0516 23:05:14.287634 422378 out.go:177] * Found network options:
I0516 23:05:14.289389 422378 out.go:177] - HTTP_PROXY=172.16.1.1:1
W0516 23:05:14.290825 422378 out.go:239] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.58.2).
! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.58.2).
I0516 23:05:14.292203 422378 out.go:177] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
I0516 23:05:14.293563 422378 ssh_runner.go:195] Run: systemctl --version
I0516 23:05:14.293622 422378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220516230448-297512
I0516 23:05:14.293623 422378 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0516 23:05:14.293711 422378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220516230448-297512
I0516 23:05:14.327443 422378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49568 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/offline-containerd-20220516230448-297512/id_rsa Username:docker}
I0516 23:05:14.331516 422378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49568 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/offline-containerd-20220516230448-297512/id_rsa Username:docker}
I0516 23:05:14.440763 422378 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0516 23:05:14.451087 422378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0516 23:05:14.460666 422378 docker.go:187] disabling docker service ...
I0516 23:05:14.460713 422378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0516 23:05:14.479278 422378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0516 23:05:14.488460 422378 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0516 23:05:14.579349 422378 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0516 23:05:14.672331 422378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0516 23:05:14.685429 422378 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0516 23:05:14.700913 422378 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
I0516 23:05:14.716913 422378 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0516 23:05:14.724534 422378 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0516 23:05:14.731271 422378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0516 23:05:14.814528 422378 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0516 23:05:14.914566 422378 start.go:456] Will wait 60s for socket path /run/containerd/containerd.sock
I0516 23:05:14.914635 422378 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0516 23:05:14.918714 422378 start.go:477] Will wait 60s for crictl version
I0516 23:05:14.918767 422378 ssh_runner.go:195] Run: sudo crictl version
I0516 23:05:14.946512 422378 start.go:486] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.6.4
RuntimeApiVersion: v1alpha2
I0516 23:05:14.946564 422378 ssh_runner.go:195] Run: containerd --version
I0516 23:05:14.975475 422378 ssh_runner.go:195] Run: containerd --version
I0516 23:05:15.005545 422378 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
I0516 23:05:15.006948 422378 out.go:177] - env HTTP_PROXY=172.16.1.1:1
I0516 23:05:15.008308 422378 cli_runner.go:164] Run: docker network inspect offline-containerd-20220516230448-297512 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0516 23:05:15.045724 422378 ssh_runner.go:195] Run: grep 192.168.58.1 host.minikube.internal$ /etc/hosts
I0516 23:05:15.050644 422378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0516 23:05:15.068709 422378 out.go:177] - kubelet.cni-conf-dir=/etc/cni/net.mk
I0516 23:05:15.070153 422378 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
I0516 23:05:15.070238 422378 ssh_runner.go:195] Run: sudo crictl images --output json
I0516 23:05:15.097663 422378 containerd.go:607] all images are preloaded for containerd runtime.
I0516 23:05:15.097691 422378 containerd.go:521] Images already preloaded, skipping extraction
I0516 23:05:15.097740 422378 ssh_runner.go:195] Run: sudo crictl images --output json
I0516 23:05:15.127504 422378 containerd.go:607] all images are preloaded for containerd runtime.
I0516 23:05:15.127529 422378 cache_images.go:84] Images are preloaded, skipping loading
I0516 23:05:15.127578 422378 ssh_runner.go:195] Run: sudo crictl info
I0516 23:05:15.160578 422378 cni.go:95] Creating CNI manager for ""
I0516 23:05:15.160610 422378 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I0516 23:05:15.160670 422378 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0516 23:05:15.160703 422378 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:offline-containerd-20220516230448-297512 NodeName:offline-containerd-20220516230448-297512 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0516 23:05:15.160897 422378 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.58.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "offline-containerd-20220516230448-297512"
kubeletExtraArgs:
node-ip: 192.168.58.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.23.6
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0516 23:05:15.161122 422378 kubeadm.go:936] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=offline-containerd-20220516230448-297512 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.23.6 ClusterName:offline-containerd-20220516230448-297512 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0516 23:05:15.161198 422378 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
I0516 23:05:15.168271 422378 binaries.go:44] Found k8s binaries, skipping transfer
I0516 23:05:15.168360 422378 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0516 23:05:15.176189 422378 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (585 bytes)
I0516 23:05:15.190579 422378 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0516 23:05:15.204752 422378 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
I0516 23:05:15.220332 422378 ssh_runner.go:195] Run: grep 192.168.58.2 control-plane.minikube.internal$ /etc/hosts
I0516 23:05:15.224144 422378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0516 23:05:15.235221 422378 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512 for IP: 192.168.58.2
I0516 23:05:15.235335 422378 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/ca.key
I0516 23:05:15.235383 422378 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/proxy-client-ca.key
I0516 23:05:15.235451 422378 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/client.key
I0516 23:05:15.235471 422378 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/client.crt with IP's: []
I0516 23:05:15.382522 422378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/client.crt ...
I0516 23:05:15.382554 422378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/client.crt: {Name:mk795f46da9ffe6aa68b6690fcee39cb74c70310 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0516 23:05:15.382727 422378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/client.key ...
I0516 23:05:15.382740 422378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/client.key: {Name:mkfd4c60e1521bc72930e645da5130738404edae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0516 23:05:15.382828 422378 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/apiserver.key.cee25041
I0516 23:05:15.382845 422378 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0516 23:05:15.485957 422378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/apiserver.crt.cee25041 ...
I0516 23:05:15.485984 422378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/apiserver.crt.cee25041: {Name:mkefa41ef52b59e4f7bfda1a8c745c6b27b5354d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0516 23:05:15.486144 422378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/apiserver.key.cee25041 ...
I0516 23:05:15.486161 422378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/apiserver.key.cee25041: {Name:mk5f466a299d653e73bf4b76cc202ad88f948c3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0516 23:05:15.486267 422378 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/apiserver.crt
I0516 23:05:15.486338 422378 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/apiserver.key
I0516 23:05:15.486402 422378 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/proxy-client.key
I0516 23:05:15.486424 422378 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/proxy-client.crt with IP's: []
I0516 23:05:15.715073 422378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/proxy-client.crt ...
I0516 23:05:15.715110 422378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/proxy-client.crt: {Name:mkf8196945c3ea656627212bba67b708a42d764c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0516 23:05:15.715304 422378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/proxy-client.key ...
I0516 23:05:15.715322 422378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/proxy-client.key: {Name:mk0f6591a4bf542ed1eb356025dc54ce397a8b1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0516 23:05:15.715515 422378 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/297512.pem (1338 bytes)
W0516 23:05:15.715563 422378 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/297512_empty.pem, impossibly tiny 0 bytes
I0516 23:05:15.715583 422378 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/ca-key.pem (1675 bytes)
I0516 23:05:15.715619 422378 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/ca.pem (1082 bytes)
I0516 23:05:15.715653 422378 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/cert.pem (1123 bytes)
I0516 23:05:15.715694 422378 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/key.pem (1679 bytes)
I0516 23:05:15.715763 422378 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/files/etc/ssl/certs/2975122.pem (1708 bytes)
I0516 23:05:15.716409 422378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0516 23:05:15.733620 422378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0516 23:05:15.749790 422378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0516 23:05:15.765408 422378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0516 23:05:15.781090 422378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0516 23:05:15.796331 422378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0516 23:05:15.811624 422378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0516 23:05:15.826919 422378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0516 23:05:15.842054 422378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0516 23:05:15.857512 422378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/297512.pem --> /usr/share/ca-certificates/297512.pem (1338 bytes)
I0516 23:05:15.872796 422378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/files/etc/ssl/certs/2975122.pem --> /usr/share/ca-certificates/2975122.pem (1708 bytes)
I0516 23:05:15.888063 422378 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
I0516 23:05:15.899229 422378 ssh_runner.go:195] Run: openssl version
I0516 23:05:15.903444 422378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0516 23:05:15.910353 422378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0516 23:05:15.913169 422378 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 16 21:53 /usr/share/ca-certificates/minikubeCA.pem
I0516 23:05:15.913214 422378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0516 23:05:15.917561 422378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0516 23:05:15.923996 422378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/297512.pem && ln -fs /usr/share/ca-certificates/297512.pem /etc/ssl/certs/297512.pem"
I0516 23:05:15.930781 422378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/297512.pem
I0516 23:05:15.933538 422378 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 16 21:59 /usr/share/ca-certificates/297512.pem
I0516 23:05:15.933576 422378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/297512.pem
I0516 23:05:15.937836 422378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/297512.pem /etc/ssl/certs/51391683.0"
I0516 23:05:15.944241 422378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2975122.pem && ln -fs /usr/share/ca-certificates/2975122.pem /etc/ssl/certs/2975122.pem"
I0516 23:05:15.950836 422378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2975122.pem
I0516 23:05:15.953602 422378 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 16 21:59 /usr/share/ca-certificates/2975122.pem
I0516 23:05:15.953638 422378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2975122.pem
I0516 23:05:15.957820 422378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2975122.pem /etc/ssl/certs/3ec20f2e.0"
I0516 23:05:15.964209 422378 kubeadm.go:391] StartCluster: {Name:offline-containerd-20220516230448-297512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:offline-containerd-20220516230448-297512 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0516 23:05:15.964313 422378 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0516 23:05:15.964363 422378 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0516 23:05:15.986615 422378 cri.go:87] found id: ""
I0516 23:05:15.986651 422378 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0516 23:05:15.992779 422378 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0516 23:05:15.998911 422378 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0516 23:05:15.998946 422378 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0516 23:05:16.004987 422378 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0516 23:05:16.005021 422378 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0516 23:05:30.833097 422378 out.go:204] - Generating certificates and keys ...
I0516 23:05:30.836505 422378 out.go:204] - Booting up control plane ...
I0516 23:05:30.839571 422378 out.go:204] - Configuring RBAC rules ...
I0516 23:05:30.841902 422378 cni.go:95] Creating CNI manager for ""
I0516 23:05:30.841925 422378 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I0516 23:05:30.844165 422378 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0516 23:05:30.845474 422378 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0516 23:05:30.849339 422378 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
I0516 23:05:30.849360 422378 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I0516 23:05:30.862421 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0516 23:05:31.606233 422378 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0516 23:05:31.606289 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:05:31.606320 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.0 minikube.k8s.io/commit=8e10bad027676fc4eb80b4901727275dc6ddebc2 minikube.k8s.io/name=offline-containerd-20220516230448-297512 minikube.k8s.io/updated_at=2022_05_16T23_05_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:05:31.675047 422378 ops.go:34] apiserver oom_adj: -16
I0516 23:05:31.675198 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:05:32.236548 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:05:32.736366 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:05:33.235986 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:05:33.736946 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:05:34.236912 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:05:34.736149 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:05:35.236069 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:05:35.736726 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:05:36.235930 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:05:36.736606 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:05:37.235999 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:05:37.736633 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:05:38.236558 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:05:38.735947 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:05:39.236131 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:05:39.736242 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:05:40.236038 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:05:40.736280 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:05:41.236083 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:05:41.736959 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:05:42.236511 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:05:42.736585 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:05:43.235957 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:05:43.736702 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:05:44.236007 422378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:05:44.314006 422378 kubeadm.go:1020] duration metric: took 12.707760316s to wait for elevateKubeSystemPrivileges.
I0516 23:05:44.314045 422378 kubeadm.go:393] StartCluster complete in 28.349841976s
I0516 23:05:44.314106 422378 settings.go:142] acquiring lock: {Name:mk9ef5cf2a3a16dfc0f8f117e884e02b4660452f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0516 23:05:44.314255 422378 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/kubeconfig
I0516 23:05:44.314886 422378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/kubeconfig: {Name:mkd95e9ac27518d5cd4baf4bf5f31080484189e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0516 23:05:44.315621 422378 kapi.go:59] client config for offline-containerd-20220516230448-297512: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1702580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0516 23:05:44.834359 422378 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "offline-containerd-20220516230448-297512" rescaled to 1
I0516 23:05:44.834422 422378 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0516 23:05:44.836426 422378 out.go:177] * Verifying Kubernetes components...
I0516 23:05:44.834464 422378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0516 23:05:44.834510 422378 addons.go:415] enableAddons start: toEnable=map[], additional=[]
I0516 23:05:44.834673 422378 config.go:178] Loaded profile config "offline-containerd-20220516230448-297512": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
I0516 23:05:44.837861 422378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0516 23:05:44.838047 422378 addons.go:65] Setting storage-provisioner=true in profile "offline-containerd-20220516230448-297512"
I0516 23:05:44.838074 422378 addons.go:153] Setting addon storage-provisioner=true in "offline-containerd-20220516230448-297512"
W0516 23:05:44.838087 422378 addons.go:165] addon storage-provisioner should already be in state true
I0516 23:05:44.838155 422378 host.go:66] Checking if "offline-containerd-20220516230448-297512" exists ...
I0516 23:05:44.838709 422378 cli_runner.go:164] Run: docker container inspect offline-containerd-20220516230448-297512 --format={{.State.Status}}
I0516 23:05:44.838873 422378 addons.go:65] Setting default-storageclass=true in profile "offline-containerd-20220516230448-297512"
I0516 23:05:44.838897 422378 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "offline-containerd-20220516230448-297512"
I0516 23:05:44.839203 422378 cli_runner.go:164] Run: docker container inspect offline-containerd-20220516230448-297512 --format={{.State.Status}}
I0516 23:05:44.883380 422378 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0516 23:05:44.884584 422378 kapi.go:59] client config for offline-containerd-20220516230448-297512: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1702580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0516 23:05:44.886084 422378 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0516 23:05:44.886100 422378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0516 23:05:44.886157 422378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220516230448-297512
I0516 23:05:44.888549 422378 addons.go:153] Setting addon default-storageclass=true in "offline-containerd-20220516230448-297512"
W0516 23:05:44.888570 422378 addons.go:165] addon default-storageclass should already be in state true
I0516 23:05:44.888595 422378 host.go:66] Checking if "offline-containerd-20220516230448-297512" exists ...
I0516 23:05:44.888952 422378 cli_runner.go:164] Run: docker container inspect offline-containerd-20220516230448-297512 --format={{.State.Status}}
I0516 23:05:44.916083 422378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.58.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0516 23:05:44.917129 422378 kapi.go:59] client config for offline-containerd-20220516230448-297512: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/offline-containerd-20220516230448-297512/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1702580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0516 23:05:44.917486 422378 node_ready.go:35] waiting up to 6m0s for node "offline-containerd-20220516230448-297512" to be "Ready" ...
I0516 23:05:44.928165 422378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49568 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/offline-containerd-20220516230448-297512/id_rsa Username:docker}
I0516 23:05:44.929602 422378 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
I0516 23:05:44.929632 422378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0516 23:05:44.929688 422378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220516230448-297512
I0516 23:05:44.975242 422378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49568 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/offline-containerd-20220516230448-297512/id_rsa Username:docker}
I0516 23:05:45.045173 422378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0516 23:05:45.155535 422378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0516 23:05:45.256795 422378 start.go:815] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
I0516 23:05:45.497172 422378 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0516 23:05:45.498967 422378 addons.go:417] enableAddons completed in 664.480788ms
I0516 23:05:46.924197 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:05:48.924786 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:05:51.424923 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:05:53.425138 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:05:55.924259 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:05:57.924304 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:05:59.925009 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:06:02.424980 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:06:04.972868 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:06:07.424793 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:06:09.425160 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:06:11.924733 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:06:14.424555 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:06:16.424804 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:06:18.924617 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:06:21.424924 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:06:23.425334 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:06:25.924616 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:06:27.925020 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:06:30.424838 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:06:32.925114 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:06:34.925997 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:06:37.425305 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:06:39.925215 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:06:42.424658 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:06:44.430110 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:06:46.925147 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:06:49.425659 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:06:51.425785 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:06:53.924830 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:06:56.424842 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:06:58.424976 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:07:00.924532 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:07:02.925263 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:07:04.925380 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:07:07.425281 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:07:09.925191 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:07:11.925579 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:07:14.424648 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:07:16.425109 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:07:18.924985 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:07:21.425727 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:07:23.925301 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:07:26.424688 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:07:28.924642 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:07:30.925011 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:07:33.425578 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:07:35.924722 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:07:37.924771 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:07:40.424829 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:07:42.424979 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:07:44.991983 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:07:47.424106 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:07:49.923978 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:07:51.924835 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:07:54.425234 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:07:56.425845 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:07:58.923996 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:08:00.924501 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:08:02.924718 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:08:04.925573 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:08:07.425804 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:08:09.559650 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:08:11.924863 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:08:13.925158 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:08:16.424792 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:08:18.424942 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:08:20.425387 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:08:22.924332 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:08:24.924867 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:08:26.925095 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:08:28.925229 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:08:31.426764 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:08:33.923926 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:08:35.925201 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:08:38.424268 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:08:40.424782 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:08:42.425101 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:08:44.425172 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:08:46.924606 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:08:49.425448 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:08:51.924599 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:08:53.924752 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:08:55.925008 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:08:58.425521 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:00.925383 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:03.424342 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:05.424851 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:07.425088 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:09.924106 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:12.424039 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:14.424833 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:16.425871 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:18.925000 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:21.236653 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:23.425333 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:25.924970 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:27.925015 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:30.424783 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:32.924594 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:35.425482 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:37.925010 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:40.425167 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:42.425352 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:44.925083 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:44.927447 422378 node_ready.go:38] duration metric: took 4m0.009934675s waiting for node "offline-containerd-20220516230448-297512" to be "Ready" ...
I0516 23:09:44.930088 422378 out.go:177]
W0516 23:09:44.931496 422378 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
W0516 23:09:44.931518 422378 out.go:239] *
W0516 23:09:44.932297 422378 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0516 23:09:44.933812 422378 out.go:177]
** /stderr **
aab_offline_test.go:58: out/minikube-linux-amd64 start -p offline-containerd-20220516230448-297512 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker --container-runtime=containerd failed: exit status 80
panic.go:482: *** TestOffline FAILED at 2022-05-16 23:09:44.957396855 +0000 UTC m=+4639.979519307
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect offline-containerd-20220516230448-297512
helpers_test.go:235: (dbg) docker inspect offline-containerd-20220516230448-297512:
-- stdout --
[
{
"Id": "9c6433fe31a5f3f6902c56c5bd7d128421ed56b1390bde1ec9713792cca2ef22",
"Created": "2022-05-16T23:05:11.603709009Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 424854,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-05-16T23:05:12.230479572Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:0c5d9f8f84652aecf60b51012e4dbd6b63610a21a4eff9bcda47c370186206c5",
"ResolvConfPath": "/var/lib/docker/containers/9c6433fe31a5f3f6902c56c5bd7d128421ed56b1390bde1ec9713792cca2ef22/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/9c6433fe31a5f3f6902c56c5bd7d128421ed56b1390bde1ec9713792cca2ef22/hostname",
"HostsPath": "/var/lib/docker/containers/9c6433fe31a5f3f6902c56c5bd7d128421ed56b1390bde1ec9713792cca2ef22/hosts",
"LogPath": "/var/lib/docker/containers/9c6433fe31a5f3f6902c56c5bd7d128421ed56b1390bde1ec9713792cca2ef22/9c6433fe31a5f3f6902c56c5bd7d128421ed56b1390bde1ec9713792cca2ef22-json.log",
"Name": "/offline-containerd-20220516230448-297512",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"offline-containerd-20220516230448-297512:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "offline-containerd-20220516230448-297512",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 2147483648,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 4294967296,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/6c5a723a4dd66e7548624c5da223bbb8c988cac5047f0d5cd3d7c622e917e81d-init/diff:/var/lib/docker/overlay2/c3eb022d427af568329eafa4ae6ed1dc59dc2ac74a0ba5b3e01e4bbdb21eb553/diff:/var/lib/docker/overlay2/058b29735172499aabb65dbeaebc91f1742593c228648adffa4d076e4d88f98a/diff:/var/lib/docker/overlay2/fffb2ec6c71f1f24b68faf33ad3a0554baadaa7059d1c6984122f84f8a1d4fcd/diff:/var/lib/docker/overlay2/2a34262c434a3aa1f625f459193adb4ffaf6cd9a69c27800a60aa92d6dee915f/diff:/var/lib/docker/overlay2/c03b1820b8b218f775160c8eb72cd8fef46c27653b4032d425fbb31e052bfee3/diff:/var/lib/docker/overlay2/48ba7516feda15a99a4383b65506b12f77fe9113978ba1b0253f89c85595639c/diff:/var/lib/docker/overlay2/9e284018ec0ee88ced5656116ad9a4a064ee366cd119923e53903dd1a1fcaebe/diff:/var/lib/docker/overlay2/56b2b0f05fc35d3fdd19be897f53c20f99da6fdcf149827a90532dc1f022760d/diff:/var/lib/docker/overlay2/7c210d24c6059e86d86e830adcb93419ebf2b7a2cf4da2510c0f5801a581a32a/diff:/var/lib/docker/overlay2/887eab
f1869ff2bde91156cb4e93b8d9734db4fbfba04cc73e914a98c5f49493/diff:/var/lib/docker/overlay2/ef7a415563b50beaa96b9a0789453dd724cf2986f1148f08c593f81193282e65/diff:/var/lib/docker/overlay2/9bb29992e8f2abd74b4140d9789d1b20f7c7a0b1021a947a5562ef7c525683f1/diff:/var/lib/docker/overlay2/51d79bc59316d4e0ff1f4dfc5e66415203883de723675169541e4aa4af32c545/diff:/var/lib/docker/overlay2/d12b5a9757059b3cc44bc1f81d9e4fe00d3929cac84394da799503ab89fb405c/diff:/var/lib/docker/overlay2/b55f4166abc256ff24d4c859ea216a4129cf36ebe578b3cc679c126104361ce5/diff:/var/lib/docker/overlay2/b9a062ce162474b7e25a79af680ca71c598badf81aaed43320cf0051bfb855b2/diff:/var/lib/docker/overlay2/5ec732bd6887ea112f8367a4505cd778205cf283935e8276a6a87d2dbb0c81ca/diff:/var/lib/docker/overlay2/11e2779f507912ead5bb118604b0b05465fbec40ab769539c92cef37dab1fc93/diff:/var/lib/docker/overlay2/abf709964c6ea556afdc3ec94aaa7220dab1d52ca05b2af8052330a82e53919d/diff:/var/lib/docker/overlay2/be8bbe86c3bbf629d03b3ff3b3520cb1c61c3b2c2821840706a572aace5427f2/diff:/var/lib/d
ocker/overlay2/60d290f34bf77b03fc098079ee302ab12595733d6782291927186cc15f2f248e/diff:/var/lib/docker/overlay2/98c00a0a0a88508112f518e0748ef16808225fc214dc29b677ce4bdb32897dee/diff:/var/lib/docker/overlay2/dd2e0aa27c9cb49704edada84f0f315ea43d94c6dac5f844dbcbd3c788457b19/diff:/var/lib/docker/overlay2/3925cdd3defcd71c1f50093fe8c11bfebc302b2ca509846ef8580dba6f15ffb4/diff:/var/lib/docker/overlay2/eb3e06c1a9afe72caac2d8224621bd71c35224709ebee4f5914413e26a89faab/diff:/var/lib/docker/overlay2/a8850eac9038e34bac4f5c8fd8392e8e09eab787928a6ea07f80a0baa209ce8c/diff:/var/lib/docker/overlay2/afbad63d688846c0a7cb95af8b0e715b6b4ce909078171866e4c7ff952579190/diff:/var/lib/docker/overlay2/06e01d6cf093dc1baaedc3b6afbd14e54fea886af38ccacba4f195fead6a6baf/diff:/var/lib/docker/overlay2/726113f63ba886e8742fcf64fa67e61fd99bd3f700ae0e9d488fcc3ccaa1f030/diff:/var/lib/docker/overlay2/29fc73a1ffec554615f05155e74a38791309a8aab5241155fe20939ce2a9ed6a/diff:/var/lib/docker/overlay2/9826a1e4a63f30c6045f8ead5179511fc269c450650cb069a0aee9d2e3b
25085/diff:/var/lib/docker/overlay2/df2590096b262b86061e46d006c749548336b610190d0b63b71e5b0206698c16/diff:/var/lib/docker/overlay2/5d30205005e7f116c9aa426880c975fc59aeae336e5080f42b22404c65f1821d/diff:/var/lib/docker/overlay2/7d2e4c1caeb4439a4cee464158465b1ac05bf29f6eb39a4ab63f100eb7c83bbf/diff:/var/lib/docker/overlay2/bd4cc44a4cf327ad29fc760294ae325cabf7612bcbf950f9f3412c43a1aa92c0/diff:/var/lib/docker/overlay2/97ce49d4e138a4e240eefc69c9671157f6a18326e5f857d47a1263a15f521e7f/diff:/var/lib/docker/overlay2/f9a18e38c1e8fe3b42d657d41338803c4d0f3f95e06470fc36f78d1947f243bc/diff:/var/lib/docker/overlay2/c80529f742ab369a001fac7e240527249ba7e6fe77c4dcec5c15f372f2f16609/diff:/var/lib/docker/overlay2/36b36af89770f04c0e672d8a66975e93ed399bffcc555c29fac79a3fd2b2bcd4/diff:/var/lib/docker/overlay2/4a711f4aaa98aa2a49b57744915619c343ce393163106d9dc90d23fb9c1d2462/diff:/var/lib/docker/overlay2/ec2580a9be2850259775a11fe564bc34f54fc2490759b277646f282c69212044/diff:/var/lib/docker/overlay2/3bb37c27f66ee6515cb7a7cb2a1393f8358a83
e551f0b93b58183af3b32c942e/diff",
"MergedDir": "/var/lib/docker/overlay2/6c5a723a4dd66e7548624c5da223bbb8c988cac5047f0d5cd3d7c622e917e81d/merged",
"UpperDir": "/var/lib/docker/overlay2/6c5a723a4dd66e7548624c5da223bbb8c988cac5047f0d5cd3d7c622e917e81d/diff",
"WorkDir": "/var/lib/docker/overlay2/6c5a723a4dd66e7548624c5da223bbb8c988cac5047f0d5cd3d7c622e917e81d/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "offline-containerd-20220516230448-297512",
"Source": "/var/lib/docker/volumes/offline-containerd-20220516230448-297512/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "offline-containerd-20220516230448-297512",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "offline-containerd-20220516230448-297512",
"name.minikube.sigs.k8s.io": "offline-containerd-20220516230448-297512",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "8efc9ca55c0034b63c5f630615e1791a848fc12231391726739b7ddb8321d339",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49568"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49566"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49563"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49565"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49564"
}
]
},
"SandboxKey": "/var/run/docker/netns/8efc9ca55c00",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"offline-containerd-20220516230448-297512": {
"IPAMConfig": {
"IPv4Address": "192.168.58.2"
},
"Links": null,
"Aliases": [
"9c6433fe31a5",
"offline-containerd-20220516230448-297512"
],
"NetworkID": "80c024ca9e06533ba99ab6d06de456eb89c2dc864dad5a0fb9ce067e04a0387f",
"EndpointID": "1966fa08cb4ab3088c30fefc69c7ce253cdcba7fd30e08dc0e4470a7f677bb4b",
"Gateway": "192.168.58.1",
"IPAddress": "192.168.58.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:3a:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p offline-containerd-20220516230448-297512 -n offline-containerd-20220516230448-297512
=== CONT TestOffline
helpers_test.go:244: <<< TestOffline FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestOffline]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p offline-containerd-20220516230448-297512 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p offline-containerd-20220516230448-297512 logs -n 25: (1.257176673s)
helpers_test.go:252: TestOffline logs:
-- stdout --
*
* ==> Audit <==
* |---------|------------------------------------------|------------------------------------------|---------|----------------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------------------------------------------|------------------------------------------|---------|----------------|---------------------|---------------------|
| delete | -p | flannel-20220516230552-297512 | jenkins | v1.26.0-beta.0 | 16 May 22 23:05 UTC | 16 May 22 23:05 UTC |
| | flannel-20220516230552-297512 | | | | | |
| delete | -p false-20220516230552-297512 | false-20220516230552-297512 | jenkins | v1.26.0-beta.0 | 16 May 22 23:05 UTC | 16 May 22 23:05 UTC |
| start | -p | NoKubernetes-20220516230448-297512 | jenkins | v1.26.0-beta.0 | 16 May 22 23:05 UTC | 16 May 22 23:06 UTC |
| | NoKubernetes-20220516230448-297512 | | | | | |
| | --no-kubernetes --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | NoKubernetes-20220516230448-297512 | jenkins | v1.26.0-beta.0 | 16 May 22 23:06 UTC | 16 May 22 23:06 UTC |
| | NoKubernetes-20220516230448-297512 | | | | | |
| start | -p | NoKubernetes-20220516230448-297512 | jenkins | v1.26.0-beta.0 | 16 May 22 23:06 UTC | 16 May 22 23:06 UTC |
| | NoKubernetes-20220516230448-297512 | | | | | |
| | --no-kubernetes --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| profile | list | minikube | jenkins | v1.26.0-beta.0 | 16 May 22 23:06 UTC | 16 May 22 23:06 UTC |
| profile | list --output=json | minikube | jenkins | v1.26.0-beta.0 | 16 May 22 23:06 UTC | 16 May 22 23:06 UTC |
| stop | -p | NoKubernetes-20220516230448-297512 | jenkins | v1.26.0-beta.0 | 16 May 22 23:06 UTC | 16 May 22 23:06 UTC |
| | NoKubernetes-20220516230448-297512 | | | | | |
| start | -p | NoKubernetes-20220516230448-297512 | jenkins | v1.26.0-beta.0 | 16 May 22 23:06 UTC | 16 May 22 23:06 UTC |
| | NoKubernetes-20220516230448-297512 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | NoKubernetes-20220516230448-297512 | jenkins | v1.26.0-beta.0 | 16 May 22 23:06 UTC | 16 May 22 23:06 UTC |
| | NoKubernetes-20220516230448-297512 | | | | | |
| start | -p | force-systemd-flag-20220516230557-297512 | jenkins | v1.26.0-beta.0 | 16 May 22 23:05 UTC | 16 May 22 23:06 UTC |
| | force-systemd-flag-20220516230557-297512 | | | | | |
| | --memory=2048 --force-systemd | | | | | |
| | --alsologtostderr -v=5 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | force-systemd-flag-20220516230557-297512 | force-systemd-flag-20220516230557-297512 | jenkins | v1.26.0-beta.0 | 16 May 22 23:06 UTC | 16 May 22 23:06 UTC |
| | ssh cat /etc/containerd/config.toml | | | | | |
| delete | -p | force-systemd-flag-20220516230557-297512 | jenkins | v1.26.0-beta.0 | 16 May 22 23:06 UTC | 16 May 22 23:06 UTC |
| | force-systemd-flag-20220516230557-297512 | | | | | |
| start | -p | stopped-upgrade-20220516230622-297512 | jenkins | v1.26.0-beta.0 | 16 May 22 23:07 UTC | 16 May 22 23:08 UTC |
| | stopped-upgrade-20220516230622-297512 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| logs | -p | stopped-upgrade-20220516230622-297512 | jenkins | v1.26.0-beta.0 | 16 May 22 23:08 UTC | 16 May 22 23:08 UTC |
| | stopped-upgrade-20220516230622-297512 | | | | | |
| delete | -p | stopped-upgrade-20220516230622-297512 | jenkins | v1.26.0-beta.0 | 16 May 22 23:08 UTC | 16 May 22 23:08 UTC |
| | stopped-upgrade-20220516230622-297512 | | | | | |
| start | -p | missing-upgrade-20220516230647-297512 | jenkins | v1.26.0-beta.0 | 16 May 22 23:08 UTC | 16 May 22 23:08 UTC |
| | missing-upgrade-20220516230647-297512 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | missing-upgrade-20220516230647-297512 | jenkins | v1.26.0-beta.0 | 16 May 22 23:08 UTC | 16 May 22 23:08 UTC |
| | missing-upgrade-20220516230647-297512 | | | | | |
| start | -p | kubernetes-upgrade-20220516230810-297512 | jenkins | v1.26.0-beta.0 | 16 May 22 23:08 UTC | 16 May 22 23:08 UTC |
| | kubernetes-upgrade-20220516230810-297512 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| | --alsologtostderr -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| stop | -p | kubernetes-upgrade-20220516230810-297512 | jenkins | v1.26.0-beta.0 | 16 May 22 23:08 UTC | 16 May 22 23:08 UTC |
| | kubernetes-upgrade-20220516230810-297512 | | | | | |
| start | -p | cert-expiration-20220516230448-297512 | jenkins | v1.26.0-beta.0 | 16 May 22 23:08 UTC | 16 May 22 23:09 UTC |
| | cert-expiration-20220516230448-297512 | | | | | |
| | --memory=2048 --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | cert-expiration-20220516230448-297512 | jenkins | v1.26.0-beta.0 | 16 May 22 23:09 UTC | 16 May 22 23:09 UTC |
| | cert-expiration-20220516230448-297512 | | | | | |
| start | -p | cert-options-20220516230904-297512 | jenkins | v1.26.0-beta.0 | 16 May 22 23:09 UTC | 16 May 22 23:09 UTC |
| | cert-options-20220516230904-297512 | | | | | |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | cert-options-20220516230904-297512 | cert-options-20220516230904-297512 | jenkins | v1.26.0-beta.0 | 16 May 22 23:09 UTC | 16 May 22 23:09 UTC |
| | ssh openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p | cert-options-20220516230904-297512 | jenkins | v1.26.0-beta.0 | 16 May 22 23:09 UTC | 16 May 22 23:09 UTC |
| | cert-options-20220516230904-297512 | | | | | |
| | -- sudo cat | | | | | |
| | /etc/kubernetes/admin.conf | | | | | |
|---------|------------------------------------------|------------------------------------------|---------|----------------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/05/16 23:09:05
Running on machine: ubuntu-20-agent-7
Binary: Built with gc go1.18.2 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0516 23:09:05.036794 462620 out.go:296] Setting OutFile to fd 1 ...
I0516 23:09:05.036889 462620 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0516 23:09:05.036894 462620 out.go:309] Setting ErrFile to fd 2...
I0516 23:09:05.036898 462620 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0516 23:09:05.037013 462620 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/bin
I0516 23:09:05.037328 462620 out.go:303] Setting JSON to false
I0516 23:09:05.038746 462620 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":13895,"bootTime":1652728650,"procs":608,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0516 23:09:05.038821 462620 start.go:125] virtualization: kvm guest
I0516 23:09:05.041514 462620 out.go:177] * [cert-options-20220516230904-297512] minikube v1.26.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
I0516 23:09:05.042961 462620 out.go:177] - MINIKUBE_LOCATION=12739
I0516 23:09:05.042969 462620 notify.go:193] Checking for updates...
I0516 23:09:05.044582 462620 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0516 23:09:05.046044 462620 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/kubeconfig
I0516 23:09:05.047403 462620 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube
I0516 23:09:05.048849 462620 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0516 23:09:05.050590 462620 config.go:178] Loaded profile config "kubernetes-upgrade-20220516230810-297512": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
I0516 23:09:05.050683 462620 config.go:178] Loaded profile config "offline-containerd-20220516230448-297512": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
I0516 23:09:05.050754 462620 config.go:178] Loaded profile config "running-upgrade-20220516230850-297512": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0516 23:09:05.050789 462620 driver.go:358] Setting default libvirt URI to qemu:///system
I0516 23:09:05.091384 462620 docker.go:137] docker version: linux-20.10.16
I0516 23:09:05.091481 462620 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0516 23:09:05.190678 462620 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:57 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:true NGoroutines:59 SystemTime:2022-05-16 23:09:05.120937358 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0516 23:09:05.190759 462620 docker.go:254] overlay module found
I0516 23:09:05.192999 462620 out.go:177] * Using the docker driver based on user configuration
I0516 23:09:05.194283 462620 start.go:284] selected driver: docker
I0516 23:09:05.194294 462620 start.go:806] validating driver "docker" against <nil>
I0516 23:09:05.194326 462620 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0516 23:09:05.195290 462620 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0516 23:09:05.292405 462620 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:57 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:true NGoroutines:59 SystemTime:2022-05-16 23:09:05.222779962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662791680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0516 23:09:05.292563 462620 start_flags.go:292] no existing cluster config was found, will generate one from the flags
I0516 23:09:05.292785 462620 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
I0516 23:09:05.294614 462620 out.go:177] * Using Docker driver with the root privilege
I0516 23:09:05.295893 462620 cni.go:95] Creating CNI manager for ""
I0516 23:09:05.295903 462620 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I0516 23:09:05.295916 462620 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I0516 23:09:05.295921 462620 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I0516 23:09:05.295928 462620 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
I0516 23:09:05.295939 462620 start_flags.go:306] config:
{Name:cert-options-20220516230904-297512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:cert-options-20220516230904-297512 Namespace:default APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1
192.168.15.15] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8555 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0516 23:09:05.297535 462620 out.go:177] * Starting control plane node cert-options-20220516230904-297512 in cluster cert-options-20220516230904-297512
I0516 23:09:05.298728 462620 cache.go:120] Beginning downloading kic base image for docker with containerd
I0516 23:09:05.299954 462620 out.go:177] * Pulling base image ...
I0516 23:09:05.301202 462620 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
I0516 23:09:05.301227 462620 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
I0516 23:09:05.301240 462620 cache.go:57] Caching tarball of preloaded images
I0516 23:09:05.301308 462620 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon
I0516 23:09:05.301446 462620 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0516 23:09:05.301459 462620 cache.go:60] Finished verifying existence of preloaded tar for v1.23.6 on containerd
I0516 23:09:05.301552 462620 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/config.json ...
I0516 23:09:05.301568 462620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/config.json: {Name:mk46f2505386c7c8f47dd9c3c64a228d4604e70a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0516 23:09:05.342883 462620 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c in local docker daemon, skipping pull
I0516 23:09:05.342898 462620 cache.go:141] gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c exists in daemon, skipping load
I0516 23:09:05.342910 462620 cache.go:206] Successfully downloaded all kic artifacts
I0516 23:09:05.342945 462620 start.go:352] acquiring machines lock for cert-options-20220516230904-297512: {Name:mk19af2d3f4b4a7b1c5a5760149f07b13994409f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0516 23:09:05.343045 462620 start.go:356] acquired machines lock for "cert-options-20220516230904-297512" in 88.69µs
I0516 23:09:05.343064 462620 start.go:91] Provisioning new machine with config: &{Name:cert-options-20220516230904-297512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:cert-options-20220516230904-297512 Namespace:default APIServe
rName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8555 NodeName:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mo
untUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8555 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0516 23:09:05.343124 462620 start.go:131] createHost starting for "" (driver="docker")
I0516 23:09:03.339914 459592 ssh_runner.go:195] Run: cat /etc/os-release
I0516 23:09:03.530500 459592 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0516 23:09:03.530544 459592 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0516 23:09:03.530572 459592 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0516 23:09:03.530583 459592 info.go:137] Remote host: Ubuntu 20.04.4 LTS
I0516 23:09:03.530598 459592 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/addons for local assets ...
I0516 23:09:03.530666 459592 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/files for local assets ...
I0516 23:09:03.530759 459592 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/files/etc/ssl/certs/2975122.pem -> 2975122.pem in /etc/ssl/certs
I0516 23:09:03.530875 459592 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0516 23:09:03.541414 459592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/files/etc/ssl/certs/2975122.pem --> /etc/ssl/certs/2975122.pem (1708 bytes)
I0516 23:09:03.567533 459592 start.go:309] post-start completed in 416.464747ms
I0516 23:09:03.567624 459592 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0516 23:09:03.567675 459592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516230810-297512
I0516 23:09:03.605536 459592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49623 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/kubernetes-upgrade-20220516230810-297512/id_rsa Username:docker}
I0516 23:09:03.693178 459592 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0516 23:09:03.696846 459592 fix.go:57] fixHost completed within 5.058329621s
I0516 23:09:03.696866 459592 start.go:81] releasing machines lock for "kubernetes-upgrade-20220516230810-297512", held for 5.058371246s
I0516 23:09:03.696936 459592 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220516230810-297512
I0516 23:09:03.727485 459592 ssh_runner.go:195] Run: systemctl --version
I0516 23:09:03.727535 459592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516230810-297512
I0516 23:09:03.727543 459592 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0516 23:09:03.727611 459592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220516230810-297512
I0516 23:09:03.761044 459592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49623 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/kubernetes-upgrade-20220516230810-297512/id_rsa Username:docker}
I0516 23:09:03.761787 459592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49623 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/kubernetes-upgrade-20220516230810-297512/id_rsa Username:docker}
I0516 23:09:03.869439 459592 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0516 23:09:03.884158 459592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0516 23:09:03.895820 459592 docker.go:187] disabling docker service ...
I0516 23:09:03.895877 459592 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0516 23:09:03.907703 459592 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0516 23:09:03.919088 459592 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0516 23:09:04.021732 459592 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0516 23:09:04.104290 459592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0516 23:09:04.113690 459592 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0516 23:09:04.126258 459592 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZ
mFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ
2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
I0516 23:09:04.139500 459592 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0516 23:09:04.145782 459592 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0516 23:09:04.153084 459592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0516 23:09:04.246134 459592 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0516 23:09:04.324130 459592 start.go:456] Will wait 60s for socket path /run/containerd/containerd.sock
I0516 23:09:04.324200 459592 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0516 23:09:04.327997 459592 start.go:477] Will wait 60s for crictl version
I0516 23:09:04.328051 459592 ssh_runner.go:195] Run: sudo crictl version
I0516 23:09:04.356711 459592 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
stdout:
stderr:
time="2022-05-16T23:09:04Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
I0516 23:09:05.424851 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:07.425088 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:05.345974 462620 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
I0516 23:09:05.346194 462620 start.go:165] libmachine.API.Create for "cert-options-20220516230904-297512" (driver="docker")
I0516 23:09:05.346214 462620 client.go:168] LocalClient.Create starting
I0516 23:09:05.346292 462620 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/ca.pem
I0516 23:09:05.346317 462620 main.go:134] libmachine: Decoding PEM data...
I0516 23:09:05.346328 462620 main.go:134] libmachine: Parsing certificate...
I0516 23:09:05.346391 462620 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/cert.pem
I0516 23:09:05.346402 462620 main.go:134] libmachine: Decoding PEM data...
I0516 23:09:05.346410 462620 main.go:134] libmachine: Parsing certificate...
I0516 23:09:05.346716 462620 cli_runner.go:164] Run: docker network inspect cert-options-20220516230904-297512 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0516 23:09:05.374913 462620 cli_runner.go:211] docker network inspect cert-options-20220516230904-297512 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0516 23:09:05.374973 462620 network_create.go:272] running [docker network inspect cert-options-20220516230904-297512] to gather additional debugging logs...
I0516 23:09:05.374988 462620 cli_runner.go:164] Run: docker network inspect cert-options-20220516230904-297512
W0516 23:09:05.403392 462620 cli_runner.go:211] docker network inspect cert-options-20220516230904-297512 returned with exit code 1
I0516 23:09:05.403411 462620 network_create.go:275] error running [docker network inspect cert-options-20220516230904-297512]: docker network inspect cert-options-20220516230904-297512: exit status 1
stdout:
[]
stderr:
Error: No such network: cert-options-20220516230904-297512
I0516 23:09:05.403432 462620 network_create.go:277] output of [docker network inspect cert-options-20220516230904-297512]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: cert-options-20220516230904-297512
** /stderr **
I0516 23:09:05.403475 462620 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0516 23:09:05.433534 462620 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00073c0c8] misses:0}
I0516 23:09:05.433568 462620 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0516 23:09:05.433586 462620 network_create.go:115] attempt to create docker network cert-options-20220516230904-297512 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0516 23:09:05.433630 462620 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220516230904-297512
I0516 23:09:05.495848 462620 network_create.go:99] docker network cert-options-20220516230904-297512 192.168.49.0/24 created
I0516 23:09:05.495877 462620 kic.go:106] calculated static IP "192.168.49.2" for the "cert-options-20220516230904-297512" container
I0516 23:09:05.495928 462620 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0516 23:09:05.526725 462620 cli_runner.go:164] Run: docker volume create cert-options-20220516230904-297512 --label name.minikube.sigs.k8s.io=cert-options-20220516230904-297512 --label created_by.minikube.sigs.k8s.io=true
I0516 23:09:05.555824 462620 oci.go:103] Successfully created a docker volume cert-options-20220516230904-297512
I0516 23:09:05.555899 462620 cli_runner.go:164] Run: docker run --rm --name cert-options-20220516230904-297512-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-20220516230904-297512 --entrypoint /usr/bin/test -v cert-options-20220516230904-297512:/var gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c -d /var/lib
I0516 23:09:06.561280 462620 cli_runner.go:217] Completed: docker run --rm --name cert-options-20220516230904-297512-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-20220516230904-297512 --entrypoint /usr/bin/test -v cert-options-20220516230904-297512:/var gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c -d /var/lib: (1.005325551s)
I0516 23:09:06.561300 462620 oci.go:107] Successfully prepared a docker volume cert-options-20220516230904-297512
I0516 23:09:06.561334 462620 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
I0516 23:09:06.561357 462620 kic.go:179] Starting extracting preloaded images to volume ...
I0516 23:09:06.561424 462620 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cert-options-20220516230904-297512:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c -I lz4 -xf /preloaded.tar -C /extractDir
I0516 23:09:09.924106 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:12.424039 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:15.404659 459592 ssh_runner.go:195] Run: sudo crictl version
I0516 23:09:15.432795 459592 start.go:486] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.6.4
RuntimeApiVersion: v1alpha2
I0516 23:09:15.432862 459592 ssh_runner.go:195] Run: containerd --version
I0516 23:09:15.465159 459592 ssh_runner.go:195] Run: containerd --version
I0516 23:09:15.494464 459592 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
I0516 23:09:15.495827 459592 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220516230810-297512 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0516 23:09:15.530434 459592 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
I0516 23:09:15.534466 459592 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0516 23:09:15.545592 459592 out.go:177] - kubelet.cni-conf-dir=/etc/cni/net.mk
I0516 23:09:15.546802 459592 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
I0516 23:09:15.546859 459592 ssh_runner.go:195] Run: sudo crictl images --output json
I0516 23:09:15.571071 459592 containerd.go:603] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.23.6". assuming images are not preloaded.
I0516 23:09:15.571137 459592 ssh_runner.go:195] Run: which lz4
I0516 23:09:15.574089 459592 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I0516 23:09:15.576870 459592 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/preloaded.tar.lz4': No such file or directory
I0516 23:09:15.576894 459592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (490512946 bytes)
I0516 23:09:16.692787 459592 containerd.go:550] Took 1.118728 seconds to copy over tarball
I0516 23:09:16.692860 459592 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0516 23:09:14.424833 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:16.425871 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:15.266553 462620 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cert-options-20220516230904-297512:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c -I lz4 -xf /preloaded.tar -C /extractDir: (8.705063019s)
I0516 23:09:15.266578 462620 kic.go:188] duration metric: took 8.705216 seconds to extract preloaded images to volume
W0516 23:09:15.266692 462620 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0516 23:09:15.266802 462620 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0516 23:09:15.372789 462620 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-options-20220516230904-297512 --name cert-options-20220516230904-297512 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-20220516230904-297512 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-options-20220516230904-297512 --network cert-options-20220516230904-297512 --ip 192.168.49.2 --volume cert-options-20220516230904-297512:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8555 --publish=127.0.0.1::8555 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c
I0516 23:09:15.824812 462620 cli_runner.go:164] Run: docker container inspect cert-options-20220516230904-297512 --format={{.State.Running}}
I0516 23:09:15.877512 462620 cli_runner.go:164] Run: docker container inspect cert-options-20220516230904-297512 --format={{.State.Status}}
I0516 23:09:15.947550 462620 cli_runner.go:164] Run: docker exec cert-options-20220516230904-297512 stat /var/lib/dpkg/alternatives/iptables
I0516 23:09:16.035859 462620 oci.go:144] the created container "cert-options-20220516230904-297512" has a running status.
I0516 23:09:16.035900 462620 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/cert-options-20220516230904-297512/id_rsa...
I0516 23:09:16.131697 462620 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/cert-options-20220516230904-297512/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0516 23:09:16.249205 462620 cli_runner.go:164] Run: docker container inspect cert-options-20220516230904-297512 --format={{.State.Status}}
I0516 23:09:16.303454 462620 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0516 23:09:16.303474 462620 kic_runner.go:114] Args: [docker exec --privileged cert-options-20220516230904-297512 chown docker:docker /home/docker/.ssh/authorized_keys]
I0516 23:09:16.415740 462620 cli_runner.go:164] Run: docker container inspect cert-options-20220516230904-297512 --format={{.State.Status}}
I0516 23:09:16.476320 462620 machine.go:88] provisioning docker machine ...
I0516 23:09:16.476352 462620 ubuntu.go:169] provisioning hostname "cert-options-20220516230904-297512"
I0516 23:09:16.476411 462620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20220516230904-297512
I0516 23:09:16.521176 462620 main.go:134] libmachine: Using SSH client type: native
I0516 23:09:16.521396 462620 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil> [] 0s} 127.0.0.1 49632 <nil> <nil>}
I0516 23:09:16.521427 462620 main.go:134] libmachine: About to run SSH command:
sudo hostname cert-options-20220516230904-297512 && echo "cert-options-20220516230904-297512" | sudo tee /etc/hostname
I0516 23:09:16.710949 462620 main.go:134] libmachine: SSH cmd err, output: <nil>: cert-options-20220516230904-297512
I0516 23:09:16.711033 462620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20220516230904-297512
I0516 23:09:16.745083 462620 main.go:134] libmachine: Using SSH client type: native
I0516 23:09:16.745238 462620 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil> [] 0s} 127.0.0.1 49632 <nil> <nil>}
I0516 23:09:16.745254 462620 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\scert-options-20220516230904-297512' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-options-20220516230904-297512/g' /etc/hosts;
else
echo '127.0.1.1 cert-options-20220516230904-297512' | sudo tee -a /etc/hosts;
fi
fi
I0516 23:09:16.877046 462620 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0516 23:09:16.877079 462620 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f
/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube}
I0516 23:09:16.877103 462620 ubuntu.go:177] setting up certificates
I0516 23:09:16.877113 462620 provision.go:83] configureAuth start
I0516 23:09:16.877180 462620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-options-20220516230904-297512
I0516 23:09:16.910549 462620 provision.go:138] copyHostCerts
I0516 23:09:16.910595 462620 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/ca.pem, removing ...
I0516 23:09:16.910601 462620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/ca.pem
I0516 23:09:16.910659 462620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/ca.pem (1082 bytes)
I0516 23:09:16.910768 462620 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cert.pem, removing ...
I0516 23:09:16.910775 462620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cert.pem
I0516 23:09:16.910800 462620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cert.pem (1123 bytes)
I0516 23:09:16.910852 462620 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/key.pem, removing ...
I0516 23:09:16.910855 462620 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/key.pem
I0516 23:09:16.910874 462620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/key.pem (1679 bytes)
I0516 23:09:16.910919 462620 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/ca-key.pem org=jenkins.cert-options-20220516230904-297512 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube cert-options-20220516230904-297512]
I0516 23:09:17.037430 462620 provision.go:172] copyRemoteCerts
I0516 23:09:17.037483 462620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0516 23:09:17.037529 462620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20220516230904-297512
I0516 23:09:17.071452 462620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49632 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/cert-options-20220516230904-297512/id_rsa Username:docker}
I0516 23:09:17.168901 462620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
I0516 23:09:17.187023 462620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0516 23:09:17.205203 462620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0516 23:09:17.222390 462620 provision.go:86] duration metric: configureAuth took 345.26647ms
I0516 23:09:17.222406 462620 ubuntu.go:193] setting minikube options for container-runtime
I0516 23:09:17.222569 462620 config.go:178] Loaded profile config "cert-options-20220516230904-297512": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
I0516 23:09:17.222576 462620 machine.go:91] provisioned docker machine in 746.242386ms
I0516 23:09:17.222580 462620 client.go:171] LocalClient.Create took 11.876362689s
I0516 23:09:17.222600 462620 start.go:173] duration metric: libmachine.API.Create for "cert-options-20220516230904-297512" took 11.876402396s
I0516 23:09:17.222606 462620 start.go:306] post-start starting for "cert-options-20220516230904-297512" (driver="docker")
I0516 23:09:17.222610 462620 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0516 23:09:17.222646 462620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0516 23:09:17.222677 462620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20220516230904-297512
I0516 23:09:17.261165 462620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49632 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/cert-options-20220516230904-297512/id_rsa Username:docker}
I0516 23:09:17.352626 462620 ssh_runner.go:195] Run: cat /etc/os-release
I0516 23:09:17.355479 462620 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0516 23:09:17.355499 462620 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0516 23:09:17.355514 462620 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0516 23:09:17.355520 462620 info.go:137] Remote host: Ubuntu 20.04.4 LTS
I0516 23:09:17.355531 462620 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/addons for local assets ...
I0516 23:09:17.355583 462620 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/files for local assets ...
I0516 23:09:17.355663 462620 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/files/etc/ssl/certs/2975122.pem -> 2975122.pem in /etc/ssl/certs
I0516 23:09:17.355756 462620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0516 23:09:17.362195 462620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/files/etc/ssl/certs/2975122.pem --> /etc/ssl/certs/2975122.pem (1708 bytes)
I0516 23:09:17.378723 462620 start.go:309] post-start completed in 156.103942ms
I0516 23:09:17.421350 462620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-options-20220516230904-297512
I0516 23:09:17.457751 462620 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/config.json ...
I0516 23:09:17.519084 462620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0516 23:09:17.519161 462620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20220516230904-297512
I0516 23:09:17.551788 462620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49632 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/cert-options-20220516230904-297512/id_rsa Username:docker}
I0516 23:09:17.645736 462620 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0516 23:09:17.649554 462620 start.go:134] duration metric: createHost completed in 12.306418081s
I0516 23:09:17.649572 462620 start.go:81] releasing machines lock for "cert-options-20220516230904-297512", held for 12.306519061s
I0516 23:09:17.649667 462620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-options-20220516230904-297512
I0516 23:09:17.682876 462620 ssh_runner.go:195] Run: systemctl --version
I0516 23:09:17.682915 462620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20220516230904-297512
I0516 23:09:17.682962 462620 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0516 23:09:17.683031 462620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20220516230904-297512
I0516 23:09:17.718295 462620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49632 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/cert-options-20220516230904-297512/id_rsa Username:docker}
I0516 23:09:17.720271 462620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49632 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/cert-options-20220516230904-297512/id_rsa Username:docker}
I0516 23:09:17.809201 462620 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0516 23:09:17.828638 462620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0516 23:09:17.837881 462620 docker.go:187] disabling docker service ...
I0516 23:09:17.837919 462620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0516 23:09:19.257454 459592 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.564560085s)
I0516 23:09:19.257487 459592 containerd.go:557] Took 2.564668 seconds to extract the tarball
I0516 23:09:19.257500 459592 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0516 23:09:19.317171 459592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0516 23:09:19.391329 459592 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0516 23:09:19.475909 459592 ssh_runner.go:195] Run: sudo crictl images --output json
I0516 23:09:19.501708 459592 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.23.6 k8s.gcr.io/kube-controller-manager:v1.23.6 k8s.gcr.io/kube-scheduler:v1.23.6 k8s.gcr.io/kube-proxy:v1.23.6 k8s.gcr.io/pause:3.6 k8s.gcr.io/etcd:3.5.1-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
I0516 23:09:19.501798 459592 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0516 23:09:19.501827 459592 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.23.6
I0516 23:09:19.501853 459592 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.23.6
I0516 23:09:19.501870 459592 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.1-0
I0516 23:09:19.501871 459592 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.23.6
I0516 23:09:19.501947 459592 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
I0516 23:09:19.501845 459592 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.23.6
I0516 23:09:19.502048 459592 image.go:134] retrieving image: k8s.gcr.io/pause:3.6
I0516 23:09:19.509199 459592 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.23.6: Error response from daemon: reference does not exist
I0516 23:09:19.509239 459592 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.23.6: Error response from daemon: reference does not exist
I0516 23:09:19.509247 459592 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.23.6: Error response from daemon: reference does not exist
I0516 23:09:19.509284 459592 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.23.6: Error response from daemon: reference does not exist
I0516 23:09:19.515446 459592 image.go:176] found k8s.gcr.io/pause:3.6 locally: &{UncompressedImageCore:0xc0013260d0 lock:{state:0 sema:0} manifest:<nil>}
I0516 23:09:19.515834 459592 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.6"
I0516 23:09:19.725677 459592 cache_images.go:116] "k8s.gcr.io/pause:3.6" needs transfer: "k8s.gcr.io/pause:3.6" does not exist at hash "6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee" in container runtime
I0516 23:09:19.725732 459592 cri.go:216] Removing image: k8s.gcr.io/pause:3.6
I0516 23:09:19.725770 459592 ssh_runner.go:195] Run: which crictl
I0516 23:09:19.731549 459592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.6
I0516 23:09:19.824106 459592 image.go:176] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{UncompressedImageCore:0xc000796238 lock:{state:0 sema:0} manifest:<nil>}
I0516 23:09:19.824172 459592 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
I0516 23:09:19.835850 459592 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.23.6"
I0516 23:09:19.841750 459592 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.23.6"
I0516 23:09:19.862542 459592 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.23.6"
I0516 23:09:19.864633 459592 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.23.6"
I0516 23:09:19.976729 459592 image.go:176] found k8s.gcr.io/coredns/coredns:v1.8.6 locally: &{UncompressedImageCore:0xc0000101e8 lock:{state:0 sema:0} manifest:<nil>}
I0516 23:09:19.976831 459592 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
I0516 23:09:21.591652 459592 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.6: (1.860049036s)
I0516 23:09:21.591716 459592 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6
I0516 23:09:21.591739 459592 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5": (1.767542259s)
I0516 23:09:21.591789 459592 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
I0516 23:09:21.591800 459592 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.23.6": (1.755910336s)
I0516 23:09:21.591827 459592 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I0516 23:09:21.591822 459592 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.23.6" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.23.6" does not exist at hash "df7b72818ad2e4f1f204c7ffb51239de67f49c6b22671c70354ee5d65ac37657" in container runtime
I0516 23:09:21.591854 459592 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.6
I0516 23:09:21.591871 459592 ssh_runner.go:195] Run: which crictl
I0516 23:09:21.591870 459592 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.23.6
I0516 23:09:21.591893 459592 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.23.6": (1.750046862s)
I0516 23:09:21.591940 459592 ssh_runner.go:195] Run: which crictl
I0516 23:09:21.591964 459592 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.23.6" needs transfer: "k8s.gcr.io/kube-proxy:v1.23.6" does not exist at hash "4c037545240644e87d79f6b4071331f9adea6176339c98e529b4af8af00d4e47" in container runtime
I0516 23:09:21.592005 459592 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.23.6
I0516 23:09:21.592022 459592 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.23.6": (1.727357281s)
I0516 23:09:21.591974 459592 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.23.6": (1.729400375s)
I0516 23:09:21.592043 459592 ssh_runner.go:195] Run: which crictl
I0516 23:09:21.592046 459592 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.23.6" needs transfer: "k8s.gcr.io/kube-apiserver:v1.23.6" does not exist at hash "8fa62c12256df9d9d0c3f1cf90856e27d90f209f42271c2f19326a705342c3b6" in container runtime
I0516 23:09:21.592057 459592 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.23.6" needs transfer: "k8s.gcr.io/kube-scheduler:v1.23.6" does not exist at hash "595f327f224a42213913a39d224c8aceb96c81ad3909ae13f6045f570aafe8f0" in container runtime
I0516 23:09:21.592073 459592 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.23.6
I0516 23:09:21.592105 459592 ssh_runner.go:195] Run: which crictl
I0516 23:09:21.592071 459592 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.23.6
I0516 23:09:21.592148 459592 ssh_runner.go:195] Run: which crictl
I0516 23:09:21.592150 459592 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6": (1.615301897s)
I0516 23:09:21.592224 459592 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
I0516 23:09:21.592267 459592 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
I0516 23:09:21.592312 459592 ssh_runner.go:195] Run: which crictl
I0516 23:09:21.596933 459592 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.6: stat -c "%s %y" /var/lib/minikube/images/pause_3.6: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/pause_3.6': No such file or directory
I0516 23:09:21.596962 459592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 --> /var/lib/minikube/images/pause_3.6 (301056 bytes)
I0516 23:09:21.599854 459592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.23.6
I0516 23:09:21.599899 459592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I0516 23:09:21.599953 459592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.23.6
I0516 23:09:21.599963 459592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.23.6
I0516 23:09:21.599992 459592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.23.6
I0516 23:09:21.600033 459592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
I0516 23:09:21.637846 459592 containerd.go:287] Loading image: /var/lib/minikube/images/pause_3.6
I0516 23:09:21.637940 459592 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.6
I0516 23:09:21.887833 459592 image.go:176] found k8s.gcr.io/etcd:3.5.1-0 locally: &{UncompressedImageCore:0xc000641558 lock:{state:0 sema:0} manifest:<nil>}
I0516 23:09:21.887915 459592 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.1-0"
I0516 23:09:22.634081 459592 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.23.6: (1.034092276s)
I0516 23:09:22.634157 459592 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.23.6: (1.034281887s)
I0516 23:09:22.634179 459592 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6
I0516 23:09:22.634163 459592 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6
I0516 23:09:22.634230 459592 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6: (1.034175393s)
I0516 23:09:22.634239 459592 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
I0516 23:09:22.634139 459592 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.034204546s)
I0516 23:09:22.634277 459592 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.23.6
I0516 23:09:22.634282 459592 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.23.6
I0516 23:09:22.634283 459592 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
I0516 23:09:22.634293 459592 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.23.6: (1.03431335s)
I0516 23:09:22.634310 459592 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6
I0516 23:09:22.634310 459592 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
I0516 23:09:22.634086 459592 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.23.6: (1.034067208s)
I0516 23:09:22.634362 459592 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
I0516 23:09:22.634370 459592 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.23.6
I0516 23:09:22.634367 459592 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6
I0516 23:09:22.634451 459592 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.23.6
I0516 23:09:22.640107 459592 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.6: (1.002123018s)
I0516 23:09:22.640134 459592 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 from cache
I0516 23:09:22.640377 459592 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.23.6: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.23.6: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.23.6': No such file or directory
I0516 23:09:22.640403 459592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6 --> /var/lib/minikube/images/kube-apiserver_v1.23.6 (32604160 bytes)
I0516 23:09:22.640384 459592 cache_images.go:116] "k8s.gcr.io/etcd:3.5.1-0" needs transfer: "k8s.gcr.io/etcd:3.5.1-0" does not exist at hash "25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d" in container runtime
I0516 23:09:22.640456 459592 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
I0516 23:09:22.640477 459592 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.1-0
I0516 23:09:22.640484 459592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
I0516 23:09:22.640503 459592 ssh_runner.go:195] Run: which crictl
I0516 23:09:22.640545 459592 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.23.6: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.23.6: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.23.6': No such file or directory
I0516 23:09:22.640566 459592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6 --> /var/lib/minikube/images/kube-proxy_v1.23.6 (39280128 bytes)
I0516 23:09:22.640572 459592 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.23.6: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.23.6: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.23.6': No such file or directory
I0516 23:09:22.640587 459592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6 --> /var/lib/minikube/images/kube-scheduler_v1.23.6 (15136768 bytes)
I0516 23:09:22.640626 459592 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
I0516 23:09:22.640649 459592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (13586432 bytes)
I0516 23:09:22.645552 459592 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.23.6: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.23.6: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.23.6': No such file or directory
I0516 23:09:22.645792 459592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6 --> /var/lib/minikube/images/kube-controller-manager_v1.23.6 (30176256 bytes)
I0516 23:09:22.645729 459592 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.1-0
I0516 23:09:22.781072 459592 containerd.go:287] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I0516 23:09:22.781142 459592 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
I0516 23:09:18.925000 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:21.236653 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:23.425333 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:23.833601 462620 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (5.995653581s)
I0516 23:09:23.833659 462620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0516 23:09:23.843583 462620 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0516 23:09:23.940046 462620 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0516 23:09:24.029148 462620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0516 23:09:24.039181 462620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0516 23:09:24.052239 462620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZ
mFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ
2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
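The command above deploys containerd's full config.toml by piping a base64 blob through `base64 -d` into the target file, which keeps the multi-line TOML payload safe to embed in a single shell command. A sketch of the same round trip on a small sample payload, writing to /tmp rather than /etc/containerd:

```shell
# Encode a tiny TOML fragment, then decode it back into a file,
# mirroring the "printf <base64> | base64 -d | tee <file>" pattern
# the log uses for /etc/containerd/config.toml.
payload=$(printf 'version = 2\nroot = "/var/lib/containerd"\n' | base64 | tr -d '\n')
printf '%s' "$payload" | base64 -d > /tmp/config-demo.toml
```

Decoding the blob from the log itself is also a convenient way to inspect exactly which containerd settings (sandbox image, CNI conf dir, registry mirrors) minikube pushed to the node.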
I0516 23:09:24.067401 462620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0516 23:09:24.074445 462620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0516 23:09:24.080947 462620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0516 23:09:24.164578 462620 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0516 23:09:24.238001 462620 start.go:456] Will wait 60s for socket path /run/containerd/containerd.sock
I0516 23:09:24.238077 462620 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0516 23:09:24.242833 462620 start.go:477] Will wait 60s for crictl version
I0516 23:09:24.242885 462620 ssh_runner.go:195] Run: sudo crictl version
I0516 23:09:24.293154 462620 start.go:486] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.6.4
RuntimeApiVersion: v1alpha2
I0516 23:09:24.293207 462620 ssh_runner.go:195] Run: containerd --version
I0516 23:09:24.331538 462620 ssh_runner.go:195] Run: containerd --version
I0516 23:09:24.386516 462620 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
I0516 23:09:24.388424 462620 cli_runner.go:164] Run: docker network inspect cert-options-20220516230904-297512 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0516 23:09:24.431786 462620 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0516 23:09:24.437013 462620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
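The command above is minikube's idempotent /etc/hosts update: delete any line already ending in the host name, append the fresh mapping, then copy the temp file back so the entry exists exactly once. The same pattern, demonstrated on a temp file instead of the real /etc/hosts:

```shell
# Idempotent hosts-file update: remove the stale mapping (matched by a
# tab followed by the name at end of line), append the new one, swap in.
hosts=/tmp/hosts.demo
tab=$(printf '\t')
printf '127.0.0.1 localhost\n10.0.0.1\thost.minikube.internal\n' > "$hosts"
{ grep -v "${tab}host.minikube.internal\$" "$hosts"; \
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
```

Running it repeatedly leaves a single `host.minikube.internal` entry, which is why the log can issue it unconditionally on every start.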
I0516 23:09:24.453179 462620 out.go:177] - kubelet.cni-conf-dir=/etc/cni/net.mk
I0516 23:09:24.454874 462620 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
I0516 23:09:24.454938 462620 ssh_runner.go:195] Run: sudo crictl images --output json
I0516 23:09:24.492904 462620 containerd.go:607] all images are preloaded for containerd runtime.
I0516 23:09:24.492919 462620 containerd.go:521] Images already preloaded, skipping extraction
I0516 23:09:24.492974 462620 ssh_runner.go:195] Run: sudo crictl images --output json
I0516 23:09:24.517901 462620 containerd.go:607] all images are preloaded for containerd runtime.
I0516 23:09:24.517912 462620 cache_images.go:84] Images are preloaded, skipping loading
I0516 23:09:24.517954 462620 ssh_runner.go:195] Run: sudo crictl info
I0516 23:09:24.547048 462620 cni.go:95] Creating CNI manager for ""
I0516 23:09:24.547058 462620 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I0516 23:09:24.547072 462620 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0516 23:09:24.547084 462620 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8555 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-options-20220516230904-297512 NodeName:cert-options-20220516230904-297512 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs C
lientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0516 23:09:24.547217 462620 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8555
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "cert-options-20220516230904-297512"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8555
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.23.6
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
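The generated kubeadm config above is one multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by `---`, later written to /var/tmp/minikube/kubeadm.yaml.new. A quick sketch of sanity-checking such a stream by counting its documents (file path and contents here are illustrative):

```shell
# Build a stand-in multi-document stream with the same four kinds,
# then count the "kind:" headers, one per document.
cat > /tmp/kubeadm-demo.yaml <<'EOF'
kind: InitConfiguration
---
kind: ClusterConfiguration
---
kind: KubeletConfiguration
---
kind: KubeProxyConfiguration
EOF
grep -c '^kind:' /tmp/kubeadm-demo.yaml
```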
I0516 23:09:24.547296 462620 kubeadm.go:936] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=cert-options-20220516230904-297512 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.23.6 ClusterName:cert-options-20220516230904-297512 Namespace:default APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8555 NodeName:}
I0516 23:09:24.547338 462620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
I0516 23:09:24.554641 462620 binaries.go:44] Found k8s binaries, skipping transfer
I0516 23:09:24.554703 462620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0516 23:09:24.561420 462620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (579 bytes)
I0516 23:09:24.575118 462620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0516 23:09:24.590561 462620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
I0516 23:09:24.603791 462620 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0516 23:09:24.606917 462620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0516 23:09:24.616554 462620 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512 for IP: 192.168.49.2
I0516 23:09:24.616663 462620 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/ca.key
I0516 23:09:24.616704 462620 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/proxy-client-ca.key
I0516 23:09:24.616763 462620 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/client.key
I0516 23:09:24.616776 462620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/client.crt with IP's: []
I0516 23:09:24.817710 462620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/client.crt ...
I0516 23:09:24.817725 462620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/client.crt: {Name:mkb7743287faa5eb29d75cf7deb1cd1f7fbc1dec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0516 23:09:24.817912 462620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/client.key ...
I0516 23:09:24.817919 462620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/client.key: {Name:mkd5ace2bb94c494b582bee41176a94556e758f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0516 23:09:24.818004 462620 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/apiserver.key.eb39f9d8
I0516 23:09:24.818013 462620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/apiserver.crt.eb39f9d8 with IP's: [127.0.0.1 192.168.15.15 192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0516 23:09:23.942802 459592 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.1-0: (1.296935694s)
I0516 23:09:23.942835 459592 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0
I0516 23:09:23.942915 459592 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.1-0
I0516 23:09:24.215522 459592 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5: (1.434337922s)
I0516 23:09:24.215558 459592 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I0516 23:09:24.215589 459592 containerd.go:287] Loading image: /var/lib/minikube/images/coredns_v1.8.6
I0516 23:09:24.215615 459592 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.1-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.1-0: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/etcd_3.5.1-0': No such file or directory
I0516 23:09:24.215635 459592 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
I0516 23:09:24.215643 459592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 --> /var/lib/minikube/images/etcd_3.5.1-0 (98891776 bytes)
I0516 23:09:24.999485 459592 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache
I0516 23:09:24.999528 459592 containerd.go:287] Loading image: /var/lib/minikube/images/kube-scheduler_v1.23.6
I0516 23:09:24.999578 459592 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.23.6
I0516 23:09:26.083014 459592 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.23.6: (1.083401677s)
I0516 23:09:26.083044 459592 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6 from cache
I0516 23:09:26.083062 459592 containerd.go:287] Loading image: /var/lib/minikube/images/kube-apiserver_v1.23.6
I0516 23:09:26.083100 459592 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.23.6
I0516 23:09:27.555143 459592 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.23.6: (1.47200605s)
I0516 23:09:27.555180 459592 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6 from cache
I0516 23:09:27.555210 459592 containerd.go:287] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.23.6
I0516 23:09:27.555251 459592 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.23.6
I0516 23:09:25.924970 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:27.925015 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:25.063536 462620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/apiserver.crt.eb39f9d8 ...
I0516 23:09:25.063563 462620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/apiserver.crt.eb39f9d8: {Name:mk7d5580994046146d4af54b0ff3e11756c16915 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0516 23:09:25.063777 462620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/apiserver.key.eb39f9d8 ...
I0516 23:09:25.063787 462620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/apiserver.key.eb39f9d8: {Name:mk7332c44ebfe807eb9e598d45d69704e185a00f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0516 23:09:25.063891 462620 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/apiserver.crt.eb39f9d8 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/apiserver.crt
I0516 23:09:25.063952 462620 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/apiserver.key.eb39f9d8 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/apiserver.key
I0516 23:09:25.064004 462620 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/proxy-client.key
I0516 23:09:25.064016 462620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/proxy-client.crt with IP's: []
I0516 23:09:25.206230 462620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/proxy-client.crt ...
I0516 23:09:25.206248 462620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/proxy-client.crt: {Name:mke5cc19df1f540735cdb9c62710dd13ec5b419e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0516 23:09:25.206511 462620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/proxy-client.key ...
I0516 23:09:25.206537 462620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/proxy-client.key: {Name:mk2c1dc7269826fd3405a3075752f789d738827f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0516 23:09:25.206907 462620 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/297512.pem (1338 bytes)
W0516 23:09:25.206966 462620 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/297512_empty.pem, impossibly tiny 0 bytes
I0516 23:09:25.206983 462620 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/ca-key.pem (1675 bytes)
I0516 23:09:25.207020 462620 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/ca.pem (1082 bytes)
I0516 23:09:25.207052 462620 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/cert.pem (1123 bytes)
I0516 23:09:25.207082 462620 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/key.pem (1679 bytes)
I0516 23:09:25.207144 462620 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/files/etc/ssl/certs/2975122.pem (1708 bytes)
I0516 23:09:25.207736 462620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1452 bytes)
I0516 23:09:25.226660 462620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0516 23:09:25.243567 462620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0516 23:09:25.259643 462620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/cert-options-20220516230904-297512/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0516 23:09:25.275893 462620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0516 23:09:25.292828 462620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0516 23:09:25.309161 462620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0516 23:09:25.325433 462620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0516 23:09:25.341957 462620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/files/etc/ssl/certs/2975122.pem --> /usr/share/ca-certificates/2975122.pem (1708 bytes)
I0516 23:09:25.358525 462620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0516 23:09:25.448673 462620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/297512.pem --> /usr/share/ca-certificates/297512.pem (1338 bytes)
I0516 23:09:25.469816 462620 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
I0516 23:09:25.484729 462620 ssh_runner.go:195] Run: openssl version
I0516 23:09:25.491030 462620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2975122.pem && ln -fs /usr/share/ca-certificates/2975122.pem /etc/ssl/certs/2975122.pem"
I0516 23:09:25.500720 462620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2975122.pem
I0516 23:09:25.504539 462620 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 16 21:59 /usr/share/ca-certificates/2975122.pem
I0516 23:09:25.504582 462620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2975122.pem
I0516 23:09:25.509961 462620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2975122.pem /etc/ssl/certs/3ec20f2e.0"
I0516 23:09:25.518222 462620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0516 23:09:25.525888 462620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0516 23:09:25.529541 462620 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 16 21:53 /usr/share/ca-certificates/minikubeCA.pem
I0516 23:09:25.529576 462620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0516 23:09:25.535144 462620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0516 23:09:25.543458 462620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/297512.pem && ln -fs /usr/share/ca-certificates/297512.pem /etc/ssl/certs/297512.pem"
I0516 23:09:25.552262 462620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/297512.pem
I0516 23:09:25.555722 462620 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 16 21:59 /usr/share/ca-certificates/297512.pem
I0516 23:09:25.555755 462620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/297512.pem
I0516 23:09:25.562050 462620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/297512.pem /etc/ssl/certs/51391683.0"
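The hash-and-symlink sequence above (`openssl x509 -hash -noout` followed by `ln -fs`) is how OpenSSL-style trust directories are populated: each CA becomes reachable through a symlink named after its 8-hex-digit subject hash. A minimal standalone sketch of the same pattern, using a throwaway self-signed CA in a temp directory rather than minikube's real certificate paths:

```shell
# Create a throwaway CA cert, compute its OpenSSL subject hash, and
# install the <hash>.0 symlink that OpenSSL's by-hash lookup expects.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" -keyout "$tmp/ca.key" -out "$tmp/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$tmp/ca.pem")
ln -fs "$tmp/ca.pem" "$tmp/$hash.0"
echo "$hash"
```

The `.0` suffix is a collision counter; a second CA whose subject happens to hash to the same value would be linked as `<hash>.1`.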
I0516 23:09:25.570761 462620 kubeadm.go:391] StartCluster: {Name:cert-options-20220516230904-297512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:cert-options-20220516230904-297512 Namespace:default APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8555 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8555 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0516 23:09:25.570847 462620 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0516 23:09:25.570884 462620 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0516 23:09:25.598956 462620 cri.go:87] found id: ""
I0516 23:09:25.598999 462620 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0516 23:09:25.606642 462620 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0516 23:09:25.613709 462620 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0516 23:09:25.613748 462620 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0516 23:09:25.620582 462620 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0516 23:09:25.620610 462620 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0516 23:09:28.863247 459592 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.23.6: (1.307968762s)
I0516 23:09:28.863275 459592 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6 from cache
I0516 23:09:28.863303 459592 containerd.go:287] Loading image: /var/lib/minikube/images/kube-proxy_v1.23.6
I0516 23:09:28.863342 459592 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.23.6
I0516 23:09:30.469753 459592 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.23.6: (1.606378882s)
I0516 23:09:30.469785 459592 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6 from cache
I0516 23:09:30.469810 459592 containerd.go:287] Loading image: /var/lib/minikube/images/etcd_3.5.1-0
I0516 23:09:30.469852 459592 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.1-0
I0516 23:09:30.424783 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:32.924594 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:37.931789 459592 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.1-0: (7.461903841s)
I0516 23:09:37.931822 459592 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 from cache
I0516 23:09:37.931848 459592 cache_images.go:123] Successfully loaded all cached images
I0516 23:09:37.931857 459592 cache_images.go:92] LoadImages completed in 18.430115192s
I0516 23:09:37.931914 459592 ssh_runner.go:195] Run: sudo crictl info
I0516 23:09:37.975557 459592 cni.go:95] Creating CNI manager for ""
I0516 23:09:37.975583 459592 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I0516 23:09:37.975599 459592 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0516 23:09:37.975624 459592 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20220516230810-297512 NodeName:kubernetes-upgrade-20220516230810-297512 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0516 23:09:37.975771 459592 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.67.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "kubernetes-upgrade-20220516230810-297512"
  kubeletExtraArgs:
    node-ip: 192.168.67.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.23.6
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0516 23:09:37.975887 459592 kubeadm.go:936] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubernetes-upgrade-20220516230810-297512 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.23.6 ClusterName:kubernetes-upgrade-20220516230810-297512 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
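The kubelet flags in the unit text above are delivered as a systemd drop-in (`10-kubeadm.conf` under `kubelet.service.d`), where an empty `ExecStart=` line first clears the base unit's command before the override supplies the real one. A sketch of that pattern, writing into a temp directory instead of `/etc/systemd/system` and with the flag list abbreviated for illustration:

```shell
# Recreate the kubelet drop-in pattern from the log. The empty
# "ExecStart=" resets the base unit's command; the second line is the
# only ExecStart systemd will then run. Paths/flags are illustrative.
dropin_dir=$(mktemp -d)
cat > "$dropin_dir/10-kubeadm.conf" <<'EOF'
[Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf
[Install]
EOF
grep -c '^ExecStart=' "$dropin_dir/10-kubeadm.conf"
```

In a real deployment this file would be followed by `systemctl daemon-reload` so systemd picks up the override.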
I0516 23:09:37.975961 459592 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
I0516 23:09:37.984432 459592 binaries.go:44] Found k8s binaries, skipping transfer
I0516 23:09:37.984497 459592 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0516 23:09:37.991426 459592 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (585 bytes)
I0516 23:09:38.006327 459592 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0516 23:09:38.019328 459592 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
I0516 23:09:38.038188 459592 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I0516 23:09:38.042596 459592 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
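The grep/echo pipeline above is an idempotent rewrite of /etc/hosts: any stale tab-separated entry for the control-plane name is filtered out, the fresh mapping is appended, and the result is copied back over the original. A sketch of the same pattern against a temp file standing in for /etc/hosts:

```shell
# Replace any existing tab-separated control-plane.minikube.internal
# entry with the current IP, mirroring the command the log runs via sudo.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n192.168.67.3\tcontrol-plane.minikube.internal\n' > "$hosts"
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  echo "192.168.67.2 control-plane.minikube.internal"; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Filtering before appending is what makes repeated runs safe: the file ends up with exactly one entry for the name no matter how many times the cluster is restarted.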
I0516 23:09:38.059688 459592 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/kubernetes-upgrade-20220516230810-297512 for IP: 192.168.67.2
I0516 23:09:38.059814 459592 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/ca.key
I0516 23:09:38.059864 459592 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/proxy-client-ca.key
I0516 23:09:38.060009 459592 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/kubernetes-upgrade-20220516230810-297512/client.key
I0516 23:09:38.060083 459592 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/kubernetes-upgrade-20220516230810-297512/apiserver.key.c7fa3a9e
I0516 23:09:38.060134 459592 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/kubernetes-upgrade-20220516230810-297512/proxy-client.key
I0516 23:09:38.060256 459592 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/297512.pem (1338 bytes)
W0516 23:09:38.060292 459592 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/297512_empty.pem, impossibly tiny 0 bytes
I0516 23:09:38.060304 459592 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/ca-key.pem (1675 bytes)
I0516 23:09:38.060336 459592 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/ca.pem (1082 bytes)
I0516 23:09:38.060366 459592 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/cert.pem (1123 bytes)
I0516 23:09:38.060393 459592 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/key.pem (1679 bytes)
I0516 23:09:38.060451 459592 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/files/etc/ssl/certs/2975122.pem (1708 bytes)
I0516 23:09:38.061338 459592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/kubernetes-upgrade-20220516230810-297512/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0516 23:09:38.085769 459592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/kubernetes-upgrade-20220516230810-297512/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0516 23:09:38.102742 459592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/kubernetes-upgrade-20220516230810-297512/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0516 23:09:38.118965 459592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/kubernetes-upgrade-20220516230810-297512/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0516 23:09:38.144014 459592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0516 23:09:38.166078 459592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0516 23:09:38.182622 459592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0516 23:09:38.199058 459592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0516 23:09:38.220271 459592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/certs/297512.pem --> /usr/share/ca-certificates/297512.pem (1338 bytes)
I0516 23:09:38.241024 459592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/files/etc/ssl/certs/2975122.pem --> /usr/share/ca-certificates/2975122.pem (1708 bytes)
I0516 23:09:38.263072 459592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0516 23:09:38.284479 459592 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
I0516 23:09:35.425482 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:37.925010 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:42.300409 462620 out.go:204] - Generating certificates and keys ...
I0516 23:09:42.304323 462620 out.go:204] - Booting up control plane ...
I0516 23:09:42.307017 462620 out.go:204] - Configuring RBAC rules ...
I0516 23:09:42.308698 462620 cni.go:95] Creating CNI manager for ""
I0516 23:09:42.308704 462620 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I0516 23:09:42.310142 462620 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0516 23:09:38.298143 459592 ssh_runner.go:195] Run: openssl version
I0516 23:09:38.302927 459592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/297512.pem && ln -fs /usr/share/ca-certificates/297512.pem /etc/ssl/certs/297512.pem"
I0516 23:09:38.312018 459592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/297512.pem
I0516 23:09:38.315372 459592 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 16 21:59 /usr/share/ca-certificates/297512.pem
I0516 23:09:38.315420 459592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/297512.pem
I0516 23:09:38.320046 459592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/297512.pem /etc/ssl/certs/51391683.0"
I0516 23:09:38.326904 459592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2975122.pem && ln -fs /usr/share/ca-certificates/2975122.pem /etc/ssl/certs/2975122.pem"
I0516 23:09:38.334323 459592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2975122.pem
I0516 23:09:38.337413 459592 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 16 21:59 /usr/share/ca-certificates/2975122.pem
I0516 23:09:38.337459 459592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2975122.pem
I0516 23:09:38.342603 459592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2975122.pem /etc/ssl/certs/3ec20f2e.0"
I0516 23:09:38.349776 459592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0516 23:09:38.357768 459592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0516 23:09:38.361102 459592 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 16 21:53 /usr/share/ca-certificates/minikubeCA.pem
I0516 23:09:38.361150 459592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0516 23:09:38.366292 459592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0516 23:09:38.376891 459592 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-20220516230810-297512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.31@sha256:c3375f1b260bd936aa532a0c749626e07d94ab129a7f2395e95345aa04ca708c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kubernetes-upgrade-20220516230810-297512 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0516 23:09:38.376995 459592 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0516 23:09:38.377095 459592 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0516 23:09:38.402184 459592 cri.go:87] found id: ""
I0516 23:09:38.402241 459592 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0516 23:09:38.408625 459592 kubeadm.go:402] found existing configuration files, will attempt cluster restart
I0516 23:09:38.408645 459592 kubeadm.go:601] restartCluster start
I0516 23:09:38.408688 459592 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0516 23:09:38.418942 459592 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0516 23:09:38.419625 459592 kubeconfig.go:116] verify returned: extract IP: "kubernetes-upgrade-20220516230810-297512" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/kubeconfig
I0516 23:09:38.419969 459592 kubeconfig.go:127] "kubernetes-upgrade-20220516230810-297512" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/kubeconfig - will repair!
I0516 23:09:38.420625 459592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/kubeconfig: {Name:mkd95e9ac27518d5cd4baf4bf5f31080484189e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0516 23:09:38.421449 459592 kapi.go:59] client config for kubernetes-upgrade-20220516230810-297512: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/kubernetes-upgrade-20220516230810-297512/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/profiles/kubernetes-upgrade-20220516230810-297512/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1702580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0516 23:09:38.422049 459592 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0516 23:09:38.430110 459592 kubeadm.go:569] needs reconfigure: configs differ:
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml 2022-05-16 23:08:22.717165727 +0000
+++ /var/tmp/minikube/kubeadm.yaml.new 2022-05-16 23:09:38.031823481 +0000
@@ -1,4 +1,4 @@
-apiVersion: kubeadm.k8s.io/v1beta1
+apiVersion: kubeadm.k8s.io/v1beta3
 kind: InitConfiguration
 localAPIEndpoint:
   advertiseAddress: 192.168.67.2
@@ -17,7 +17,7 @@
     node-ip: 192.168.67.2
   taints: []
 ---
-apiVersion: kubeadm.k8s.io/v1beta1
+apiVersion: kubeadm.k8s.io/v1beta3
 kind: ClusterConfiguration
 apiServer:
   certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
@@ -31,16 +31,14 @@
   extraArgs:
     leader-elect: "false"
 certificatesDir: /var/lib/minikube/certs
-clusterName: kubernetes-upgrade-20220516230810-297512
+clusterName: mk
 controlPlaneEndpoint: control-plane.minikube.internal:8443
-dns:
-  type: CoreDNS
 etcd:
   local:
     dataDir: /var/lib/minikube/etcd
     extraArgs:
-      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
-kubernetesVersion: v1.16.0
+      proxy-refresh-interval: "70000"
+kubernetesVersion: v1.23.6
 networking:
   dnsDomain: cluster.local
   podSubnet: "10.244.0.0/16"
-- /stdout --
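The needs-reconfigure decision above is driven by diffing the on-host kubeadm.yaml against the freshly rendered kubeadm.yaml.new: a non-empty diff (i.e. a non-zero exit status from `diff`) is what triggers the phased re-init that follows. A minimal sketch of that check, using two throwaway one-line configs rather than minikube's real files:

```shell
# Mirror the staleness check: render "old" and "new" configs and let
# diff's exit status decide whether a reconfigure is needed.
old=$(mktemp)
new=$(mktemp)
echo 'kubernetesVersion: v1.16.0' > "$old"
echo 'kubernetesVersion: v1.23.6' > "$new"
if diff -u "$old" "$new" > /dev/null; then
  needs_reconfigure=no     # files identical, exit 0
else
  needs_reconfigure=yes    # files differ, exit 1
fi
echo "needs_reconfigure=$needs_reconfigure"
```

Comparing the rendered config rather than any cluster state keeps the check cheap and deterministic: identical files mean the previous invocation's settings are still in effect.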
I0516 23:09:38.430136 459592 kubeadm.go:1067] stopping kube-system containers ...
I0516 23:09:38.430151 459592 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0516 23:09:38.430190 459592 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0516 23:09:38.461225 459592 cri.go:87] found id: ""
I0516 23:09:38.461288 459592 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0516 23:09:38.475357 459592 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0516 23:09:38.484463 459592 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5763 May 16 23:08 /etc/kubernetes/admin.conf
-rw------- 1 root root 5799 May 16 23:08 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 5967 May 16 23:08 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5751 May 16 23:08 /etc/kubernetes/scheduler.conf
I0516 23:09:38.484516 459592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0516 23:09:38.492080 459592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0516 23:09:38.498707 459592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0516 23:09:38.505379 459592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0516 23:09:38.512442 459592 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0516 23:09:38.519034 459592 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0516 23:09:38.519054 459592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0516 23:09:38.572321 459592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0516 23:09:39.455865 459592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0516 23:09:39.631394 459592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0516 23:09:39.688910 459592 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0516 23:09:39.746273 459592 api_server.go:51] waiting for apiserver process to appear ...
I0516 23:09:39.746336 459592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0516 23:09:40.256826 459592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0516 23:09:40.756816 459592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0516 23:09:41.256788 459592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0516 23:09:41.757262 459592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0516 23:09:42.256474 459592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0516 23:09:42.756974 459592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0516 23:09:43.256285 459592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0516 23:09:40.425167 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:42.425352 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:42.311389 462620 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0516 23:09:42.314668 462620 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
I0516 23:09:42.314678 462620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I0516 23:09:42.328129 462620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0516 23:09:43.036213 462620 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0516 23:09:43.036279 462620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:09:43.036294 462620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.0 minikube.k8s.io/commit=8e10bad027676fc4eb80b4901727275dc6ddebc2 minikube.k8s.io/name=cert-options-20220516230904-297512 minikube.k8s.io/updated_at=2022_05_16T23_09_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0516 23:09:43.132600 462620 ops.go:34] apiserver oom_adj: -16
I0516 23:09:43.132636 462620 kubeadm.go:1020] duration metric: took 96.407121ms to wait for elevateKubeSystemPrivileges.
I0516 23:09:43.132656 462620 kubeadm.go:393] StartCluster complete in 17.561904671s
I0516 23:09:43.132673 462620 settings.go:142] acquiring lock: {Name:mk9ef5cf2a3a16dfc0f8f117e884e02b4660452f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0516 23:09:43.132766 462620 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/kubeconfig
I0516 23:09:43.133907 462620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/kubeconfig: {Name:mkd95e9ac27518d5cd4baf4bf5f31080484189e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0516 23:09:43.649728 462620 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "cert-options-20220516230904-297512" rescaled to 1
I0516 23:09:43.649783 462620 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8555 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0516 23:09:43.651120 462620 out.go:177] * Verifying Kubernetes components...
I0516 23:09:43.649848 462620 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0516 23:09:43.649861 462620 addons.go:415] enableAddons start: toEnable=map[], additional=[]
I0516 23:09:43.650109 462620 config.go:178] Loaded profile config "cert-options-20220516230904-297512": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
I0516 23:09:43.652422 462620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0516 23:09:43.652447 462620 addons.go:65] Setting default-storageclass=true in profile "cert-options-20220516230904-297512"
I0516 23:09:43.652447 462620 addons.go:65] Setting storage-provisioner=true in profile "cert-options-20220516230904-297512"
I0516 23:09:43.652465 462620 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-options-20220516230904-297512"
I0516 23:09:43.652468 462620 addons.go:153] Setting addon storage-provisioner=true in "cert-options-20220516230904-297512"
W0516 23:09:43.652475 462620 addons.go:165] addon storage-provisioner should already be in state true
I0516 23:09:43.652520 462620 host.go:66] Checking if "cert-options-20220516230904-297512" exists ...
I0516 23:09:43.652860 462620 cli_runner.go:164] Run: docker container inspect cert-options-20220516230904-297512 --format={{.State.Status}}
I0516 23:09:43.653022 462620 cli_runner.go:164] Run: docker container inspect cert-options-20220516230904-297512 --format={{.State.Status}}
I0516 23:09:43.693018 462620 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0516 23:09:43.694434 462620 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0516 23:09:43.694445 462620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0516 23:09:43.694499 462620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20220516230904-297512
I0516 23:09:43.701319 462620 addons.go:153] Setting addon default-storageclass=true in "cert-options-20220516230904-297512"
W0516 23:09:43.701339 462620 addons.go:165] addon default-storageclass should already be in state true
I0516 23:09:43.701371 462620 host.go:66] Checking if "cert-options-20220516230904-297512" exists ...
I0516 23:09:43.701976 462620 cli_runner.go:164] Run: docker container inspect cert-options-20220516230904-297512 --format={{.State.Status}}
I0516 23:09:43.718383 462620 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0516 23:09:43.719452 462620 api_server.go:51] waiting for apiserver process to appear ...
I0516 23:09:43.719492 462620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0516 23:09:43.732949 462620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49632 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/cert-options-20220516230904-297512/id_rsa Username:docker}
I0516 23:09:43.739474 462620 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
I0516 23:09:43.739488 462620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0516 23:09:43.739530 462620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20220516230904-297512
I0516 23:09:43.785355 462620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49632 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-294139-1d35e4fcf9c08f3bcb7cde44e4ac5542113a5f2f/.minikube/machines/cert-options-20220516230904-297512/id_rsa Username:docker}
I0516 23:09:43.840282 462620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0516 23:09:43.938093 462620 start.go:815] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
I0516 23:09:43.938183 462620 api_server.go:71] duration metric: took 288.361909ms to wait for apiserver process to appear ...
I0516 23:09:43.938199 462620 api_server.go:87] waiting for apiserver healthz status ...
I0516 23:09:43.938212 462620 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8555/healthz ...
I0516 23:09:43.939921 462620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0516 23:09:43.943647 462620 api_server.go:266] https://192.168.49.2:8555/healthz returned 200:
ok
I0516 23:09:43.944759 462620 api_server.go:140] control plane version: v1.23.6
I0516 23:09:43.944769 462620 api_server.go:130] duration metric: took 6.565415ms to wait for apiserver health ...
I0516 23:09:43.944776 462620 system_pods.go:43] waiting for kube-system pods to appear ...
I0516 23:09:43.949689 462620 system_pods.go:59] 2 kube-system pods found
I0516 23:09:43.949700 462620 system_pods.go:61] "etcd-cert-options-20220516230904-297512" [5cffacef-0549-4a64-aea3-1e728f7f9c04] Pending
I0516 23:09:43.949704 462620 system_pods.go:61] "kube-apiserver-cert-options-20220516230904-297512" [6794334f-dc9a-4df1-ac03-b623f4731e9c] Pending
I0516 23:09:43.949708 462620 system_pods.go:74] duration metric: took 4.927616ms to wait for pod list to return data ...
I0516 23:09:43.949715 462620 kubeadm.go:548] duration metric: took 299.909296ms to wait for : map[apiserver:true system_pods:true] ...
I0516 23:09:43.949725 462620 node_conditions.go:102] verifying NodePressure condition ...
I0516 23:09:43.952223 462620 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
I0516 23:09:43.952234 462620 node_conditions.go:123] node cpu capacity is 8
I0516 23:09:43.952243 462620 node_conditions.go:105] duration metric: took 2.514834ms to run NodePressure ...
I0516 23:09:43.952252 462620 start.go:213] waiting for startup goroutines ...
I0516 23:09:44.159385 462620 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0516 23:09:44.160510 462620 addons.go:417] enableAddons completed in 510.658464ms
I0516 23:09:44.200106 462620 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
I0516 23:09:44.202034 462620 out.go:177] * Done! kubectl is now configured to use "cert-options-20220516230904-297512" cluster and "default" namespace by default
I0516 23:09:44.925083 422378 node_ready.go:58] node "offline-containerd-20220516230448-297512" has status "Ready":"False"
I0516 23:09:44.927447 422378 node_ready.go:38] duration metric: took 4m0.009934675s waiting for node "offline-containerd-20220516230448-297512" to be "Ready" ...
I0516 23:09:44.930088 422378 out.go:177]
W0516 23:09:44.931496 422378 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
W0516 23:09:44.931518 422378 out.go:239] *
W0516 23:09:44.932297 422378 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0516 23:09:44.933812 422378 out.go:177]
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
9eda85f75c455 6de166512aa22 About a minute ago Running kindnet-cni 1 e8caa25cf9d80
3cfc0499a1923 6de166512aa22 4 minutes ago Exited kindnet-cni 0 e8caa25cf9d80
abbc2781d4d7a 4c03754524064 4 minutes ago Running kube-proxy 0 7a8bb11262539
2741a6aed88ad 25f8c7f3da61c 4 minutes ago Running etcd 0 784e1017ff6d8
b061befa49e6a df7b72818ad2e 4 minutes ago Running kube-controller-manager 0 f4e528dde2565
a0166bfb7c29b 595f327f224a4 4 minutes ago Running kube-scheduler 0 1b4d85e6c6e5e
b24ba74efb34d 8fa62c12256df 4 minutes ago Running kube-apiserver 0 177a3113774dd
*
* ==> containerd <==
* -- Logs begin at Mon 2022-05-16 23:05:12 UTC, end at Mon 2022-05-16 23:09:46 UTC. --
May 16 23:05:44 offline-containerd-20220516230448-297512 containerd[500]: time="2022-05-16T23:05:44.271510150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 23:05:44 offline-containerd-20220516230448-297512 containerd[500]: time="2022-05-16T23:05:44.271525223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 23:05:44 offline-containerd-20220516230448-297512 containerd[500]: time="2022-05-16T23:05:44.271813135Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e8caa25cf9d8089dd4ab0577eae76c33ad39c1b8867fe71e6e641fe4e983116b pid=1682 runtime=io.containerd.runc.v2
May 16 23:05:44 offline-containerd-20220516230448-297512 containerd[500]: time="2022-05-16T23:05:44.276785355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 23:05:44 offline-containerd-20220516230448-297512 containerd[500]: time="2022-05-16T23:05:44.276868212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 23:05:44 offline-containerd-20220516230448-297512 containerd[500]: time="2022-05-16T23:05:44.276882791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 23:05:44 offline-containerd-20220516230448-297512 containerd[500]: time="2022-05-16T23:05:44.277091445Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a8bb11262539e54f81b95dd4a59f43fafb693d541d549e51d54ab88794daa12 pid=1696 runtime=io.containerd.runc.v2
May 16 23:05:44 offline-containerd-20220516230448-297512 containerd[500]: time="2022-05-16T23:05:44.377824802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-blcfp,Uid:04a9970b-052e-4b83-aa0a-a3fbf078c2cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a8bb11262539e54f81b95dd4a59f43fafb693d541d549e51d54ab88794daa12\""
May 16 23:05:44 offline-containerd-20220516230448-297512 containerd[500]: time="2022-05-16T23:05:44.382684823Z" level=info msg="CreateContainer within sandbox \"7a8bb11262539e54f81b95dd4a59f43fafb693d541d549e51d54ab88794daa12\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 16 23:05:44 offline-containerd-20220516230448-297512 containerd[500]: time="2022-05-16T23:05:44.403252238Z" level=info msg="CreateContainer within sandbox \"7a8bb11262539e54f81b95dd4a59f43fafb693d541d549e51d54ab88794daa12\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"abbc2781d4d7a63109aa531ae406c8f0b6c946b2b001ed4fe522c33688fadb04\""
May 16 23:05:44 offline-containerd-20220516230448-297512 containerd[500]: time="2022-05-16T23:05:44.403907277Z" level=info msg="StartContainer for \"abbc2781d4d7a63109aa531ae406c8f0b6c946b2b001ed4fe522c33688fadb04\""
May 16 23:05:44 offline-containerd-20220516230448-297512 containerd[500]: time="2022-05-16T23:05:44.511345054Z" level=info msg="StartContainer for \"abbc2781d4d7a63109aa531ae406c8f0b6c946b2b001ed4fe522c33688fadb04\" returns successfully"
May 16 23:05:44 offline-containerd-20220516230448-297512 containerd[500]: time="2022-05-16T23:05:44.627304996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-lrkph,Uid:67ed017b-ea8c-4f9c-8269-6057bd8bbf5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8caa25cf9d8089dd4ab0577eae76c33ad39c1b8867fe71e6e641fe4e983116b\""
May 16 23:05:44 offline-containerd-20220516230448-297512 containerd[500]: time="2022-05-16T23:05:44.637441996Z" level=info msg="CreateContainer within sandbox \"e8caa25cf9d8089dd4ab0577eae76c33ad39c1b8867fe71e6e641fe4e983116b\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
May 16 23:05:44 offline-containerd-20220516230448-297512 containerd[500]: time="2022-05-16T23:05:44.655363253Z" level=info msg="CreateContainer within sandbox \"e8caa25cf9d8089dd4ab0577eae76c33ad39c1b8867fe71e6e641fe4e983116b\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"3cfc0499a19234c6e92c82945e034cd3e9260d097a4b5004b79ca0ceec60d7ec\""
May 16 23:05:44 offline-containerd-20220516230448-297512 containerd[500]: time="2022-05-16T23:05:44.655974487Z" level=info msg="StartContainer for \"3cfc0499a19234c6e92c82945e034cd3e9260d097a4b5004b79ca0ceec60d7ec\""
May 16 23:05:44 offline-containerd-20220516230448-297512 containerd[500]: time="2022-05-16T23:05:44.746872051Z" level=info msg="StartContainer for \"3cfc0499a19234c6e92c82945e034cd3e9260d097a4b5004b79ca0ceec60d7ec\" returns successfully"
May 16 23:08:25 offline-containerd-20220516230448-297512 containerd[500]: time="2022-05-16T23:08:25.063584607Z" level=info msg="shim disconnected" id=3cfc0499a19234c6e92c82945e034cd3e9260d097a4b5004b79ca0ceec60d7ec
May 16 23:08:25 offline-containerd-20220516230448-297512 containerd[500]: time="2022-05-16T23:08:25.063653247Z" level=warning msg="cleaning up after shim disconnected" id=3cfc0499a19234c6e92c82945e034cd3e9260d097a4b5004b79ca0ceec60d7ec namespace=k8s.io
May 16 23:08:25 offline-containerd-20220516230448-297512 containerd[500]: time="2022-05-16T23:08:25.063670405Z" level=info msg="cleaning up dead shim"
May 16 23:08:25 offline-containerd-20220516230448-297512 containerd[500]: time="2022-05-16T23:08:25.074243393Z" level=warning msg="cleanup warnings time=\"2022-05-16T23:08:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2068 runtime=io.containerd.runc.v2\n"
May 16 23:08:25 offline-containerd-20220516230448-297512 containerd[500]: time="2022-05-16T23:08:25.317934407Z" level=info msg="CreateContainer within sandbox \"e8caa25cf9d8089dd4ab0577eae76c33ad39c1b8867fe71e6e641fe4e983116b\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
May 16 23:08:25 offline-containerd-20220516230448-297512 containerd[500]: time="2022-05-16T23:08:25.332488780Z" level=info msg="CreateContainer within sandbox \"e8caa25cf9d8089dd4ab0577eae76c33ad39c1b8867fe71e6e641fe4e983116b\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"9eda85f75c4556f68eda8cc6fbe62eb0ef68fdbe70da196ace02bdffe990e95f\""
May 16 23:08:25 offline-containerd-20220516230448-297512 containerd[500]: time="2022-05-16T23:08:25.332961967Z" level=info msg="StartContainer for \"9eda85f75c4556f68eda8cc6fbe62eb0ef68fdbe70da196ace02bdffe990e95f\""
May 16 23:08:25 offline-containerd-20220516230448-297512 containerd[500]: time="2022-05-16T23:08:25.445307245Z" level=info msg="StartContainer for \"9eda85f75c4556f68eda8cc6fbe62eb0ef68fdbe70da196ace02bdffe990e95f\" returns successfully"
*
* ==> describe nodes <==
* Name: offline-containerd-20220516230448-297512
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=offline-containerd-20220516230448-297512
kubernetes.io/os=linux
minikube.k8s.io/commit=8e10bad027676fc4eb80b4901727275dc6ddebc2
minikube.k8s.io/name=offline-containerd-20220516230448-297512
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_05_16T23_05_31_0700
minikube.k8s.io/version=v1.26.0-beta.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 16 May 2022 23:05:28 +0000
Taints: node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: offline-containerd-20220516230448-297512
AcquireTime: <unset>
RenewTime: Mon, 16 May 2022 23:09:41 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 16 May 2022 23:05:43 +0000 Mon, 16 May 2022 23:05:25 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 16 May 2022 23:05:43 +0000 Mon, 16 May 2022 23:05:25 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 16 May 2022 23:05:43 +0000 Mon, 16 May 2022 23:05:25 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Mon, 16 May 2022 23:05:43 +0000 Mon, 16 May 2022 23:05:25 +0000 KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Addresses:
InternalIP: 192.168.58.2
Hostname: offline-containerd-20220516230448-297512
Capacity:
cpu: 8
ephemeral-storage: 304695084Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32873820Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304695084Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32873820Ki
pods: 110
System Info:
Machine ID: 1729fd8b7c184ebda96a08181510f608
System UUID: acf4172f-e357-467d-b27b-f2c144f29256
Boot ID: 7a1f1533-e2a9-44d7-ae8f-18c0c9cd2904
Kernel Version: 5.13.0-1025-gcp
OS Image: Ubuntu 20.04.4 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.6.4
Kubelet Version: v1.23.6
Kube-Proxy Version: v1.23.6
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system etcd-offline-containerd-20220516230448-297512 100m (1%!)(MISSING) 0 (0%!)(MISSING) 100Mi (0%!)(MISSING) 0 (0%!)(MISSING) 4m10s
kube-system kindnet-lrkph 100m (1%!)(MISSING) 100m (1%!)(MISSING) 50Mi (0%!)(MISSING) 50Mi (0%!)(MISSING) 4m3s
kube-system kube-apiserver-offline-containerd-20220516230448-297512 250m (3%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 4m10s
kube-system kube-controller-manager-offline-containerd-20220516230448-297512 200m (2%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 4m10s
kube-system kube-proxy-blcfp 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 4m3s
kube-system kube-scheduler-offline-containerd-20220516230448-297512 100m (1%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 4m10s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (9%!)(MISSING) 100m (1%!)(MISSING)
memory 150Mi (0%!)(MISSING) 50Mi (0%!)(MISSING)
ephemeral-storage 0 (0%!)(MISSING) 0 (0%!)(MISSING)
hugepages-1Gi 0 (0%!)(MISSING) 0 (0%!)(MISSING)
hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m1s kube-proxy
Normal Starting 4m11s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 4m11s kubelet Node offline-containerd-20220516230448-297512 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m11s kubelet Node offline-containerd-20220516230448-297512 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m11s kubelet Node offline-containerd-20220516230448-297512 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m10s kubelet Updated Node Allocatable limit across pods
*
* ==> dmesg <==
* [ +0.000006] ll header: 00000000: 02 42 d6 be 4a 2a 02 42 c0 a8 31 02 08 00
[ +5.003520] IPv4: martian source 10.244.0.228 from 10.244.1.2, on dev br-c71c9375698e
[ +0.000007] ll header: 00000000: 02 42 d6 be 4a 2a 02 42 c0 a8 31 02 08 00
[May16 22:53] IPv4: martian source 10.244.0.228 from 10.244.1.2, on dev br-c71c9375698e
[ +0.000006] ll header: 00000000: 02 42 d6 be 4a 2a 02 42 c0 a8 31 02 08 00
[ +5.003332] IPv4: martian source 10.244.0.228 from 10.244.1.2, on dev br-c71c9375698e
[ +0.000007] ll header: 00000000: 02 42 d6 be 4a 2a 02 42 c0 a8 31 02 08 00
[May16 22:56] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethbc87e243
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 90 01 26 2c 0d 08 06
[ +0.448207] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vetha8dca083
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 06 75 d8 71 9b 08 06
[May16 22:57] IPv4: martian source 10.244.1.2 from 10.244.1.2, on dev veth5b6bc2ad
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff 36 c5 6e 9c fa 60 08 06
[May16 22:58] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth84b6efc8
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff fe a5 65 6a 93 49 08 06
[ +0.336160] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth7c55a01d
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff e6 d7 fa b9 f1 08 08 06
[May16 22:59] IPv4: martian source 10.244.1.2 from 10.244.1.2, on dev vethe3b519ca
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 47 14 67 57 93 08 06
[May16 23:01] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth957c255b
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 4a f8 a9 74 ac 08 06
[May16 23:03] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth694975fb
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff 86 0c 93 32 b4 dc 08 06
[May16 23:05] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethcc5635c5
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 fb 53 6f 71 77 08 06
*
* ==> etcd [2741a6aed88addb7f5b9dbe2f9a16e6a7aef098a4d7ea294060f1fa3e9b7914e] <==
* {"level":"warn","ts":"2022-05-16T23:07:08.025Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"141.630177ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-05-16T23:07:08.025Z","caller":"traceutil/trace.go:171","msg":"trace[1751340739] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:486; }","duration":"141.802873ms","start":"2022-05-16T23:07:07.883Z","end":"2022-05-16T23:07:08.025Z","steps":["trace[1751340739] 'range keys from in-memory index tree' (duration: 141.547614ms)"],"step_count":1}
{"level":"info","ts":"2022-05-16T23:07:09.639Z","caller":"traceutil/trace.go:171","msg":"trace[205794014] transaction","detail":"{read_only:false; response_revision:487; number_of_response:1; }","duration":"176.422808ms","start":"2022-05-16T23:07:09.462Z","end":"2022-05-16T23:07:09.639Z","steps":["trace[205794014] 'process raft request' (duration: 88.255195ms)","trace[205794014] 'compare' (duration: 88.031424ms)"],"step_count":2}
{"level":"info","ts":"2022-05-16T23:07:09.756Z","caller":"traceutil/trace.go:171","msg":"trace[532991568] linearizableReadLoop","detail":"{readStateIndex:518; appliedIndex:518; }","duration":"115.80087ms","start":"2022-05-16T23:07:09.640Z","end":"2022-05-16T23:07:09.756Z","steps":["trace[532991568] 'read index received' (duration: 115.78606ms)","trace[532991568] 'applied index is now lower than readState.Index' (duration: 12.87µs)"],"step_count":2}
{"level":"warn","ts":"2022-05-16T23:07:09.846Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"205.785947ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"}
{"level":"info","ts":"2022-05-16T23:07:09.846Z","caller":"traceutil/trace.go:171","msg":"trace[1024091837] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:487; }","duration":"205.865895ms","start":"2022-05-16T23:07:09.640Z","end":"2022-05-16T23:07:09.846Z","steps":["trace[1024091837] 'agreement among raft nodes before linearized reading' (duration: 115.916089ms)","trace[1024091837] 'range keys from in-memory index tree' (duration: 89.832918ms)"],"step_count":2}
{"level":"warn","ts":"2022-05-16T23:07:39.664Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"105.826443ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3238511234114624185 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.58.2\" mod_revision:491 > success:<request_put:<key:\"/registry/masterleases/192.168.58.2\" value_size:67 lease:3238511234114624183 >> failure:<request_range:<key:\"/registry/masterleases/192.168.58.2\" > >>","response":"size:16"}
{"level":"info","ts":"2022-05-16T23:07:39.664Z","caller":"traceutil/trace.go:171","msg":"trace[1350902669] transaction","detail":"{read_only:false; response_revision:493; number_of_response:1; }","duration":"199.965973ms","start":"2022-05-16T23:07:39.464Z","end":"2022-05-16T23:07:39.664Z","steps":["trace[1350902669] 'process raft request' (duration: 94.006476ms)","trace[1350902669] 'compare' (duration: 105.733943ms)"],"step_count":2}
{"level":"warn","ts":"2022-05-16T23:07:44.990Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"107.863781ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-05-16T23:07:44.990Z","caller":"traceutil/trace.go:171","msg":"trace[637190637] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:493; }","duration":"107.961438ms","start":"2022-05-16T23:07:44.882Z","end":"2022-05-16T23:07:44.990Z","steps":["trace[637190637] 'range keys from in-memory index tree' (duration: 107.793559ms)"],"step_count":1}
{"level":"warn","ts":"2022-05-16T23:08:09.557Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"135.087856ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/offline-containerd-20220516230448-297512\" ","response":"range_response_count:1 size:4821"}
{"level":"info","ts":"2022-05-16T23:08:09.558Z","caller":"traceutil/trace.go:171","msg":"trace[884103644] range","detail":"{range_begin:/registry/minions/offline-containerd-20220516230448-297512; range_end:; response_count:1; response_revision:500; }","duration":"135.172487ms","start":"2022-05-16T23:08:09.422Z","end":"2022-05-16T23:08:09.558Z","steps":["trace[884103644] 'range keys from in-memory index tree' (duration: 134.977975ms)"],"step_count":1}
{"level":"warn","ts":"2022-05-16T23:08:15.040Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"157.674918ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-05-16T23:08:15.040Z","caller":"traceutil/trace.go:171","msg":"trace[1333512302] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:501; }","duration":"157.776617ms","start":"2022-05-16T23:08:14.882Z","end":"2022-05-16T23:08:15.040Z","steps":["trace[1333512302] 'agreement among raft nodes before linearized reading' (duration: 60.82835ms)","trace[1333512302] 'range keys from in-memory index tree' (duration: 96.823352ms)"],"step_count":2}
{"level":"warn","ts":"2022-05-16T23:08:15.040Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"117.258734ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/offline-containerd-20220516230448-297512\" ","response":"range_response_count:1 size:4821"}
{"level":"info","ts":"2022-05-16T23:08:15.040Z","caller":"traceutil/trace.go:171","msg":"trace[661368584] range","detail":"{range_begin:/registry/minions/offline-containerd-20220516230448-297512; range_end:; response_count:1; response_revision:501; }","duration":"117.485939ms","start":"2022-05-16T23:08:14.923Z","end":"2022-05-16T23:08:15.040Z","steps":["trace[661368584] 'agreement among raft nodes before linearized reading' (duration: 20.43642ms)","trace[661368584] 'range keys from in-memory index tree' (duration: 96.786164ms)"],"step_count":2}
{"level":"warn","ts":"2022-05-16T23:08:55.114Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"191.291573ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/offline-containerd-20220516230448-297512\" ","response":"range_response_count:1 size:4821"}
{"level":"info","ts":"2022-05-16T23:08:55.115Z","caller":"traceutil/trace.go:171","msg":"trace[681987622] range","detail":"{range_begin:/registry/minions/offline-containerd-20220516230448-297512; range_end:; response_count:1; response_revision:516; }","duration":"191.382605ms","start":"2022-05-16T23:08:54.923Z","end":"2022-05-16T23:08:55.115Z","steps":["trace[681987622] 'agreement among raft nodes before linearized reading' (duration: 94.244981ms)","trace[681987622] 'range keys from in-memory index tree' (duration: 97.000551ms)"],"step_count":2}
{"level":"info","ts":"2022-05-16T23:09:19.624Z","caller":"traceutil/trace.go:171","msg":"trace[1842061183] transaction","detail":"{read_only:false; response_revision:523; number_of_response:1; }","duration":"152.93279ms","start":"2022-05-16T23:09:19.471Z","end":"2022-05-16T23:09:19.624Z","steps":["trace[1842061183] 'process raft request' (duration: 152.808808ms)"],"step_count":1}
{"level":"warn","ts":"2022-05-16T23:09:21.234Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"311.175714ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/offline-containerd-20220516230448-297512\" ","response":"range_response_count:1 size:4821"}
{"level":"info","ts":"2022-05-16T23:09:21.234Z","caller":"traceutil/trace.go:171","msg":"trace[897822710] range","detail":"{range_begin:/registry/minions/offline-containerd-20220516230448-297512; range_end:; response_count:1; response_revision:524; }","duration":"311.280663ms","start":"2022-05-16T23:09:20.923Z","end":"2022-05-16T23:09:21.234Z","steps":["trace[897822710] 'agreement among raft nodes before linearized reading' (duration: 52.619525ms)","trace[897822710] 'range keys from in-memory index tree' (duration: 258.522557ms)"],"step_count":2}
{"level":"warn","ts":"2022-05-16T23:09:21.234Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"352.246676ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"warn","ts":"2022-05-16T23:09:21.234Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-16T23:09:20.923Z","time spent":"311.343526ms","remote":"127.0.0.1:50460","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4844,"request content":"key:\"/registry/minions/offline-containerd-20220516230448-297512\" "}
{"level":"info","ts":"2022-05-16T23:09:21.234Z","caller":"traceutil/trace.go:171","msg":"trace[1664993707] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:524; }","duration":"352.447292ms","start":"2022-05-16T23:09:20.882Z","end":"2022-05-16T23:09:21.234Z","steps":["trace[1664993707] 'agreement among raft nodes before linearized reading' (duration: 93.663805ms)","trace[1664993707] 'range keys from in-memory index tree' (duration: 258.550282ms)"],"step_count":2}
{"level":"warn","ts":"2022-05-16T23:09:21.235Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-16T23:09:20.882Z","time spent":"352.572146ms","remote":"127.0.0.1:50568","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
*
* ==> kernel <==
* 23:09:46 up 3:52, 0 users, load average: 4.76, 3.18, 1.61
Linux offline-containerd-20220516230448-297512 5.13.0-1025-gcp #30~20.04.1-Ubuntu SMP Tue Apr 26 03:01:25 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.4 LTS"
*
* ==> kube-apiserver [b24ba74efb34ded38a483cf7be54138c65c9ea2a38779259e546321718b361b6] <==
* I0516 23:05:28.051538 1 controller.go:611] quota admission added evaluator for: namespaces
I0516 23:05:28.060711 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0516 23:05:28.060714 1 apf_controller.go:322] Running API Priority and Fairness config worker
I0516 23:05:28.065134 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I0516 23:05:28.079655 1 cache.go:39] Caches are synced for autoregister controller
I0516 23:05:28.124993 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0516 23:05:28.960010 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0516 23:05:28.960039 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0516 23:05:28.964247 1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
I0516 23:05:28.968488 1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
I0516 23:05:28.968509 1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
I0516 23:05:29.319714 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0516 23:05:29.353248 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0516 23:05:29.462432 1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
W0516 23:05:29.467702 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
I0516 23:05:29.468755 1 controller.go:611] quota admission added evaluator for: endpoints
I0516 23:05:29.472348 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0516 23:05:30.145266 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0516 23:05:30.672890 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0516 23:05:30.728121 1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
I0516 23:05:30.740309 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0516 23:05:35.851016 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0516 23:05:43.700649 1 controller.go:611] quota admission added evaluator for: replicasets.apps
I0516 23:05:43.901749 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
I0516 23:05:44.599862 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
*
* ==> kube-controller-manager [b061befa49e6a4e626340c185934e8ac56f42bb037ff0163e1299e4f9d4a5df6] <==
* I0516 23:05:42.999812 1 shared_informer.go:247] Caches are synced for attach detach
I0516 23:05:43.007063 1 shared_informer.go:247] Caches are synced for node
I0516 23:05:43.007089 1 range_allocator.go:173] Starting range CIDR allocator
I0516 23:05:43.007093 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
I0516 23:05:43.007099 1 shared_informer.go:247] Caches are synced for cidrallocator
I0516 23:05:43.010857 1 range_allocator.go:374] Set node offline-containerd-20220516230448-297512 PodCIDR to [10.244.0.0/24]
I0516 23:05:43.047666 1 shared_informer.go:247] Caches are synced for HPA
I0516 23:05:43.133427 1 shared_informer.go:247] Caches are synced for stateful set
I0516 23:05:43.145315 1 shared_informer.go:247] Caches are synced for cronjob
I0516 23:05:43.164773 1 shared_informer.go:247] Caches are synced for daemon sets
I0516 23:05:43.193035 1 shared_informer.go:247] Caches are synced for ReplicationController
I0516 23:05:43.199869 1 shared_informer.go:247] Caches are synced for resource quota
I0516 23:05:43.243551 1 shared_informer.go:247] Caches are synced for resource quota
I0516 23:05:43.246135 1 shared_informer.go:247] Caches are synced for disruption
I0516 23:05:43.246179 1 disruption.go:371] Sending events to api server.
I0516 23:05:43.623216 1 shared_informer.go:247] Caches are synced for garbage collector
I0516 23:05:43.666176 1 shared_informer.go:247] Caches are synced for garbage collector
I0516 23:05:43.666210 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0516 23:05:43.702530 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
I0516 23:05:43.910060 1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-lrkph"
I0516 23:05:43.914193 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-blcfp"
I0516 23:05:44.004343 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-jsrl9"
I0516 23:05:44.011303 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-826rd"
I0516 23:05:44.341161 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
I0516 23:05:44.350107 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-jsrl9"
*
* ==> kube-proxy [abbc2781d4d7a63109aa531ae406c8f0b6c946b2b001ed4fe522c33688fadb04] <==
* I0516 23:05:44.564360 1 node.go:163] Successfully retrieved node IP: 192.168.58.2
I0516 23:05:44.564425 1 server_others.go:138] "Detected node IP" address="192.168.58.2"
I0516 23:05:44.564447 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0516 23:05:44.594088 1 server_others.go:206] "Using iptables Proxier"
I0516 23:05:44.594127 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0516 23:05:44.594141 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0516 23:05:44.594162 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0516 23:05:44.594569 1 server.go:656] "Version info" version="v1.23.6"
I0516 23:05:44.596490 1 config.go:317] "Starting service config controller"
I0516 23:05:44.596514 1 shared_informer.go:240] Waiting for caches to sync for service config
I0516 23:05:44.596541 1 config.go:226] "Starting endpoint slice config controller"
I0516 23:05:44.596546 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0516 23:05:44.697452 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0516 23:05:44.697463 1 shared_informer.go:247] Caches are synced for service config
*
* ==> kube-scheduler [a0166bfb7c29b1c3f7eb4326fcff6e1df7b0d1ebb9cce0f53e16577a7ca12821] <==
* W0516 23:05:28.055658 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0516 23:05:28.055671 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0516 23:05:28.055625 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0516 23:05:28.055557 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0516 23:05:28.055735 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0516 23:05:28.055747 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0516 23:05:28.055798 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0516 23:05:28.055821 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0516 23:05:28.056030 1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0516 23:05:28.056064 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0516 23:05:28.874642 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0516 23:05:28.874672 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0516 23:05:28.911850 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0516 23:05:28.911891 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0516 23:05:28.918889 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0516 23:05:28.918926 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0516 23:05:28.986751 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0516 23:05:28.986781 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0516 23:05:29.054805 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0516 23:05:29.055027 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0516 23:05:29.094058 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0516 23:05:29.094092 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0516 23:05:29.130222 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0516 23:05:29.130265 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
I0516 23:05:29.447820 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Logs begin at Mon 2022-05-16 23:05:12 UTC, end at Mon 2022-05-16 23:09:46 UTC. --
May 16 23:07:51 offline-containerd-20220516230448-297512 kubelet[1289]: E0516 23:07:51.082682 1289 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 23:07:56 offline-containerd-20220516230448-297512 kubelet[1289]: E0516 23:07:56.084490 1289 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 23:08:01 offline-containerd-20220516230448-297512 kubelet[1289]: E0516 23:08:01.086599 1289 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 23:08:06 offline-containerd-20220516230448-297512 kubelet[1289]: E0516 23:08:06.087493 1289 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 23:08:11 offline-containerd-20220516230448-297512 kubelet[1289]: E0516 23:08:11.089197 1289 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 23:08:16 offline-containerd-20220516230448-297512 kubelet[1289]: E0516 23:08:16.089822 1289 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 23:08:21 offline-containerd-20220516230448-297512 kubelet[1289]: E0516 23:08:21.090753 1289 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 23:08:25 offline-containerd-20220516230448-297512 kubelet[1289]: I0516 23:08:25.315496 1289 scope.go:110] "RemoveContainer" containerID="3cfc0499a19234c6e92c82945e034cd3e9260d097a4b5004b79ca0ceec60d7ec"
May 16 23:08:26 offline-containerd-20220516230448-297512 kubelet[1289]: E0516 23:08:26.092294 1289 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 23:08:31 offline-containerd-20220516230448-297512 kubelet[1289]: E0516 23:08:31.093543 1289 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 23:08:36 offline-containerd-20220516230448-297512 kubelet[1289]: E0516 23:08:36.095093 1289 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 23:08:41 offline-containerd-20220516230448-297512 kubelet[1289]: E0516 23:08:41.095954 1289 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 23:08:46 offline-containerd-20220516230448-297512 kubelet[1289]: E0516 23:08:46.096926 1289 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 23:08:51 offline-containerd-20220516230448-297512 kubelet[1289]: E0516 23:08:51.098701 1289 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 23:08:56 offline-containerd-20220516230448-297512 kubelet[1289]: E0516 23:08:56.099861 1289 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 23:09:01 offline-containerd-20220516230448-297512 kubelet[1289]: E0516 23:09:01.101159 1289 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 23:09:06 offline-containerd-20220516230448-297512 kubelet[1289]: E0516 23:09:06.102115 1289 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 23:09:11 offline-containerd-20220516230448-297512 kubelet[1289]: E0516 23:09:11.103349 1289 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 23:09:16 offline-containerd-20220516230448-297512 kubelet[1289]: E0516 23:09:16.105361 1289 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 23:09:21 offline-containerd-20220516230448-297512 kubelet[1289]: E0516 23:09:21.107110 1289 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 23:09:26 offline-containerd-20220516230448-297512 kubelet[1289]: E0516 23:09:26.108378 1289 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 23:09:31 offline-containerd-20220516230448-297512 kubelet[1289]: E0516 23:09:31.109797 1289 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 23:09:36 offline-containerd-20220516230448-297512 kubelet[1289]: E0516 23:09:36.111432 1289 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 23:09:41 offline-containerd-20220516230448-297512 kubelet[1289]: E0516 23:09:41.112711 1289 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 23:09:46 offline-containerd-20220516230448-297512 kubelet[1289]: E0516 23:09:46.114159 1289 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p offline-containerd-20220516230448-297512 -n offline-containerd-20220516230448-297512
=== CONT TestOffline
helpers_test.go:261: (dbg) Run: kubectl --context offline-containerd-20220516230448-297512 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-826rd storage-provisioner
helpers_test.go:272: ======> post-mortem[TestOffline]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context offline-containerd-20220516230448-297512 describe pod coredns-64897985d-826rd storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context offline-containerd-20220516230448-297512 describe pod coredns-64897985d-826rd storage-provisioner: exit status 1 (86.426342ms)
** stderr **
Error from server (NotFound): pods "coredns-64897985d-826rd" not found
Error from server (NotFound): pods "storage-provisioner" not found
** /stderr **
helpers_test.go:277: kubectl --context offline-containerd-20220516230448-297512 describe pod coredns-64897985d-826rd storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "offline-containerd-20220516230448-297512" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-linux-amd64 delete -p offline-containerd-20220516230448-297512
=== CONT TestOffline
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-20220516230448-297512: (3.250649887s)
--- FAIL: TestOffline (302.25s)