=== RUN TestOffline
=== PAUSE TestOffline
=== CONT TestOffline
aab_offline_test.go:55: (dbg) Run: out/minikube-linux-amd64 start -p offline-containerd-20220512000808-1124136 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker --container-runtime=containerd
=== CONT TestOffline
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p offline-containerd-20220512000808-1124136 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker --container-runtime=containerd: exit status 80 (8m20.55806277s)
-- stdout --
* [offline-containerd-20220512000808-1124136] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=13639
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Using the docker driver based on user configuration
* Using Docker driver with the root privilege
* Starting control plane node offline-containerd-20220512000808-1124136 in cluster offline-containerd-20220512000808-1124136
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2048MB) ...
* docker "offline-containerd-20220512000808-1124136" container is missing, will recreate.
* Creating docker container (CPUs=2, Memory=2048MB) ...
* Found network options:
- HTTP_PROXY=172.16.1.1:1
* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
* Preparing Kubernetes v1.23.5 on containerd 1.6.4 ...
- env HTTP_PROXY=172.16.1.1:1
- kubelet.cni-conf-dir=/etc/cni/net.mk
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
-- /stdout --
** stderr **
I0512 00:08:09.046508 1250292 out.go:296] Setting OutFile to fd 1 ...
I0512 00:08:09.046713 1250292 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0512 00:08:09.046725 1250292 out.go:309] Setting ErrFile to fd 2...
I0512 00:08:09.046732 1250292 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0512 00:08:09.046896 1250292 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/bin
I0512 00:08:09.047286 1250292 out.go:303] Setting JSON to false
I0512 00:08:09.049280 1250292 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":46243,"bootTime":1652267846,"procs":814,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0512 00:08:09.049369 1250292 start.go:125] virtualization: kvm guest
I0512 00:08:09.052358 1250292 out.go:177] * [offline-containerd-20220512000808-1124136] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
I0512 00:08:09.055557 1250292 out.go:177] - MINIKUBE_LOCATION=13639
I0512 00:08:09.054686 1250292 notify.go:193] Checking for updates...
I0512 00:08:09.058601 1250292 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0512 00:08:09.061161 1250292 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
I0512 00:08:09.064063 1250292 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
I0512 00:08:09.067218 1250292 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0512 00:08:09.069855 1250292 driver.go:358] Setting default libvirt URI to qemu:///system
I0512 00:08:09.128930 1250292 docker.go:137] docker version: linux-20.10.15
I0512 00:08:09.129061 1250292 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0512 00:08:09.266800 1250292 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:70 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:38 SystemTime:2022-05-12 00:08:09.164496288 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0512 00:08:09.266895 1250292 docker.go:254] overlay module found
I0512 00:08:09.269733 1250292 out.go:177] * Using the docker driver based on user configuration
I0512 00:08:09.271188 1250292 start.go:284] selected driver: docker
I0512 00:08:09.271206 1250292 start.go:801] validating driver "docker" against <nil>
I0512 00:08:09.271232 1250292 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0512 00:08:09.272035 1250292 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0512 00:08:09.403174 1250292 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:70 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:39 SystemTime:2022-05-12 00:08:09.30605389 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0512 00:08:09.403393 1250292 start_flags.go:292] no existing cluster config was found, will generate one from the flags
I0512 00:08:09.403629 1250292 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0512 00:08:09.406027 1250292 out.go:177] * Using Docker driver with the root privilege
I0512 00:08:09.407929 1250292 cni.go:95] Creating CNI manager for ""
I0512 00:08:09.407955 1250292 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I0512 00:08:09.407969 1250292 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I0512 00:08:09.407978 1250292 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I0512 00:08:09.407983 1250292 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
I0512 00:08:09.407998 1250292 start_flags.go:306] config:
{Name:offline-containerd-20220512000808-1124136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:offline-containerd-20220512000808-1124136 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0512 00:08:09.410069 1250292 out.go:177] * Starting control plane node offline-containerd-20220512000808-1124136 in cluster offline-containerd-20220512000808-1124136
I0512 00:08:09.411521 1250292 cache.go:120] Beginning downloading kic base image for docker with containerd
I0512 00:08:09.413050 1250292 out.go:177] * Pulling base image ...
I0512 00:08:09.414527 1250292 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
I0512 00:08:09.414573 1250292 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4
I0512 00:08:09.414591 1250292 cache.go:57] Caching tarball of preloaded images
I0512 00:08:09.414634 1250292 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
I0512 00:08:09.414850 1250292 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0512 00:08:09.414878 1250292 cache.go:60] Finished verifying existence of preloaded tar for v1.23.5 on containerd
I0512 00:08:09.415320 1250292 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/config.json ...
I0512 00:08:09.415358 1250292 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/config.json: {Name:mkab33116db308095dba156fab87fafcdc35ebe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 00:08:09.465491 1250292 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon, skipping pull
I0512 00:08:09.465521 1250292 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a exists in daemon, skipping load
I0512 00:08:09.465537 1250292 cache.go:206] Successfully downloaded all kic artifacts
I0512 00:08:09.465582 1250292 start.go:352] acquiring machines lock for offline-containerd-20220512000808-1124136: {Name:mk28e1917184abc563dc02836aa3a58caeb35d9e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0512 00:08:09.465727 1250292 start.go:356] acquired machines lock for "offline-containerd-20220512000808-1124136" in 121.672µs
I0512 00:08:09.465758 1250292 start.go:91] Provisioning new machine with config: &{Name:offline-containerd-20220512000808-1124136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:offline-containerd-20220512000808-1124136 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0512 00:08:09.465870 1250292 start.go:131] createHost starting for "" (driver="docker")
I0512 00:08:09.468549 1250292 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
I0512 00:08:09.468853 1250292 start.go:165] libmachine.API.Create for "offline-containerd-20220512000808-1124136" (driver="docker")
I0512 00:08:09.468894 1250292 client.go:168] LocalClient.Create starting
I0512 00:08:09.468966 1250292 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem
I0512 00:08:09.469004 1250292 main.go:134] libmachine: Decoding PEM data...
I0512 00:08:09.469030 1250292 main.go:134] libmachine: Parsing certificate...
I0512 00:08:09.469110 1250292 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem
I0512 00:08:09.469142 1250292 main.go:134] libmachine: Decoding PEM data...
I0512 00:08:09.469162 1250292 main.go:134] libmachine: Parsing certificate...
I0512 00:08:09.469842 1250292 cli_runner.go:164] Run: docker network inspect offline-containerd-20220512000808-1124136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0512 00:08:09.506018 1250292 cli_runner.go:211] docker network inspect offline-containerd-20220512000808-1124136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
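The `--format` argument in the failing command above is a Go `text/template` that docker evaluates against the inspected object; the exit code 1 comes from the network not existing, not from the template. A minimal sketch of how such a template renders, using a cut-down stand-in struct (the `network` type and `render` helper are hypothetical, not docker's internals):

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// network is a simplified stand-in for the fields docker exposes to
// --format templates on `docker network inspect`.
type network struct {
	Name   string
	Driver string
}

// render executes a template like the one minikube passes, producing the
// JSON-ish line seen in the log. The real template also walks .IPAM.Config
// and .Containers, omitted here.
func render(n network) (string, error) {
	const tmpl = `{"Name": "{{.Name}}","Driver": "{{.Driver}}"}`
	t, err := template.New("net").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var b strings.Builder
	if err := t.Execute(&b, n); err != nil {
		return "", err
	}
	return b.String(), nil
}

func main() {
	out, err := render(network{Name: "bridge", Driver: "bridge"})
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // {"Name": "bridge","Driver": "bridge"}
}
```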
I0512 00:08:09.506118 1250292 network_create.go:272] running [docker network inspect offline-containerd-20220512000808-1124136] to gather additional debugging logs...
I0512 00:08:09.506146 1250292 cli_runner.go:164] Run: docker network inspect offline-containerd-20220512000808-1124136
W0512 00:08:09.541950 1250292 cli_runner.go:211] docker network inspect offline-containerd-20220512000808-1124136 returned with exit code 1
I0512 00:08:09.541996 1250292 network_create.go:275] error running [docker network inspect offline-containerd-20220512000808-1124136]: docker network inspect offline-containerd-20220512000808-1124136: exit status 1
stdout:
[]
stderr:
Error: No such network: offline-containerd-20220512000808-1124136
I0512 00:08:09.542030 1250292 network_create.go:277] output of [docker network inspect offline-containerd-20220512000808-1124136]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: offline-containerd-20220512000808-1124136
** /stderr **
I0512 00:08:09.542109 1250292 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0512 00:08:09.580776 1250292 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000a2bfe0] misses:0}
I0512 00:08:09.580845 1250292 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
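The `network.go` line above reports the gateway, client range, and broadcast address that minikube derives from the free 192.168.49.0/24 subnet. A minimal sketch of that derivation for a /24, assuming the usual convention of gateway at .1 and hosts .2–.254 (the `describeSubnet` helper is illustrative, not minikube's actual code):

```go
package main

import (
	"fmt"
	"net"
)

// subnetInfo mirrors the fields logged by minikube's network.go.
type subnetInfo struct {
	Gateway, ClientMin, ClientMax, Broadcast net.IP
}

// describeSubnet derives the addresses for an IPv4 /24 network.
func describeSubnet(cidr string) (subnetInfo, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return subnetInfo{}, err
	}
	base := ipnet.IP.To4()
	if base == nil {
		return subnetInfo{}, fmt.Errorf("IPv4 subnet required: %s", cidr)
	}
	// mk copies the network address and sets the host octet.
	mk := func(last byte) net.IP {
		ip := make(net.IP, 4)
		copy(ip, base)
		ip[3] = last
		return ip
	}
	// Convention for a /24: gateway .1, clients .2-.254, broadcast .255.
	return subnetInfo{
		Gateway:   mk(1),
		ClientMin: mk(2),
		ClientMax: mk(254),
		Broadcast: mk(255),
	}, nil
}

func main() {
	info, err := describeSubnet("192.168.49.0/24")
	if err != nil {
		panic(err)
	}
	fmt.Printf("gateway=%s clientMin=%s clientMax=%s broadcast=%s\n",
		info.Gateway, info.ClientMin, info.ClientMax, info.Broadcast)
}
```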
I0512 00:08:09.580868 1250292 network_create.go:115] attempt to create docker network offline-containerd-20220512000808-1124136 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0512 00:08:09.580929 1250292 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-containerd-20220512000808-1124136
I0512 00:08:09.688427 1250292 network_create.go:99] docker network offline-containerd-20220512000808-1124136 192.168.49.0/24 created
I0512 00:08:09.688480 1250292 kic.go:106] calculated static IP "192.168.49.2" for the "offline-containerd-20220512000808-1124136" container
I0512 00:08:09.688554 1250292 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0512 00:08:09.726549 1250292 cli_runner.go:164] Run: docker volume create offline-containerd-20220512000808-1124136 --label name.minikube.sigs.k8s.io=offline-containerd-20220512000808-1124136 --label created_by.minikube.sigs.k8s.io=true
I0512 00:08:09.775419 1250292 oci.go:103] Successfully created a docker volume offline-containerd-20220512000808-1124136
I0512 00:08:09.775511 1250292 cli_runner.go:164] Run: docker run --rm --name offline-containerd-20220512000808-1124136-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-containerd-20220512000808-1124136 --entrypoint /usr/bin/test -v offline-containerd-20220512000808-1124136:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib
I0512 00:08:10.734177 1250292 oci.go:107] Successfully prepared a docker volume offline-containerd-20220512000808-1124136
I0512 00:08:10.734223 1250292 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
I0512 00:08:10.734245 1250292 kic.go:179] Starting extracting preloaded images to volume ...
I0512 00:08:10.734316 1250292 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-containerd-20220512000808-1124136:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir
I0512 00:08:25.547740 1250292 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-containerd-20220512000808-1124136:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir: (14.813331205s)
I0512 00:08:25.547778 1250292 kic.go:188] duration metric: took 14.813528 seconds to extract preloaded images to volume
W0512 00:08:25.547941 1250292 cgroups_linux.go:88] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0512 00:08:25.548082 1250292 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0512 00:08:25.693690 1250292 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname offline-containerd-20220512000808-1124136 --name offline-containerd-20220512000808-1124136 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-containerd-20220512000808-1124136 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=offline-containerd-20220512000808-1124136 --network offline-containerd-20220512000808-1124136 --ip 192.168.49.2 --volume offline-containerd-20220512000808-1124136:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a
W0512 00:08:25.784943 1250292 cli_runner.go:211] docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname offline-containerd-20220512000808-1124136 --name offline-containerd-20220512000808-1124136 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-containerd-20220512000808-1124136 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=offline-containerd-20220512000808-1124136 --network offline-containerd-20220512000808-1124136 --ip 192.168.49.2 --volume offline-containerd-20220512000808-1124136:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a returned with exit code 125
I0512 00:08:25.785004 1250292 client.go:171] LocalClient.Create took 16.316098778s
I0512 00:08:27.785872 1250292 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0512 00:08:27.785945 1250292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136
W0512 00:08:27.824491 1250292 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136 returned with exit code 1
I0512 00:08:27.824601 1250292 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
I0512 00:08:28.101861 1250292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136
W0512 00:08:28.138599 1250292 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136 returned with exit code 1
I0512 00:08:28.138734 1250292 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
I0512 00:08:28.679486 1250292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136
W0512 00:08:28.717007 1250292 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136 returned with exit code 1
I0512 00:08:28.717109 1250292 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
I0512 00:08:29.372945 1250292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136
W0512 00:08:29.410986 1250292 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136 returned with exit code 1
W0512 00:08:29.411147 1250292 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
W0512 00:08:29.411173 1250292 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
I0512 00:08:29.411227 1250292 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0512 00:08:29.411279 1250292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136
W0512 00:08:29.446009 1250292 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136 returned with exit code 1
I0512 00:08:29.446103 1250292 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
I0512 00:08:29.677535 1250292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136
W0512 00:08:29.710128 1250292 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136 returned with exit code 1
I0512 00:08:29.710239 1250292 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
I0512 00:08:30.155887 1250292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136
W0512 00:08:30.190019 1250292 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136 returned with exit code 1
I0512 00:08:30.190135 1250292 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
I0512 00:08:30.508488 1250292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136
W0512 00:08:30.540427 1250292 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136 returned with exit code 1
I0512 00:08:30.540548 1250292 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
I0512 00:08:31.095376 1250292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136
W0512 00:08:31.129736 1250292 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136 returned with exit code 1
W0512 00:08:31.129842 1250292 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
W0512 00:08:31.129858 1250292 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
I0512 00:08:31.129865 1250292 start.go:134] duration metric: createHost completed in 21.663987827s
I0512 00:08:31.129880 1250292 start.go:81] releasing machines lock for "offline-containerd-20220512000808-1124136", held for 21.664137018s
W0512 00:08:31.129911 1250292 start.go:608] error starting host: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname offline-containerd-20220512000808-1124136 --name offline-containerd-20220512000808-1124136 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-containerd-20220512000808-1124136 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=offline-containerd-20220512000808-1124136 --network offline-containerd-20220512000808-1124136 --ip 192.168.49.2 --volume offline-containerd-20220512000808-1124136:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: exit status 125
stdout:
907a54b7bca5a537c6695e4432700d6f46869d4f88a56aa6140ce91a152a480b
stderr:
docker: Error response from daemon: network offline-containerd-20220512000808-1124136 not found.
I0512 00:08:31.130326 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
W0512 00:08:31.162096 1250292 start.go:613] delete host: Docker machine "offline-containerd-20220512000808-1124136" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
W0512 00:08:31.162322 1250292 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname offline-containerd-20220512000808-1124136 --name offline-containerd-20220512000808-1124136 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-containerd-20220512000808-1124136 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=offline-containerd-20220512000808-1124136 --network offline-containerd-20220512000808-1124136 --ip 192.168.49.2 --volume offline-containerd-20220512000808-1124136:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: exit status 125
stdout:
907a54b7bca5a537c6695e4432700d6f46869d4f88a56aa6140ce91a152a480b
stderr:
docker: Error response from daemon: network offline-containerd-20220512000808-1124136 not found.
! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname offline-containerd-20220512000808-1124136 --name offline-containerd-20220512000808-1124136 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-containerd-20220512000808-1124136 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=offline-containerd-20220512000808-1124136 --network offline-containerd-20220512000808-1124136 --ip 192.168.49.2 --volume offline-containerd-20220512000808-1124136:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: exit status 125
stdout:
907a54b7bca5a537c6695e4432700d6f46869d4f88a56aa6140ce91a152a480b
stderr:
docker: Error response from daemon: network offline-containerd-20220512000808-1124136 not found.
I0512 00:08:31.162345 1250292 start.go:623] Will try again in 5 seconds ...
I0512 00:08:36.162516 1250292 start.go:352] acquiring machines lock for offline-containerd-20220512000808-1124136: {Name:mk28e1917184abc563dc02836aa3a58caeb35d9e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0512 00:08:36.162644 1250292 start.go:356] acquired machines lock for "offline-containerd-20220512000808-1124136" in 85.657µs
I0512 00:08:36.162674 1250292 start.go:94] Skipping create...Using existing machine configuration
I0512 00:08:36.162684 1250292 fix.go:55] fixHost starting:
I0512 00:08:36.162921 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:08:36.199299 1250292 fix.go:103] recreateIfNeeded on offline-containerd-20220512000808-1124136: state= err=<nil>
I0512 00:08:36.199345 1250292 fix.go:108] machineExists: false. err=machine does not exist
I0512 00:08:36.202086 1250292 out.go:177] * docker "offline-containerd-20220512000808-1124136" container is missing, will recreate.
I0512 00:08:36.203500 1250292 delete.go:124] DEMOLISHING offline-containerd-20220512000808-1124136 ...
I0512 00:08:36.203589 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:08:36.241842 1250292 stop.go:79] host is in state
I0512 00:08:36.241909 1250292 main.go:134] libmachine: Stopping "offline-containerd-20220512000808-1124136"...
I0512 00:08:36.241987 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:08:36.277922 1250292 kic_runner.go:93] Run: systemctl --version
I0512 00:08:36.277951 1250292 kic_runner.go:114] Args: [docker exec --privileged offline-containerd-20220512000808-1124136 systemctl --version]
I0512 00:08:36.316102 1250292 kic_runner.go:93] Run: sudo service kubelet stop
I0512 00:08:36.316128 1250292 kic_runner.go:114] Args: [docker exec --privileged offline-containerd-20220512000808-1124136 sudo service kubelet stop]
I0512 00:08:36.354921 1250292 openrc.go:165] stop output:
** stderr **
Error response from daemon: Container 907a54b7bca5a537c6695e4432700d6f46869d4f88a56aa6140ce91a152a480b is not running
** /stderr **
W0512 00:08:36.354943 1250292 kic.go:439] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
stdout:
stderr:
Error response from daemon: Container 907a54b7bca5a537c6695e4432700d6f46869d4f88a56aa6140ce91a152a480b is not running
I0512 00:08:36.354999 1250292 kic_runner.go:93] Run: sudo service kubelet stop
I0512 00:08:36.355013 1250292 kic_runner.go:114] Args: [docker exec --privileged offline-containerd-20220512000808-1124136 sudo service kubelet stop]
I0512 00:08:36.391688 1250292 openrc.go:165] stop output:
** stderr **
Error response from daemon: Container 907a54b7bca5a537c6695e4432700d6f46869d4f88a56aa6140ce91a152a480b is not running
** /stderr **
W0512 00:08:36.391715 1250292 kic.go:441] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
stdout:
stderr:
Error response from daemon: Container 907a54b7bca5a537c6695e4432700d6f46869d4f88a56aa6140ce91a152a480b is not running
I0512 00:08:36.391746 1250292 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
I0512 00:08:36.391830 1250292 kic_runner.go:93] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
I0512 00:08:36.391856 1250292 kic_runner.go:114] Args: [docker exec --privileged offline-containerd-20220512000808-1124136 sudo -s eval crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator]
I0512 00:08:36.428911 1250292 kic.go:452] unable list containers : crictl list: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator": exit status 1
stdout:
stderr:
Error response from daemon: Container 907a54b7bca5a537c6695e4432700d6f46869d4f88a56aa6140ce91a152a480b is not running
I0512 00:08:36.428935 1250292 kic.go:462] successfully stopped kubernetes!
I0512 00:08:36.428985 1250292 kic_runner.go:93] Run: pgrep kube-apiserver
I0512 00:08:36.428996 1250292 kic_runner.go:114] Args: [docker exec --privileged offline-containerd-20220512000808-1124136 pgrep kube-apiserver]
I0512 00:08:36.504756 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:08:39.539614 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:08:42.572787 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:08:45.607248 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:08:48.644539 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:08:51.696831 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:08:54.727385 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:08:57.764824 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:09:00.800838 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:09:03.835279 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:09:06.870936 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:09:09.903473 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:09:12.936811 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:09:15.979495 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:09:19.017654 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:09:22.048815 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:09:25.084179 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:09:28.121816 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:09:31.156821 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:09:34.190560 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:09:37.224097 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:09:40.265547 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:09:43.304822 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:09:46.343658 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:09:49.376830 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:09:52.416782 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:09:55.449517 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:09:58.486044 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:10:01.520797 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:10:04.573403 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:10:07.607870 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:10:10.641602 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:10:13.677159 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:10:16.715014 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:10:19.750585 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:10:22.789304 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:10:25.823967 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:10:28.865065 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:10:31.913998 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:10:34.949955 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:10:37.988791 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:10:41.028161 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:10:44.069434 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:10:47.108831 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:10:50.144522 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:10:53.180841 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:10:56.219099 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:10:59.258096 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:11:02.328811 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:11:05.368828 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:11:08.405666 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:11:11.445081 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:11:14.483192 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:11:17.523317 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:11:20.562440 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:11:23.602410 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:11:26.642707 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:11:29.685561 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:11:32.720849 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:11:35.755513 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:11:38.792731 1250292 stop.go:59] stop err: Maximum number of retries (60) exceeded
I0512 00:11:38.792800 1250292 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
I0512 00:11:38.793307 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
W0512 00:11:38.829389 1250292 delete.go:135] deletehost failed: Docker machine "offline-containerd-20220512000808-1124136" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
I0512 00:11:38.829506 1250292 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-containerd-20220512000808-1124136
I0512 00:11:38.865311 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:11:38.900767 1250292 cli_runner.go:164] Run: docker exec --privileged -t offline-containerd-20220512000808-1124136 /bin/bash -c "sudo init 0"
W0512 00:11:38.938217 1250292 cli_runner.go:211] docker exec --privileged -t offline-containerd-20220512000808-1124136 /bin/bash -c "sudo init 0" returned with exit code 1
I0512 00:11:38.938259 1250292 oci.go:625] error shutdown offline-containerd-20220512000808-1124136: docker exec --privileged -t offline-containerd-20220512000808-1124136 /bin/bash -c "sudo init 0": exit status 1
stdout:
stderr:
Error response from daemon: Container 907a54b7bca5a537c6695e4432700d6f46869d4f88a56aa6140ce91a152a480b is not running
I0512 00:11:39.938436 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:11:39.977046 1250292 oci.go:639] temporary error: container offline-containerd-20220512000808-1124136 status is but expect it to be exited
I0512 00:11:39.977080 1250292 oci.go:645] Successfully shutdown container offline-containerd-20220512000808-1124136
I0512 00:11:39.977122 1250292 cli_runner.go:164] Run: docker rm -f -v offline-containerd-20220512000808-1124136
I0512 00:11:40.035203 1250292 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-containerd-20220512000808-1124136
W0512 00:11:40.074516 1250292 cli_runner.go:211] docker container inspect -f {{.Id}} offline-containerd-20220512000808-1124136 returned with exit code 1
I0512 00:11:40.074624 1250292 cli_runner.go:164] Run: docker network inspect offline-containerd-20220512000808-1124136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0512 00:11:40.108908 1250292 cli_runner.go:211] docker network inspect offline-containerd-20220512000808-1124136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0512 00:11:40.109022 1250292 network_create.go:272] running [docker network inspect offline-containerd-20220512000808-1124136] to gather additional debugging logs...
I0512 00:11:40.109052 1250292 cli_runner.go:164] Run: docker network inspect offline-containerd-20220512000808-1124136
W0512 00:11:40.140766 1250292 cli_runner.go:211] docker network inspect offline-containerd-20220512000808-1124136 returned with exit code 1
I0512 00:11:40.140803 1250292 network_create.go:275] error running [docker network inspect offline-containerd-20220512000808-1124136]: docker network inspect offline-containerd-20220512000808-1124136: exit status 1
stdout:
[]
stderr:
Error: No such network: offline-containerd-20220512000808-1124136
I0512 00:11:40.140822 1250292 network_create.go:277] output of [docker network inspect offline-containerd-20220512000808-1124136]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: offline-containerd-20220512000808-1124136
** /stderr **
W0512 00:11:40.140975 1250292 delete.go:139] delete failed (probably ok) <nil>
I0512 00:11:40.140988 1250292 fix.go:115] Sleeping 1 second for extra luck!
I0512 00:11:41.141845 1250292 start.go:131] createHost starting for "" (driver="docker")
I0512 00:11:41.144312 1250292 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
I0512 00:11:41.144481 1250292 start.go:165] libmachine.API.Create for "offline-containerd-20220512000808-1124136" (driver="docker")
I0512 00:11:41.144534 1250292 client.go:168] LocalClient.Create starting
I0512 00:11:41.144640 1250292 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem
I0512 00:11:41.144710 1250292 main.go:134] libmachine: Decoding PEM data...
I0512 00:11:41.144738 1250292 main.go:134] libmachine: Parsing certificate...
I0512 00:11:41.144819 1250292 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem
I0512 00:11:41.144851 1250292 main.go:134] libmachine: Decoding PEM data...
I0512 00:11:41.144874 1250292 main.go:134] libmachine: Parsing certificate...
I0512 00:11:41.145195 1250292 cli_runner.go:164] Run: docker network inspect offline-containerd-20220512000808-1124136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0512 00:11:41.181537 1250292 cli_runner.go:211] docker network inspect offline-containerd-20220512000808-1124136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0512 00:11:41.181630 1250292 network_create.go:272] running [docker network inspect offline-containerd-20220512000808-1124136] to gather additional debugging logs...
I0512 00:11:41.181654 1250292 cli_runner.go:164] Run: docker network inspect offline-containerd-20220512000808-1124136
W0512 00:11:41.227307 1250292 cli_runner.go:211] docker network inspect offline-containerd-20220512000808-1124136 returned with exit code 1
I0512 00:11:41.227342 1250292 network_create.go:275] error running [docker network inspect offline-containerd-20220512000808-1124136]: docker network inspect offline-containerd-20220512000808-1124136: exit status 1
stdout:
[]
stderr:
Error: No such network: offline-containerd-20220512000808-1124136
I0512 00:11:41.227357 1250292 network_create.go:277] output of [docker network inspect offline-containerd-20220512000808-1124136]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: offline-containerd-20220512000808-1124136
** /stderr **
I0512 00:11:41.227398 1250292 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0512 00:11:41.264334 1250292 network.go:284] reusing subnet 192.168.49.0 that has expired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a2bfe0] amended:false}} dirty:map[] misses:0}
I0512 00:11:41.264388 1250292 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0512 00:11:41.264409 1250292 network_create.go:115] attempt to create docker network offline-containerd-20220512000808-1124136 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0512 00:11:41.264470 1250292 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-containerd-20220512000808-1124136
W0512 00:11:41.316489 1250292 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-containerd-20220512000808-1124136 returned with exit code 1
W0512 00:11:41.316538 1250292 network_create.go:107] failed to create docker network offline-containerd-20220512000808-1124136 192.168.49.0/24, will retry: subnet is taken
I0512 00:11:41.317371 1250292 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-34088cb6720d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:10:97:05:8e}}
I0512 00:11:41.318271 1250292 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a2bfe0] amended:true}} dirty:map[192.168.49.0:0xc000a2bfe0 192.168.58.0:0xc000c062a0] misses:0}
I0512 00:11:41.318329 1250292 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0512 00:11:41.318346 1250292 network_create.go:115] attempt to create docker network offline-containerd-20220512000808-1124136 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0512 00:11:41.318401 1250292 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-containerd-20220512000808-1124136
I0512 00:11:41.406982 1250292 network_create.go:99] docker network offline-containerd-20220512000808-1124136 192.168.58.0/24 created
I0512 00:11:41.407029 1250292 kic.go:106] calculated static IP "192.168.58.2" for the "offline-containerd-20220512000808-1124136" container
I0512 00:11:41.407099 1250292 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0512 00:11:41.444814 1250292 cli_runner.go:164] Run: docker volume create offline-containerd-20220512000808-1124136 --label name.minikube.sigs.k8s.io=offline-containerd-20220512000808-1124136 --label created_by.minikube.sigs.k8s.io=true
I0512 00:11:41.480137 1250292 oci.go:103] Successfully created a docker volume offline-containerd-20220512000808-1124136
I0512 00:11:41.480225 1250292 cli_runner.go:164] Run: docker run --rm --name offline-containerd-20220512000808-1124136-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-containerd-20220512000808-1124136 --entrypoint /usr/bin/test -v offline-containerd-20220512000808-1124136:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib
I0512 00:11:42.202205 1250292 oci.go:107] Successfully prepared a docker volume offline-containerd-20220512000808-1124136
I0512 00:11:42.202263 1250292 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
I0512 00:11:42.202298 1250292 kic.go:179] Starting extracting preloaded images to volume ...
I0512 00:11:42.202377 1250292 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-containerd-20220512000808-1124136:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir
I0512 00:11:59.489037 1250292 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-containerd-20220512000808-1124136:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir: (17.286583615s)
I0512 00:11:59.489072 1250292 kic.go:188] duration metric: took 17.286772 seconds to extract preloaded images to volume
W0512 00:11:59.489189 1250292 cgroups_linux.go:88] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0512 00:11:59.489269 1250292 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0512 00:11:59.647321 1250292 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname offline-containerd-20220512000808-1124136 --name offline-containerd-20220512000808-1124136 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-containerd-20220512000808-1124136 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=offline-containerd-20220512000808-1124136 --network offline-containerd-20220512000808-1124136 --ip 192.168.58.2 --volume offline-containerd-20220512000808-1124136:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a
I0512 00:12:00.229234 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Running}}
I0512 00:12:00.272474 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:12:00.307080 1250292 cli_runner.go:164] Run: docker exec offline-containerd-20220512000808-1124136 stat /var/lib/dpkg/alternatives/iptables
I0512 00:12:00.415640 1250292 oci.go:247] the created container "offline-containerd-20220512000808-1124136" has a running status.
I0512 00:12:00.415681 1250292 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/offline-containerd-20220512000808-1124136/id_rsa...
I0512 00:12:01.080791 1250292 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/offline-containerd-20220512000808-1124136/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0512 00:12:01.267365 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:12:01.322423 1250292 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0512 00:12:01.322448 1250292 kic_runner.go:114] Args: [docker exec --privileged offline-containerd-20220512000808-1124136 chown docker:docker /home/docker/.ssh/authorized_keys]
I0512 00:12:01.479918 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:12:01.519161 1250292 machine.go:88] provisioning docker machine ...
I0512 00:12:01.519200 1250292 ubuntu.go:169] provisioning hostname "offline-containerd-20220512000808-1124136"
I0512 00:12:01.519269 1250292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136
I0512 00:12:01.554822 1250292 main.go:134] libmachine: Using SSH client type: native
I0512 00:12:01.555030 1250292 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil> [] 0s} 127.0.0.1 50156 <nil> <nil>}
I0512 00:12:01.555057 1250292 main.go:134] libmachine: About to run SSH command:
sudo hostname offline-containerd-20220512000808-1124136 && echo "offline-containerd-20220512000808-1124136" | sudo tee /etc/hostname
I0512 00:12:01.750107 1250292 main.go:134] libmachine: SSH cmd err, output: <nil>: offline-containerd-20220512000808-1124136
I0512 00:12:01.750200 1250292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136
I0512 00:12:01.781791 1250292 main.go:134] libmachine: Using SSH client type: native
I0512 00:12:01.781953 1250292 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil> [] 0s} 127.0.0.1 50156 <nil> <nil>}
I0512 00:12:01.781975 1250292 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\soffline-containerd-20220512000808-1124136' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 offline-containerd-20220512000808-1124136/g' /etc/hosts;
else
echo '127.0.1.1 offline-containerd-20220512000808-1124136' | sudo tee -a /etc/hosts;
fi
fi
I0512 00:12:01.897366 1250292 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0512 00:12:01.897399 1250292 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube}
I0512 00:12:01.897434 1250292 ubuntu.go:177] setting up certificates
I0512 00:12:01.897449 1250292 provision.go:83] configureAuth start
I0512 00:12:01.897514 1250292 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" offline-containerd-20220512000808-1124136
I0512 00:12:01.941294 1250292 provision.go:138] copyHostCerts
I0512 00:12:01.941366 1250292 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.pem, removing ...
I0512 00:12:01.941383 1250292 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.pem
I0512 00:12:01.941460 1250292 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.pem (1082 bytes)
I0512 00:12:01.941557 1250292 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cert.pem, removing ...
I0512 00:12:01.941571 1250292 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cert.pem
I0512 00:12:01.941607 1250292 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cert.pem (1123 bytes)
I0512 00:12:01.941683 1250292 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/key.pem, removing ...
I0512 00:12:01.941696 1250292 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/key.pem
I0512 00:12:01.941729 1250292 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/key.pem (1679 bytes)
I0512 00:12:01.941791 1250292 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca-key.pem org=jenkins.offline-containerd-20220512000808-1124136 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube offline-containerd-20220512000808-1124136]
I0512 00:12:02.057181 1250292 provision.go:172] copyRemoteCerts
I0512 00:12:02.057254 1250292 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0512 00:12:02.057303 1250292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136
I0512 00:12:02.096439 1250292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50156 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/offline-containerd-20220512000808-1124136/id_rsa Username:docker}
I0512 00:12:02.180243 1250292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server.pem --> /etc/docker/server.pem (1294 bytes)
I0512 00:12:02.199680 1250292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0512 00:12:02.218299 1250292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0512 00:12:02.235793 1250292 provision.go:86] duration metric: configureAuth took 338.329346ms
I0512 00:12:02.235826 1250292 ubuntu.go:193] setting minikube options for container-runtime
I0512 00:12:02.235999 1250292 config.go:178] Loaded profile config "offline-containerd-20220512000808-1124136": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
I0512 00:12:02.236013 1250292 machine.go:91] provisioned docker machine in 716.830594ms
I0512 00:12:02.236020 1250292 client.go:171] LocalClient.Create took 21.091473549s
I0512 00:12:02.236039 1250292 start.go:173] duration metric: libmachine.API.Create for "offline-containerd-20220512000808-1124136" took 21.091559951s
I0512 00:12:02.236061 1250292 start.go:306] post-start starting for "offline-containerd-20220512000808-1124136" (driver="docker")
I0512 00:12:02.236073 1250292 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0512 00:12:02.236122 1250292 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0512 00:12:02.236156 1250292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136
I0512 00:12:02.270720 1250292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50156 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/offline-containerd-20220512000808-1124136/id_rsa Username:docker}
I0512 00:12:02.356184 1250292 ssh_runner.go:195] Run: cat /etc/os-release
I0512 00:12:02.358944 1250292 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0512 00:12:02.358972 1250292 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0512 00:12:02.358986 1250292 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0512 00:12:02.358999 1250292 info.go:137] Remote host: Ubuntu 20.04.4 LTS
I0512 00:12:02.359016 1250292 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/addons for local assets ...
I0512 00:12:02.359073 1250292 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files for local assets ...
I0512 00:12:02.359153 1250292 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/11241362.pem -> 11241362.pem in /etc/ssl/certs
I0512 00:12:02.359262 1250292 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0512 00:12:02.366306 1250292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/11241362.pem --> /etc/ssl/certs/11241362.pem (1708 bytes)
I0512 00:12:02.384438 1250292 start.go:309] post-start completed in 148.35541ms
I0512 00:12:02.384883 1250292 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" offline-containerd-20220512000808-1124136
I0512 00:12:02.420111 1250292 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/config.json ...
I0512 00:12:02.420383 1250292 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0512 00:12:02.420439 1250292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136
I0512 00:12:02.457761 1250292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50156 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/offline-containerd-20220512000808-1124136/id_rsa Username:docker}
I0512 00:12:02.537127 1250292 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0512 00:12:02.541405 1250292 start.go:134] duration metric: createHost completed in 21.399523803s
I0512 00:12:02.541495 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
W0512 00:12:02.573467 1250292 fix.go:129] unexpected machine state, will restart: <nil>
I0512 00:12:02.573520 1250292 machine.go:88] provisioning docker machine ...
I0512 00:12:02.573543 1250292 ubuntu.go:169] provisioning hostname "offline-containerd-20220512000808-1124136"
I0512 00:12:02.573603 1250292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136
I0512 00:12:02.605896 1250292 main.go:134] libmachine: Using SSH client type: native
I0512 00:12:02.606110 1250292 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil> [] 0s} 127.0.0.1 50156 <nil> <nil>}
I0512 00:12:02.606139 1250292 main.go:134] libmachine: About to run SSH command:
sudo hostname offline-containerd-20220512000808-1124136 && echo "offline-containerd-20220512000808-1124136" | sudo tee /etc/hostname
I0512 00:12:02.725409 1250292 main.go:134] libmachine: SSH cmd err, output: <nil>: offline-containerd-20220512000808-1124136
I0512 00:12:02.725484 1250292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136
I0512 00:12:02.759468 1250292 main.go:134] libmachine: Using SSH client type: native
I0512 00:12:02.759692 1250292 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil> [] 0s} 127.0.0.1 50156 <nil> <nil>}
I0512 00:12:02.759731 1250292 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\soffline-containerd-20220512000808-1124136' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 offline-containerd-20220512000808-1124136/g' /etc/hosts;
else
echo '127.0.1.1 offline-containerd-20220512000808-1124136' | sudo tee -a /etc/hosts;
fi
fi
I0512 00:12:02.872647 1250292 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0512 00:12:02.872694 1250292 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube}
I0512 00:12:02.872722 1250292 ubuntu.go:177] setting up certificates
I0512 00:12:02.872732 1250292 provision.go:83] configureAuth start
I0512 00:12:02.872786 1250292 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" offline-containerd-20220512000808-1124136
I0512 00:12:02.906769 1250292 provision.go:138] copyHostCerts
I0512 00:12:02.906857 1250292 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cert.pem, removing ...
I0512 00:12:02.906878 1250292 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cert.pem
I0512 00:12:02.906937 1250292 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cert.pem (1123 bytes)
I0512 00:12:02.907053 1250292 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/key.pem, removing ...
I0512 00:12:02.907073 1250292 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/key.pem
I0512 00:12:02.907105 1250292 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/key.pem (1679 bytes)
I0512 00:12:02.907184 1250292 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.pem, removing ...
I0512 00:12:02.907199 1250292 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.pem
I0512 00:12:02.907227 1250292 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.pem (1082 bytes)
I0512 00:12:02.907293 1250292 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca-key.pem org=jenkins.offline-containerd-20220512000808-1124136 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube offline-containerd-20220512000808-1124136]
I0512 00:12:03.020224 1250292 provision.go:172] copyRemoteCerts
I0512 00:12:03.020296 1250292 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0512 00:12:03.020347 1250292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136
I0512 00:12:03.057180 1250292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50156 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/offline-containerd-20220512000808-1124136/id_rsa Username:docker}
I0512 00:12:03.139945 1250292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0512 00:12:03.157321 1250292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server.pem --> /etc/docker/server.pem (1294 bytes)
I0512 00:12:03.174259 1250292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0512 00:12:03.191750 1250292 provision.go:86] duration metric: configureAuth took 319.000123ms
I0512 00:12:03.191781 1250292 ubuntu.go:193] setting minikube options for container-runtime
I0512 00:12:03.191999 1250292 config.go:178] Loaded profile config "offline-containerd-20220512000808-1124136": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
I0512 00:12:03.192021 1250292 machine.go:91] provisioned docker machine in 618.492455ms
I0512 00:12:03.192032 1250292 start.go:306] post-start starting for "offline-containerd-20220512000808-1124136" (driver="docker")
I0512 00:12:03.192044 1250292 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0512 00:12:03.192103 1250292 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0512 00:12:03.192144 1250292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136
I0512 00:12:03.226910 1250292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50156 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/offline-containerd-20220512000808-1124136/id_rsa Username:docker}
I0512 00:12:03.316341 1250292 ssh_runner.go:195] Run: cat /etc/os-release
I0512 00:12:03.319202 1250292 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0512 00:12:03.319236 1250292 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0512 00:12:03.319249 1250292 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0512 00:12:03.319262 1250292 info.go:137] Remote host: Ubuntu 20.04.4 LTS
I0512 00:12:03.319281 1250292 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/addons for local assets ...
I0512 00:12:03.319336 1250292 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files for local assets ...
I0512 00:12:03.319414 1250292 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/11241362.pem -> 11241362.pem in /etc/ssl/certs
I0512 00:12:03.319534 1250292 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0512 00:12:03.326272 1250292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/11241362.pem --> /etc/ssl/certs/11241362.pem (1708 bytes)
I0512 00:12:03.343897 1250292 start.go:309] post-start completed in 151.846103ms
I0512 00:12:03.343978 1250292 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0512 00:12:03.344033 1250292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136
I0512 00:12:03.380071 1250292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50156 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/offline-containerd-20220512000808-1124136/id_rsa Username:docker}
I0512 00:12:03.464838 1250292 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0512 00:12:03.468532 1250292 fix.go:57] fixHost completed within 3m27.305842961s
I0512 00:12:03.468553 1250292 start.go:81] releasing machines lock for "offline-containerd-20220512000808-1124136", held for 3m27.305892516s
I0512 00:12:03.468628 1250292 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" offline-containerd-20220512000808-1124136
I0512 00:12:03.509466 1250292 out.go:177] * Found network options:
I0512 00:12:03.511374 1250292 out.go:177] - HTTP_PROXY=172.16.1.1:1
W0512 00:12:03.512925 1250292 out.go:239] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.58.2).
! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.58.2).
I0512 00:12:03.514644 1250292 out.go:177] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
I0512 00:12:03.516564 1250292 ssh_runner.go:195] Run: sudo service crio stop
I0512 00:12:03.516620 1250292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136
I0512 00:12:03.516650 1250292 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0512 00:12:03.516738 1250292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136
I0512 00:12:03.549867 1250292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50156 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/offline-containerd-20220512000808-1124136/id_rsa Username:docker}
I0512 00:12:03.557077 1250292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50156 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/offline-containerd-20220512000808-1124136/id_rsa Username:docker}
I0512 00:12:04.001609 1250292 openrc.go:165] stop output:
I0512 00:12:04.001699 1250292 ssh_runner.go:195] Run: sudo service crio status
I0512 00:12:04.019736 1250292 docker.go:187] disabling docker service ...
I0512 00:12:04.019795 1250292 ssh_runner.go:195] Run: sudo service docker.socket stop
I0512 00:12:04.387882 1250292 openrc.go:165] stop output:
** stderr **
Failed to stop docker.socket.service: Unit docker.socket.service not loaded.
** /stderr **
E0512 00:12:04.387914 1250292 docker.go:190] "Failed to stop" err=<
sudo service docker.socket stop: Process exited with status 5
stdout:
stderr:
Failed to stop docker.socket.service: Unit docker.socket.service not loaded.
> service="docker.socket"
I0512 00:12:04.387962 1250292 ssh_runner.go:195] Run: sudo service docker.service stop
I0512 00:12:04.761371 1250292 openrc.go:165] stop output:
** stderr **
Failed to stop docker.service.service: Unit docker.service.service not loaded.
** /stderr **
E0512 00:12:04.761401 1250292 docker.go:193] "Failed to stop" err=<
sudo service docker.service stop: Process exited with status 5
stdout:
stderr:
Failed to stop docker.service.service: Unit docker.service.service not loaded.
> service="docker.service"
W0512 00:12:04.761412 1250292 cruntime.go:284] disable failed: sudo service docker.service stop: Process exited with status 5
stdout:
stderr:
Failed to stop docker.service.service: Unit docker.service.service not loaded.
I0512 00:12:04.761451 1250292 ssh_runner.go:195] Run: sudo service docker status
W0512 00:12:04.779871 1250292 containerd.go:245] disableOthers: Docker is still active
I0512 00:12:04.780050 1250292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0512 00:12:04.794248 1250292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
I0512 00:12:04.809996 1250292 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0512 00:12:04.817475 1250292 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0512 00:12:04.826625 1250292 ssh_runner.go:195] Run: sudo service containerd restart
I0512 00:12:04.937530 1250292 openrc.go:152] restart output:
I0512 00:12:04.937566 1250292 start.go:456] Will wait 60s for socket path /run/containerd/containerd.sock
I0512 00:12:04.937642 1250292 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0512 00:12:04.955764 1250292 start.go:477] Will wait 60s for crictl version
I0512 00:12:04.955846 1250292 ssh_runner.go:195] Run: sudo crictl version
I0512 00:12:04.992104 1250292 start.go:486] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.6.4
RuntimeApiVersion: v1alpha2
I0512 00:12:04.992169 1250292 ssh_runner.go:195] Run: containerd --version
I0512 00:12:05.059868 1250292 ssh_runner.go:195] Run: containerd --version
I0512 00:12:05.100713 1250292 out.go:177] * Preparing Kubernetes v1.23.5 on containerd 1.6.4 ...
I0512 00:12:05.102346 1250292 out.go:177] - env HTTP_PROXY=172.16.1.1:1
I0512 00:12:05.103865 1250292 cli_runner.go:164] Run: docker network inspect offline-containerd-20220512000808-1124136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0512 00:12:05.148714 1250292 ssh_runner.go:195] Run: grep 192.168.58.1 host.minikube.internal$ /etc/hosts
I0512 00:12:05.153471 1250292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0512 00:12:05.167876 1250292 out.go:177] - kubelet.cni-conf-dir=/etc/cni/net.mk
I0512 00:12:05.169404 1250292 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
I0512 00:12:05.169500 1250292 ssh_runner.go:195] Run: sudo crictl images --output json
I0512 00:12:05.199387 1250292 containerd.go:607] all images are preloaded for containerd runtime.
I0512 00:12:05.199413 1250292 containerd.go:521] Images already preloaded, skipping extraction
I0512 00:12:05.199469 1250292 ssh_runner.go:195] Run: sudo crictl images --output json
I0512 00:12:05.230869 1250292 containerd.go:607] all images are preloaded for containerd runtime.
I0512 00:12:05.230901 1250292 cache_images.go:84] Images are preloaded, skipping loading
I0512 00:12:05.230960 1250292 ssh_runner.go:195] Run: sudo crictl info
I0512 00:12:05.261849 1250292 cni.go:95] Creating CNI manager for ""
I0512 00:12:05.261881 1250292 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I0512 00:12:05.261906 1250292 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0512 00:12:05.261924 1250292 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:offline-containerd-20220512000808-1124136 NodeName:offline-containerd-20220512000808-1124136 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0512 00:12:05.262106 1250292 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.58.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "offline-containerd-20220512000808-1124136"
kubeletExtraArgs:
node-ip: 192.168.58.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.23.5
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0512 00:12:05.262233 1250292 kubeadm.go:936] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=offline-containerd-20220512000808-1124136 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.23.5 ClusterName:offline-containerd-20220512000808-1124136 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0512 00:12:05.262310 1250292 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
I0512 00:12:05.269846 1250292 binaries.go:44] Found k8s binaries, skipping transfer
I0512 00:12:05.270001 1250292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /var/lib/minikube /etc/init.d
I0512 00:12:05.277111 1250292 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (586 bytes)
I0512 00:12:05.289751 1250292 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0512 00:12:05.302158 1250292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2070 bytes)
I0512 00:12:05.315217 1250292 ssh_runner.go:362] scp memory --> /var/lib/minikube/openrc-restart-wrapper.sh (233 bytes)
I0512 00:12:05.327716 1250292 ssh_runner.go:362] scp memory --> /etc/init.d/kubelet (839 bytes)
I0512 00:12:05.340184 1250292 ssh_runner.go:195] Run: grep 192.168.58.2 control-plane.minikube.internal$ /etc/hosts
I0512 00:12:05.342996 1250292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0512 00:12:05.352078 1250292 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136 for IP: 192.168.58.2
I0512 00:12:05.352187 1250292 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.key
I0512 00:12:05.352356 1250292 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/proxy-client-ca.key
I0512 00:12:05.352440 1250292 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/client.key
I0512 00:12:05.352458 1250292 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/client.crt with IP's: []
I0512 00:12:05.649545 1250292 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/client.crt ...
I0512 00:12:05.649591 1250292 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/client.crt: {Name:mk5345a3a266d90f46a7c4ffd944c025ff2a9900 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 00:12:05.649828 1250292 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/client.key ...
I0512 00:12:05.649846 1250292 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/client.key: {Name:mk692c55778caa4d2f81c4029805be188f7de7bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 00:12:05.649979 1250292 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/apiserver.key.cee25041
I0512 00:12:05.650002 1250292 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0512 00:12:05.701631 1250292 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/apiserver.crt.cee25041 ...
I0512 00:12:05.701672 1250292 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/apiserver.crt.cee25041: {Name:mk07bfb0c1337621227b7fe571aaaa8b028d54d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 00:12:05.701883 1250292 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/apiserver.key.cee25041 ...
I0512 00:12:05.701902 1250292 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/apiserver.key.cee25041: {Name:mk55487fc86a68805e858b05ef3763e7eb6dd8ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 00:12:05.702014 1250292 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/apiserver.crt
I0512 00:12:05.702092 1250292 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/apiserver.key
I0512 00:12:05.702158 1250292 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/proxy-client.key
I0512 00:12:05.702180 1250292 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/proxy-client.crt with IP's: []
I0512 00:12:05.926909 1250292 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/proxy-client.crt ...
I0512 00:12:05.926945 1250292 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/proxy-client.crt: {Name:mkde3e2007c3238f79e29faf394af2a01cf02535 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 00:12:05.927145 1250292 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/proxy-client.key ...
I0512 00:12:05.927163 1250292 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/proxy-client.key: {Name:mkce992cf25606f6108327c6550dcb2fc6f17e8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 00:12:05.927396 1250292 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/1124136.pem (1338 bytes)
W0512 00:12:05.927456 1250292 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/1124136_empty.pem, impossibly tiny 0 bytes
I0512 00:12:05.927479 1250292 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca-key.pem (1679 bytes)
I0512 00:12:05.927520 1250292 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem (1082 bytes)
I0512 00:12:05.927560 1250292 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem (1123 bytes)
I0512 00:12:05.927596 1250292 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/key.pem (1679 bytes)
I0512 00:12:05.927662 1250292 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/11241362.pem (1708 bytes)
I0512 00:12:05.928364 1250292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0512 00:12:05.946960 1250292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0512 00:12:05.964798 1250292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0512 00:12:05.983160 1250292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0512 00:12:06.000688 1250292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0512 00:12:06.019024 1250292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0512 00:12:06.037523 1250292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0512 00:12:06.055782 1250292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0512 00:12:06.074993 1250292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0512 00:12:06.094235 1250292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/1124136.pem --> /usr/share/ca-certificates/1124136.pem (1338 bytes)
I0512 00:12:06.112223 1250292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/11241362.pem --> /usr/share/ca-certificates/11241362.pem (1708 bytes)
I0512 00:12:06.131224 1250292 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0512 00:12:06.144993 1250292 ssh_runner.go:195] Run: openssl version
I0512 00:12:06.150151 1250292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0512 00:12:06.159406 1250292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0512 00:12:06.162984 1250292 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 11 22:53 /usr/share/ca-certificates/minikubeCA.pem
I0512 00:12:06.163043 1250292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0512 00:12:06.168017 1250292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0512 00:12:06.176314 1250292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1124136.pem && ln -fs /usr/share/ca-certificates/1124136.pem /etc/ssl/certs/1124136.pem"
I0512 00:12:06.185283 1250292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1124136.pem
I0512 00:12:06.188728 1250292 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 11 22:59 /usr/share/ca-certificates/1124136.pem
I0512 00:12:06.188777 1250292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1124136.pem
I0512 00:12:06.194220 1250292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1124136.pem /etc/ssl/certs/51391683.0"
I0512 00:12:06.202597 1250292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11241362.pem && ln -fs /usr/share/ca-certificates/11241362.pem /etc/ssl/certs/11241362.pem"
I0512 00:12:06.210952 1250292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11241362.pem
I0512 00:12:06.214228 1250292 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 11 22:59 /usr/share/ca-certificates/11241362.pem
I0512 00:12:06.214280 1250292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11241362.pem
I0512 00:12:06.219678 1250292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11241362.pem /etc/ssl/certs/3ec20f2e.0"
I0512 00:12:06.229000 1250292 kubeadm.go:391] StartCluster: {Name:offline-containerd-20220512000808-1124136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:offline-containerd-20220512000808-1124136 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0512 00:12:06.229129 1250292 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0512 00:12:06.229172 1250292 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0512 00:12:06.258354 1250292 cri.go:87] found id: ""
I0512 00:12:06.258425 1250292 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0512 00:12:06.267505 1250292 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0512 00:12:06.275858 1250292 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0512 00:12:06.275923 1250292 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0512 00:12:06.285516 1250292 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0512 00:12:06.285575 1250292 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0512 00:12:16.838521 1250292 out.go:204] - Generating certificates and keys ...
I0512 00:12:16.841577 1250292 out.go:204] - Booting up control plane ...
I0512 00:12:16.844422 1250292 out.go:204] - Configuring RBAC rules ...
I0512 00:12:16.846690 1250292 cni.go:95] Creating CNI manager for ""
I0512 00:12:16.846719 1250292 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I0512 00:12:16.848464 1250292 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0512 00:12:16.849989 1250292 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0512 00:12:16.854651 1250292 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
I0512 00:12:16.854673 1250292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I0512 00:12:16.889832 1250292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0512 00:12:18.107661 1250292 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.217772071s)
I0512 00:12:18.107729 1250292 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0512 00:12:18.107875 1250292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:12:18.107959 1250292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=50a7977b568d2ad3e04003527a57f4502d6177a0 minikube.k8s.io/name=offline-containerd-20220512000808-1124136 minikube.k8s.io/updated_at=2022_05_12T00_12_18_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:12:18.214342 1250292 ops.go:34] apiserver oom_adj: -16
I0512 00:12:18.214433 1250292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:12:18.793318 1250292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:12:19.293825 1250292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:12:19.794066 1250292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:12:20.293855 1250292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:12:20.793900 1250292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:12:21.293920 1250292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:12:21.793475 1250292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:12:22.293942 1250292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:12:22.793313 1250292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:12:23.293299 1250292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:12:23.794002 1250292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:12:24.293817 1250292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:12:24.794182 1250292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:12:25.293890 1250292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:12:25.793891 1250292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:12:26.293849 1250292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:12:26.793423 1250292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:12:27.294194 1250292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:12:27.793228 1250292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:12:28.293689 1250292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:12:28.793785 1250292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:12:28.873219 1250292 kubeadm.go:1020] duration metric: took 10.765380079s to wait for elevateKubeSystemPrivileges.
I0512 00:12:28.873254 1250292 kubeadm.go:393] StartCluster complete in 22.644275369s
I0512 00:12:28.873277 1250292 settings.go:142] acquiring lock: {Name:mk876d4cb481a2d44e25e5a696fd3049db2d03e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 00:12:28.873405 1250292 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
I0512 00:12:28.875401 1250292 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig: {Name:mk46c0a279b105d79911c4caf4e55f91113fa375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 00:12:28.876927 1250292 kapi.go:59] client config for offline-containerd-20220512000808-1124136: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1701900), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0512 00:12:29.392500 1250292 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "offline-containerd-20220512000808-1124136" rescaled to 1
I0512 00:12:29.392580 1250292 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0512 00:12:29.392621 1250292 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0512 00:12:29.394480 1250292 out.go:177] * Verifying Kubernetes components...
I0512 00:12:29.392620 1250292 addons.go:415] enableAddons start: toEnable=map[], additional=[]
I0512 00:12:29.392843 1250292 config.go:178] Loaded profile config "offline-containerd-20220512000808-1124136": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
I0512 00:12:29.396087 1250292 ssh_runner.go:195] Run: sudo service kubelet status
I0512 00:12:29.396117 1250292 addons.go:65] Setting default-storageclass=true in profile "offline-containerd-20220512000808-1124136"
I0512 00:12:29.396149 1250292 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "offline-containerd-20220512000808-1124136"
I0512 00:12:29.396117 1250292 addons.go:65] Setting storage-provisioner=true in profile "offline-containerd-20220512000808-1124136"
I0512 00:12:29.396254 1250292 addons.go:153] Setting addon storage-provisioner=true in "offline-containerd-20220512000808-1124136"
W0512 00:12:29.396268 1250292 addons.go:165] addon storage-provisioner should already be in state true
I0512 00:12:29.396312 1250292 host.go:66] Checking if "offline-containerd-20220512000808-1124136" exists ...
I0512 00:12:29.396577 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:12:29.396804 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:12:29.454371 1250292 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0512 00:12:29.455436 1250292 kapi.go:59] client config for offline-containerd-20220512000808-1124136: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1701900), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0512 00:12:29.455857 1250292 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0512 00:12:29.455881 1250292 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0512 00:12:29.455941 1250292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136
I0512 00:12:29.461266 1250292 addons.go:153] Setting addon default-storageclass=true in "offline-containerd-20220512000808-1124136"
W0512 00:12:29.461306 1250292 addons.go:165] addon default-storageclass should already be in state true
I0512 00:12:29.461344 1250292 host.go:66] Checking if "offline-containerd-20220512000808-1124136" exists ...
I0512 00:12:29.461959 1250292 cli_runner.go:164] Run: docker container inspect offline-containerd-20220512000808-1124136 --format={{.State.Status}}
I0512 00:12:29.495752 1250292 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.58.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0512 00:12:29.497490 1250292 kapi.go:59] client config for offline-containerd-20220512000808-1124136: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/offline-containerd-20220512000808-1124136/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1701900), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0512 00:12:29.497801 1250292 node_ready.go:35] waiting up to 6m0s for node "offline-containerd-20220512000808-1124136" to be "Ready" ...
I0512 00:12:29.513307 1250292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50156 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/offline-containerd-20220512000808-1124136/id_rsa Username:docker}
I0512 00:12:29.526698 1250292 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
I0512 00:12:29.526732 1250292 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0512 00:12:29.526785 1250292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-containerd-20220512000808-1124136
I0512 00:12:29.584403 1250292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50156 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/offline-containerd-20220512000808-1124136/id_rsa Username:docker}
I0512 00:12:29.688270 1250292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0512 00:12:29.777588 1250292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0512 00:12:29.896234 1250292 start.go:815] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
I0512 00:12:30.324070 1250292 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0512 00:12:30.325611 1250292 addons.go:417] enableAddons completed in 932.993586ms
I0512 00:12:31.508185 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:12:33.508957 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:12:35.657420 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:12:38.008789 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:12:40.508887 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:12:42.509603 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:12:45.008328 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:12:47.508374 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:12:49.508880 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:12:52.008937 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:12:54.508850 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:12:57.008583 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:12:59.508095 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:13:01.508988 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:13:04.008353 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:13:06.008630 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:13:08.009175 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:13:10.508816 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:13:13.008568 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:13:15.508824 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:13:17.508885 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:13:19.509102 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:13:22.007927 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:13:24.008257 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:13:26.008471 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:13:28.009313 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:13:30.508872 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:13:33.008744 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:13:35.508900 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:13:38.008583 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:13:40.008831 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:13:42.508889 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:13:44.509315 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:13:46.509815 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:13:49.009285 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:13:51.508620 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:13:54.009120 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:13:56.009257 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:13:58.509881 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:14:00.512868 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:14:03.008059 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:14:05.508546 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:14:07.509000 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:14:10.008520 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:14:12.509090 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:14:15.199904 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:14:17.508795 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:14:19.508970 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:14:21.509156 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:14:24.008513 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:14:26.008771 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:14:28.509044 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:14:31.008150 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:14:33.008889 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:14:35.508657 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:14:37.509450 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:14:40.008602 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:14:42.008651 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:14:44.009161 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:14:46.508838 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:14:49.007883 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:14:51.008944 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:14:53.508609 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:14:55.509064 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:14:58.008998 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:15:00.509080 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:15:03.008138 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:15:05.008250 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:15:07.008631 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:15:09.508662 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:15:12.008909 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:15:14.508117 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:15:16.508597 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:15:18.508818 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:15:21.009033 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:15:23.010867 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:15:25.507832 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:15:27.509771 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:15:30.008418 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:15:32.008962 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:15:34.509078 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:15:36.509412 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:15:38.509724 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:15:40.543088 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:15:43.011112 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:15:45.508236 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:15:47.509123 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:15:50.008896 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:15:52.508705 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:15:54.509274 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:15:57.008352 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:15:59.508227 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:16:02.007766 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:16:04.509010 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:16:06.509305 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:16:09.008147 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:16:11.008876 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:16:13.508755 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:16:15.550771 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:16:18.008734 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:16:20.508954 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:16:22.509063 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:16:25.008458 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:16:27.509358 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:16:29.509718 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:16:29.512134 1250292 node_ready.go:38] duration metric: took 4m0.014310089s waiting for node "offline-containerd-20220512000808-1124136" to be "Ready" ...
I0512 00:16:29.513976 1250292 out.go:177]
W0512 00:16:29.515454 1250292 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
W0512 00:16:29.515471 1250292 out.go:239] *
W0512 00:16:29.516253 1250292 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0512 00:16:29.518102 1250292 out.go:177]
** /stderr **
aab_offline_test.go:58: out/minikube-linux-amd64 start -p offline-containerd-20220512000808-1124136 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker --container-runtime=containerd failed: exit status 80
panic.go:482: *** TestOffline FAILED at 2022-05-12 00:16:29.562874128 +0000 UTC m=+5051.186702888
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect offline-containerd-20220512000808-1124136
helpers_test.go:235: (dbg) docker inspect offline-containerd-20220512000808-1124136:
-- stdout --
[
{
"Id": "1a4f162f5af5f7d2713faa7292f513f52d7274361d63e3e489a956dc56da05cd",
"Created": "2022-05-12T00:11:59.687088835Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 1269883,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-05-12T00:12:00.220269142Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:8a42e1145657f551cd435eddb43b96ab44d0facbe44106da934225366eeb7757",
"ResolvConfPath": "/var/lib/docker/containers/1a4f162f5af5f7d2713faa7292f513f52d7274361d63e3e489a956dc56da05cd/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/1a4f162f5af5f7d2713faa7292f513f52d7274361d63e3e489a956dc56da05cd/hostname",
"HostsPath": "/var/lib/docker/containers/1a4f162f5af5f7d2713faa7292f513f52d7274361d63e3e489a956dc56da05cd/hosts",
"LogPath": "/var/lib/docker/containers/1a4f162f5af5f7d2713faa7292f513f52d7274361d63e3e489a956dc56da05cd/1a4f162f5af5f7d2713faa7292f513f52d7274361d63e3e489a956dc56da05cd-json.log",
"Name": "/offline-containerd-20220512000808-1124136",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"offline-containerd-20220512000808-1124136:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "offline-containerd-20220512000808-1124136",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 2147483648,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 4294967296,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/3a51dda09cf65180ddb24b69b17dc285f4410a11db0b0ba7cb4aaec729d4c348-init/diff:/var/lib/docker/overlay2/6ada1a3e333093c0a78c285c5529934a642741adf58c30ef26b5511971232381/diff:/var/lib/docker/overlay2/903aa1d07fb380966f9b7500a23eb0998175947d1afc72a340bbc3f11eef81f6/diff:/var/lib/docker/overlay2/5c43c7f7c2e0cabb9dfa8a9d8ef673378ed83f6f29ad95ffe62e7e885f4e9169/diff:/var/lib/docker/overlay2/ecdce4cff3047380d392bb033443db70836cf1eb65727be860b8970ecf24acba/diff:/var/lib/docker/overlay2/c7b911771305e3cc085f14474c93bda04ba09f5438dd420c7ef4c99e74596644/diff:/var/lib/docker/overlay2/c9276f7985096a6f03b4b6f5cc99aab1ead585b51f3937b4767ea45fc55e4c07/diff:/var/lib/docker/overlay2/b7f627f7b92301015a20a518da5b02c7acd4d9e1fca789bdabbcff123e62315e/diff:/var/lib/docker/overlay2/8c2793f9fc24184b295fc2361636dba8171053746212efb4c0a3cd2b77706ecb/diff:/var/lib/docker/overlay2/841ef2027c01b763ba6fe76e1d29a683f699ab80b08c68ab1b1fc7d7fccde0c0/diff:/var/lib/docker/overlay2/3501f5
25f8033e2a0587438badd8a144345e966b2dc11610e101ffac63c1821a/diff:/var/lib/docker/overlay2/5b1d4e8fcafdecb6a16db928302fac256a89f24b8a84073d338013951e38f05a/diff:/var/lib/docker/overlay2/16e3fb6b357ff8f3cb31b35f42304ebdca4e57dd914af42cf9bee22a7474f110/diff:/var/lib/docker/overlay2/4f349219d2dbc382f5f364dddac59d1c681d17a67dbbcb66a23f8a934fa7cf32/diff:/var/lib/docker/overlay2/1e87fedc8ed0f0b4c644804208c450b54e0e7ae776a668c97e8d05ec730b9852/diff:/var/lib/docker/overlay2/3fc0bb795aee168041eee08c8ef2a069f29ded9016994b27d2430d19dbdf7879/diff:/var/lib/docker/overlay2/88fbfc61748460ff29d1a03795d2b92de3d36f9e136fef20afd466b53df71297/diff:/var/lib/docker/overlay2/6f92084d550bbc0757f09b989fe92e9baa6205efeccd3711a9b3bdd041745f62/diff:/var/lib/docker/overlay2/e28b073369009049b458d492750530c1b2923196371d6663da1996a046257746/diff:/var/lib/docker/overlay2/91c65ca1d519a69fdef64ed2b1622a77eec20a77d0673b55439730000043d858/diff:/var/lib/docker/overlay2/205daa544e0ad824eb7b1d871f57904940d8ac389a8a43293d6ef583d18e1290/diff:/var/lib/d
ocker/overlay2/778dfdbbd3ff4d9ff6d556f19589fe43b5cf8188c84c3d48ccbff40315c6f364/diff:/var/lib/docker/overlay2/17e2e137d217c55d6b0a430d62c6cae3be9fd113a7451386c821e1dc1a952c50/diff:/var/lib/docker/overlay2/87bd0c79641545186a3a0293bfe938d37e484a62c2e2162b6afac4ff27780dbf/diff:/var/lib/docker/overlay2/ae67058af04b9e8de1302d973ea8ab3cf26edbcfbbb58bedef000882616a742f/diff:/var/lib/docker/overlay2/fbae6e4b382e9357b0f7b64a53beae46c119ac3e66f489532987988dccdbf004/diff:/var/lib/docker/overlay2/8aa2bff4a8e7772885d9f9d6649c2c0391507f6190623954b482cff365ec1424/diff:/var/lib/docker/overlay2/dafdc68afa7e852b5710797ef544399fbdd2a25c1ae50b7b244218dd21dba407/diff:/var/lib/docker/overlay2/265eb9a4b7fbc698fec1b5c5a70f4ec6492c84cda4d9b97c4ff4ca4382652d18/diff:/var/lib/docker/overlay2/ed78a68129ef2911f51fec187ce342755fbdf43cc65cda4a8187f85f4abf2ed3/diff:/var/lib/docker/overlay2/3b76508ff3c68d01251e86227362153453ca1200ed62a957bad3f95e9f6796bc/diff:/var/lib/docker/overlay2/b94125060f809c0ba01ff2b821bad122639b39831450b48524ba9670c07
7435d/diff:/var/lib/docker/overlay2/3d9a5328ed41d1f6b5e0edbc31e6c6dc85b5c825cfabada68d92b4938baaa148/diff:/var/lib/docker/overlay2/e04ec81cde46f35183445cb5015442180971c58144b2d06c0b6e345295239d31/diff:/var/lib/docker/overlay2/6adee0ad26dbdacb24fa09a6d3279d53688f81e82fca8abc8bc7e12333d396f2/diff:/var/lib/docker/overlay2/4727242df701412869ea86d4e3fa740a93beb6a91a0241eccb8016a9c1f669f4/diff:/var/lib/docker/overlay2/5a6f9d928297707f2e4f50799d55108b50dc17b3ff4bf9767d5b83387031ab41/diff:/var/lib/docker/overlay2/49e21881f8909d54911a32e3e3388f82b35d1f55fd4cd433e4d71592bed404da/diff:/var/lib/docker/overlay2/d1e0c861f021d8d2f043c2985df14ecbf2e6a6d8f81142cdd20a7d6623199a9c/diff:/var/lib/docker/overlay2/76cc491fcea778ecde5aef07e921abcc860b2a5383790c60bd61524909f2c0a7/diff:/var/lib/docker/overlay2/fcf7ec32919b8a25210a433a5bee3ee5d95669c0720281959694ff3473aaaa20/diff:/var/lib/docker/overlay2/a9caa658e53d98c03c95997ba73d450db6a796056c2b9332db8e41cf76988367/diff:/var/lib/docker/overlay2/fbf35d040d2e1c80cf0d56ccbef5e68bd14540
9914f50746a1fa62ec348d6f53/diff",
"MergedDir": "/var/lib/docker/overlay2/3a51dda09cf65180ddb24b69b17dc285f4410a11db0b0ba7cb4aaec729d4c348/merged",
"UpperDir": "/var/lib/docker/overlay2/3a51dda09cf65180ddb24b69b17dc285f4410a11db0b0ba7cb4aaec729d4c348/diff",
"WorkDir": "/var/lib/docker/overlay2/3a51dda09cf65180ddb24b69b17dc285f4410a11db0b0ba7cb4aaec729d4c348/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "offline-containerd-20220512000808-1124136",
"Source": "/var/lib/docker/volumes/offline-containerd-20220512000808-1124136/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "offline-containerd-20220512000808-1124136",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "offline-containerd-20220512000808-1124136",
"name.minikube.sigs.k8s.io": "offline-containerd-20220512000808-1124136",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "fe7d4507330be2ee15c9a6d842d5b06c4430e4eaf146c2ac11ee5aa7c433fe9d",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "50156"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "50154"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "50151"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "50153"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "50152"
}
]
},
"SandboxKey": "/var/run/docker/netns/fe7d4507330b",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"offline-containerd-20220512000808-1124136": {
"IPAMConfig": {
"IPv4Address": "192.168.58.2"
},
"Links": null,
"Aliases": [
"1a4f162f5af5",
"offline-containerd-20220512000808-1124136"
],
"NetworkID": "4a4e1e40f8ea4cfa72881094b1d0d2895be7e56164b68ee642be90bb61cc87a9",
"EndpointID": "db375d1c1ebac250c371bfd5febd2618121476511c97c6b7f8d5f3b3b5c6919f",
"Gateway": "192.168.58.1",
"IPAddress": "192.168.58.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:3a:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p offline-containerd-20220512000808-1124136 -n offline-containerd-20220512000808-1124136
helpers_test.go:244: <<< TestOffline FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestOffline]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p offline-containerd-20220512000808-1124136 logs -n 25
helpers_test.go:252: TestOffline logs:
-- stdout --
*
* ==> Audit <==
* |---------|-------------------------------------------|-------------------------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-------------------------------------------|-------------------------------------------|---------|---------|---------------------|---------------------|
| start | -p | NoKubernetes-20220512000808-1124136 | jenkins | v1.25.2 | 12 May 22 00:12 UTC | 12 May 22 00:12 UTC |
| | NoKubernetes-20220512000808-1124136 | | | | | |
| | --no-kubernetes --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | NoKubernetes-20220512000808-1124136 | jenkins | v1.25.2 | 12 May 22 00:12 UTC | 12 May 22 00:12 UTC |
| | NoKubernetes-20220512000808-1124136 | | | | | |
| start | -p | missing-upgrade-20220512001037-1124136 | jenkins | v1.25.2 | 12 May 22 00:11 UTC | 12 May 22 00:12 UTC |
| | missing-upgrade-20220512001037-1124136 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p | NoKubernetes-20220512000808-1124136 | jenkins | v1.25.2 | 12 May 22 00:12 UTC | 12 May 22 00:12 UTC |
| | NoKubernetes-20220512000808-1124136 | | | | | |
| | --no-kubernetes --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | missing-upgrade-20220512001037-1124136 | jenkins | v1.25.2 | 12 May 22 00:12 UTC | 12 May 22 00:12 UTC |
| | missing-upgrade-20220512001037-1124136 | | | | | |
| profile | list | minikube | jenkins | v1.25.2 | 12 May 22 00:12 UTC | 12 May 22 00:12 UTC |
| profile | list --output=json | minikube | jenkins | v1.25.2 | 12 May 22 00:12 UTC | 12 May 22 00:12 UTC |
| stop | -p | NoKubernetes-20220512000808-1124136 | jenkins | v1.25.2 | 12 May 22 00:12 UTC | 12 May 22 00:12 UTC |
| | NoKubernetes-20220512000808-1124136 | | | | | |
| start | -p | NoKubernetes-20220512000808-1124136 | jenkins | v1.25.2 | 12 May 22 00:12 UTC | 12 May 22 00:12 UTC |
| | NoKubernetes-20220512000808-1124136 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | NoKubernetes-20220512000808-1124136 | jenkins | v1.25.2 | 12 May 22 00:12 UTC | 12 May 22 00:12 UTC |
| | NoKubernetes-20220512000808-1124136 | | | | | |
| start | -p | kubernetes-upgrade-20220512001243-1124136 | jenkins | v1.25.2 | 12 May 22 00:12 UTC | 12 May 22 00:13 UTC |
| | kubernetes-upgrade-20220512001243-1124136 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| | --alsologtostderr -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| stop | -p | kubernetes-upgrade-20220512001243-1124136 | jenkins | v1.25.2 | 12 May 22 00:13 UTC | 12 May 22 00:13 UTC |
| | kubernetes-upgrade-20220512001243-1124136 | | | | | |
| start | -p | running-upgrade-20220512001254-1124136 | jenkins | v1.25.2 | 12 May 22 00:13 UTC | 12 May 22 00:14 UTC |
| | running-upgrade-20220512001254-1124136 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | running-upgrade-20220512001254-1124136 | jenkins | v1.25.2 | 12 May 22 00:14 UTC | 12 May 22 00:14 UTC |
| | running-upgrade-20220512001254-1124136 | | | | | |
| start | -p | kubernetes-upgrade-20220512001243-1124136 | jenkins | v1.25.2 | 12 May 22 00:13 UTC | 12 May 22 00:14 UTC |
| | kubernetes-upgrade-20220512001243-1124136 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.23.6-rc.0 | | | | | |
| | --alsologtostderr -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p | kubernetes-upgrade-20220512001243-1124136 | jenkins | v1.25.2 | 12 May 22 00:14 UTC | 12 May 22 00:14 UTC |
| | kubernetes-upgrade-20220512001243-1124136 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.23.6-rc.0 | | | | | |
| | --alsologtostderr -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | kubernetes-upgrade-20220512001243-1124136 | jenkins | v1.25.2 | 12 May 22 00:14 UTC | 12 May 22 00:14 UTC |
| | kubernetes-upgrade-20220512001243-1124136 | | | | | |
| start | -p | force-systemd-flag-20220512001458-1124136 | jenkins | v1.25.2 | 12 May 22 00:14 UTC | 12 May 22 00:15 UTC |
| | force-systemd-flag-20220512001458-1124136 | | | | | |
| | --memory=2048 --force-systemd | | | | | |
| | --alsologtostderr -v=5 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | force-systemd-flag-20220512001458-1124136 | force-systemd-flag-20220512001458-1124136 | jenkins | v1.25.2 | 12 May 22 00:15 UTC | 12 May 22 00:15 UTC |
| | ssh cat /etc/containerd/config.toml | | | | | |
| delete | -p | force-systemd-flag-20220512001458-1124136 | jenkins | v1.25.2 | 12 May 22 00:15 UTC | 12 May 22 00:15 UTC |
| | force-systemd-flag-20220512001458-1124136 | | | | | |
| start | -p | cert-expiration-20220512000808-1124136 | jenkins | v1.25.2 | 12 May 22 00:15 UTC | 12 May 22 00:15 UTC |
| | cert-expiration-20220512000808-1124136 | | | | | |
| | --memory=2048 --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | cert-expiration-20220512000808-1124136 | jenkins | v1.25.2 | 12 May 22 00:15 UTC | 12 May 22 00:15 UTC |
| | cert-expiration-20220512000808-1124136 | | | | | |
| start | -p | force-systemd-env-20220512001534-1124136 | jenkins | v1.25.2 | 12 May 22 00:15 UTC | 12 May 22 00:16 UTC |
| | force-systemd-env-20220512001534-1124136 | | | | | |
| | --memory=2048 --alsologtostderr | | | | | |
| | -v=5 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | force-systemd-env-20220512001534-1124136 | force-systemd-env-20220512001534-1124136 | jenkins | v1.25.2 | 12 May 22 00:16 UTC | 12 May 22 00:16 UTC |
| | ssh cat /etc/containerd/config.toml | | | | | |
| delete | -p | force-systemd-env-20220512001534-1124136 | jenkins | v1.25.2 | 12 May 22 00:16 UTC | 12 May 22 00:16 UTC |
| | force-systemd-env-20220512001534-1124136 | | | | | |
|---------|-------------------------------------------|-------------------------------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/05/12 00:16:09
Running on machine: ubuntu-20-agent-10
Binary: Built with gc go1.18.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0512 00:16:09.514944 1309329 out.go:296] Setting OutFile to fd 1 ...
I0512 00:16:09.515148 1309329 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0512 00:16:09.515159 1309329 out.go:309] Setting ErrFile to fd 2...
I0512 00:16:09.515166 1309329 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0512 00:16:09.515284 1309329 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/bin
I0512 00:16:09.515581 1309329 out.go:303] Setting JSON to false
I0512 00:16:09.517035 1309329 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":46724,"bootTime":1652267846,"procs":525,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0512 00:16:09.517143 1309329 start.go:125] virtualization: kvm guest
I0512 00:16:09.520072 1309329 out.go:177] * [custom-weave-20220512000810-1124136] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
I0512 00:16:09.521658 1309329 notify.go:193] Checking for updates...
I0512 00:16:09.521673 1309329 out.go:177] - MINIKUBE_LOCATION=13639
I0512 00:16:09.523286 1309329 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0512 00:16:09.524896 1309329 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/kubeconfig
I0512 00:16:09.526447 1309329 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube
I0512 00:16:09.527864 1309329 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0512 00:16:09.529676 1309329 config.go:178] Loaded profile config "auto-20220512000808-1124136": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
I0512 00:16:09.529807 1309329 config.go:178] Loaded profile config "offline-containerd-20220512000808-1124136": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
I0512 00:16:09.529904 1309329 config.go:178] Loaded profile config "pause-20220512001407-1124136": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
I0512 00:16:09.529970 1309329 driver.go:358] Setting default libvirt URI to qemu:///system
I0512 00:16:09.571216 1309329 docker.go:137] docker version: linux-20.10.15
I0512 00:16:09.571330 1309329 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0512 00:16:09.678570 1309329 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:70 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-12 00:16:09.601705679 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clie
ntInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0512 00:16:09.678675 1309329 docker.go:254] overlay module found
I0512 00:16:09.680778 1309329 out.go:177] * Using the docker driver based on user configuration
I0512 00:16:09.682168 1309329 start.go:284] selected driver: docker
I0512 00:16:09.682187 1309329 start.go:801] validating driver "docker" against <nil>
I0512 00:16:09.682207 1309329 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0512 00:16:09.683157 1309329 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0512 00:16:09.787826 1309329 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:70 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:49 SystemTime:2022-05-12 00:16:09.712816633 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1025-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clie
ntInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0512 00:16:09.787955 1309329 start_flags.go:292] no existing cluster config was found, will generate one from the flags
I0512 00:16:09.788139 1309329 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0512 00:16:09.790282 1309329 out.go:177] * Using Docker driver with the root privilege
I0512 00:16:09.791726 1309329 cni.go:95] Creating CNI manager for "testdata/weavenet.yaml"
I0512 00:16:09.791765 1309329 start_flags.go:301] Found "testdata/weavenet.yaml" CNI - setting NetworkPlugin=cni
I0512 00:16:09.791778 1309329 start_flags.go:306] config:
{Name:custom-weave-20220512000810-1124136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:custom-weave-20220512000810-1124136 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDo
main:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0512 00:16:09.793482 1309329 out.go:177] * Starting control plane node custom-weave-20220512000810-1124136 in cluster custom-weave-20220512000810-1124136
I0512 00:16:09.794888 1309329 cache.go:120] Beginning downloading kic base image for docker with containerd
I0512 00:16:09.796433 1309329 out.go:177] * Pulling base image ...
I0512 00:16:09.797833 1309329 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
I0512 00:16:09.797871 1309329 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4
I0512 00:16:09.797886 1309329 cache.go:57] Caching tarball of preloaded images
I0512 00:16:09.797927 1309329 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
I0512 00:16:09.798111 1309329 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0512 00:16:09.798129 1309329 cache.go:60] Finished verifying existence of preloaded tar for v1.23.5 on containerd
I0512 00:16:09.798222 1309329 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/config.json ...
I0512 00:16:09.798243 1309329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/config.json: {Name:mk15638f22490820618773eb9e23a68faf3fd1f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 00:16:09.841118 1309329 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon, skipping pull
I0512 00:16:09.841155 1309329 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a exists in daemon, skipping load
I0512 00:16:09.841177 1309329 cache.go:206] Successfully downloaded all kic artifacts
I0512 00:16:09.841221 1309329 start.go:352] acquiring machines lock for custom-weave-20220512000810-1124136: {Name:mk0857a532ebe305ef50d147f2917156151d932d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0512 00:16:09.841477 1309329 start.go:356] acquired machines lock for "custom-weave-20220512000810-1124136" in 220.931µs
I0512 00:16:09.841546 1309329 start.go:91] Provisioning new machine with config: &{Name:custom-weave-20220512000810-1124136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:custom-weave-20220512000810-1124136
Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mo
untUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0512 00:16:09.841656 1309329 start.go:131] createHost starting for "" (driver="docker")
I0512 00:16:08.542560 1294214 node_ready.go:58] node "pause-20220512001407-1124136" has status "Ready":"False"
I0512 00:16:11.041555 1294214 node_ready.go:58] node "pause-20220512001407-1124136" has status "Ready":"False"
I0512 00:16:11.008876 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:16:13.508755 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:16:09.844074 1309329 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
I0512 00:16:09.844307 1309329 start.go:165] libmachine.API.Create for "custom-weave-20220512000810-1124136" (driver="docker")
I0512 00:16:09.844339 1309329 client.go:168] LocalClient.Create starting
I0512 00:16:09.844414 1309329 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem
I0512 00:16:09.844444 1309329 main.go:134] libmachine: Decoding PEM data...
I0512 00:16:09.844478 1309329 main.go:134] libmachine: Parsing certificate...
I0512 00:16:09.844554 1309329 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem
I0512 00:16:09.844572 1309329 main.go:134] libmachine: Decoding PEM data...
I0512 00:16:09.844585 1309329 main.go:134] libmachine: Parsing certificate...
I0512 00:16:09.844929 1309329 cli_runner.go:164] Run: docker network inspect custom-weave-20220512000810-1124136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0512 00:16:09.876039 1309329 cli_runner.go:211] docker network inspect custom-weave-20220512000810-1124136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0512 00:16:09.876113 1309329 network_create.go:272] running [docker network inspect custom-weave-20220512000810-1124136] to gather additional debugging logs...
I0512 00:16:09.876138 1309329 cli_runner.go:164] Run: docker network inspect custom-weave-20220512000810-1124136
W0512 00:16:09.908444 1309329 cli_runner.go:211] docker network inspect custom-weave-20220512000810-1124136 returned with exit code 1
I0512 00:16:09.908480 1309329 network_create.go:275] error running [docker network inspect custom-weave-20220512000810-1124136]: docker network inspect custom-weave-20220512000810-1124136: exit status 1
stdout:
[]
stderr:
Error: No such network: custom-weave-20220512000810-1124136
I0512 00:16:09.908502 1309329 network_create.go:277] output of [docker network inspect custom-weave-20220512000810-1124136]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: custom-weave-20220512000810-1124136
** /stderr **
I0512 00:16:09.908551 1309329 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0512 00:16:09.940817 1309329 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-155038d5bc22 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:93:7b:1e:12}}
I0512 00:16:09.941468 1309329 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-4a4e1e40f8ea IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:39:28:a8:75}}
I0512 00:16:09.942079 1309329 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-7443e3bc69e9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:46:75:1c:dd}}
I0512 00:16:09.942925 1309329 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.76.0:0xc000a5a410] misses:0}
I0512 00:16:09.942961 1309329 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0512 00:16:09.942973 1309329 network_create.go:115] attempt to create docker network custom-weave-20220512000810-1124136 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I0512 00:16:09.943018 1309329 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220512000810-1124136
I0512 00:16:10.013166 1309329 network_create.go:99] docker network custom-weave-20220512000810-1124136 192.168.76.0/24 created
I0512 00:16:10.013200 1309329 kic.go:106] calculated static IP "192.168.76.2" for the "custom-weave-20220512000810-1124136" container
I0512 00:16:10.013267 1309329 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0512 00:16:10.047302 1309329 cli_runner.go:164] Run: docker volume create custom-weave-20220512000810-1124136 --label name.minikube.sigs.k8s.io=custom-weave-20220512000810-1124136 --label created_by.minikube.sigs.k8s.io=true
I0512 00:16:10.080533 1309329 oci.go:103] Successfully created a docker volume custom-weave-20220512000810-1124136
I0512 00:16:10.080626 1309329 cli_runner.go:164] Run: docker run --rm --name custom-weave-20220512000810-1124136-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220512000810-1124136 --entrypoint /usr/bin/test -v custom-weave-20220512000810-1124136:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib
I0512 00:16:10.633205 1309329 oci.go:107] Successfully prepared a docker volume custom-weave-20220512000810-1124136
I0512 00:16:10.633276 1309329 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
I0512 00:16:10.633306 1309329 kic.go:179] Starting extracting preloaded images to volume ...
I0512 00:16:10.633414 1309329 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220512000810-1124136:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir
I0512 00:16:13.042450 1294214 node_ready.go:58] node "pause-20220512001407-1124136" has status "Ready":"False"
I0512 00:16:15.541712 1294214 node_ready.go:58] node "pause-20220512001407-1124136" has status "Ready":"False"
I0512 00:16:15.550771 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:16:18.008734 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:16:18.095465 1309329 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220512000810-1124136:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir: (7.461972884s)
I0512 00:16:18.095550 1309329 kic.go:188] duration metric: took 7.462241 seconds to extract preloaded images to volume
W0512 00:16:18.095721 1309329 cgroups_linux.go:88] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0512 00:16:18.095850 1309329 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0512 00:16:18.250792 1309329 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220512000810-1124136 --name custom-weave-20220512000810-1124136 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220512000810-1124136 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220512000810-1124136 --network custom-weave-20220512000810-1124136 --ip 192.168.76.2 --volume custom-weave-20220512000810-1124136:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a
I0512 00:16:18.690289 1309329 cli_runner.go:164] Run: docker container inspect custom-weave-20220512000810-1124136 --format={{.State.Running}}
I0512 00:16:18.728890 1309329 cli_runner.go:164] Run: docker container inspect custom-weave-20220512000810-1124136 --format={{.State.Status}}
I0512 00:16:18.762137 1309329 cli_runner.go:164] Run: docker exec custom-weave-20220512000810-1124136 stat /var/lib/dpkg/alternatives/iptables
I0512 00:16:18.831744 1309329 oci.go:247] the created container "custom-weave-20220512000810-1124136" has a running status.
I0512 00:16:18.831785 1309329 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/custom-weave-20220512000810-1124136/id_rsa...
I0512 00:16:18.968232 1309329 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/custom-weave-20220512000810-1124136/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0512 00:16:19.073699 1309329 cli_runner.go:164] Run: docker container inspect custom-weave-20220512000810-1124136 --format={{.State.Status}}
I0512 00:16:19.120818 1309329 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0512 00:16:19.120844 1309329 kic_runner.go:114] Args: [docker exec --privileged custom-weave-20220512000810-1124136 chown docker:docker /home/docker/.ssh/authorized_keys]
I0512 00:16:19.230279 1309329 cli_runner.go:164] Run: docker container inspect custom-weave-20220512000810-1124136 --format={{.State.Status}}
I0512 00:16:19.267200 1309329 machine.go:88] provisioning docker machine ...
I0512 00:16:19.267251 1309329 ubuntu.go:169] provisioning hostname "custom-weave-20220512000810-1124136"
I0512 00:16:19.267325 1309329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512000810-1124136
I0512 00:16:19.311047 1309329 main.go:134] libmachine: Using SSH client type: native
I0512 00:16:19.311302 1309329 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil> [] 0s} 127.0.0.1 50219 <nil> <nil>}
I0512 00:16:19.311323 1309329 main.go:134] libmachine: About to run SSH command:
sudo hostname custom-weave-20220512000810-1124136 && echo "custom-weave-20220512000810-1124136" | sudo tee /etc/hostname
I0512 00:16:19.429803 1309329 main.go:134] libmachine: SSH cmd err, output: <nil>: custom-weave-20220512000810-1124136
I0512 00:16:19.429883 1309329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512000810-1124136
I0512 00:16:19.464476 1309329 main.go:134] libmachine: Using SSH client type: native
I0512 00:16:19.464719 1309329 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da160] 0x7dd1c0 <nil> [] 0s} 127.0.0.1 50219 <nil> <nil>}
I0512 00:16:19.464752 1309329 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\scustom-weave-20220512000810-1124136' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-weave-20220512000810-1124136/g' /etc/hosts;
else
echo '127.0.1.1 custom-weave-20220512000810-1124136' | sudo tee -a /etc/hosts;
fi
fi
I0512 00:16:19.585580 1309329 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0512 00:16:19.585632 1309329 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3
050148/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube}
I0512 00:16:19.587350 1309329 ubuntu.go:177] setting up certificates
I0512 00:16:19.587376 1309329 provision.go:83] configureAuth start
I0512 00:16:19.587561 1309329 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220512000810-1124136
I0512 00:16:19.623074 1309329 provision.go:138] copyHostCerts
I0512 00:16:19.623142 1309329 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cert.pem, removing ...
I0512 00:16:19.623155 1309329 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cert.pem
I0512 00:16:19.623222 1309329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/cert.pem (1123 bytes)
I0512 00:16:19.623313 1309329 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/key.pem, removing ...
I0512 00:16:19.623327 1309329 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/key.pem
I0512 00:16:19.623350 1309329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/key.pem (1679 bytes)
I0512 00:16:19.623413 1309329 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.pem, removing ...
I0512 00:16:19.623422 1309329 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.pem
I0512 00:16:19.623442 1309329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.pem (1082 bytes)
I0512 00:16:19.623495 1309329 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca-key.pem org=jenkins.custom-weave-20220512000810-1124136 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube custom-weave-20220512000810-1124136]
I0512 00:16:19.852096 1309329 provision.go:172] copyRemoteCerts
I0512 00:16:19.852156 1309329 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0512 00:16:19.852190 1309329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512000810-1124136
I0512 00:16:19.891873 1309329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50219 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/custom-weave-20220512000810-1124136/id_rsa Username:docker}
I0512 00:16:19.977008 1309329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0512 00:16:19.998070 1309329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
I0512 00:16:20.017909 1309329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0512 00:16:20.035359 1309329 provision.go:86] duration metric: configureAuth took 447.849058ms
I0512 00:16:20.035387 1309329 ubuntu.go:193] setting minikube options for container-runtime
I0512 00:16:20.035552 1309329 config.go:178] Loaded profile config "custom-weave-20220512000810-1124136": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
I0512 00:16:20.035566 1309329 machine.go:91] provisioned docker machine in 768.338457ms
I0512 00:16:20.035573 1309329 client.go:171] LocalClient.Create took 10.191225753s
I0512 00:16:20.035600 1309329 start.go:173] duration metric: libmachine.API.Create for "custom-weave-20220512000810-1124136" took 10.191288225s
I0512 00:16:20.035619 1309329 start.go:306] post-start starting for "custom-weave-20220512000810-1124136" (driver="docker")
I0512 00:16:20.035627 1309329 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0512 00:16:20.035686 1309329 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0512 00:16:20.035734 1309329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512000810-1124136
I0512 00:16:20.073035 1309329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50219 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/custom-weave-20220512000810-1124136/id_rsa Username:docker}
I0512 00:16:20.157865 1309329 ssh_runner.go:195] Run: cat /etc/os-release
I0512 00:16:20.161024 1309329 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0512 00:16:20.161054 1309329 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0512 00:16:20.161077 1309329 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0512 00:16:20.161089 1309329 info.go:137] Remote host: Ubuntu 20.04.4 LTS
I0512 00:16:20.161107 1309329 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/addons for local assets ...
I0512 00:16:20.161162 1309329 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files for local assets ...
I0512 00:16:20.161245 1309329 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/11241362.pem -> 11241362.pem in /etc/ssl/certs
I0512 00:16:20.161347 1309329 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0512 00:16:20.168826 1309329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/11241362.pem --> /etc/ssl/certs/11241362.pem (1708 bytes)
I0512 00:16:20.186567 1309329 start.go:309] post-start completed in 150.930111ms
I0512 00:16:20.186911 1309329 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220512000810-1124136
I0512 00:16:20.219598 1309329 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/config.json ...
I0512 00:16:20.219865 1309329 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0512 00:16:20.219919 1309329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512000810-1124136
I0512 00:16:20.258801 1309329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50219 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/custom-weave-20220512000810-1124136/id_rsa Username:docker}
I0512 00:16:20.341496 1309329 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0512 00:16:20.345741 1309329 start.go:134] duration metric: createHost completed in 10.504069437s
I0512 00:16:20.345770 1309329 start.go:81] releasing machines lock for "custom-weave-20220512000810-1124136", held for 10.504251642s
I0512 00:16:20.345868 1309329 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220512000810-1124136
I0512 00:16:20.383894 1309329 ssh_runner.go:195] Run: systemctl --version
I0512 00:16:20.383945 1309329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512000810-1124136
I0512 00:16:20.383962 1309329 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0512 00:16:20.384030 1309329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512000810-1124136
I0512 00:16:20.417735 1309329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50219 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/custom-weave-20220512000810-1124136/id_rsa Username:docker}
I0512 00:16:20.420453 1309329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50219 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/machines/custom-weave-20220512000810-1124136/id_rsa Username:docker}
I0512 00:16:20.519042 1309329 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0512 00:16:20.529548 1309329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0512 00:16:20.538506 1309329 docker.go:187] disabling docker service ...
I0512 00:16:20.538572 1309329 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0512 00:16:20.555621 1309329 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0512 00:16:20.565325 1309329 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0512 00:16:20.668834 1309329 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0512 00:16:20.747777 1309329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0512 00:16:20.757489 1309329 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0512 00:16:20.770899 1309329 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZ
mFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ
2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0LmQiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
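The containerd config is shipped to the node as a base64 payload and decoded with `base64 -d` into /etc/containerd/config.toml. The same decoding can be reproduced locally to inspect what lands on the node; a minimal sketch using just the leading chunk of the payload above:

```shell
# Decode the first chunk of the base64 payload from the log line above.
# Piping the full payload through `base64 -d` yields the complete
# /etc/containerd/config.toml that minikube installs.
echo 'dmVyc2lvbiA9IDIK' | base64 -d
# prints: version = 2
```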
I0512 00:16:20.784524 1309329 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0512 00:16:20.791062 1309329 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0512 00:16:20.797600 1309329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0512 00:16:20.875121 1309329 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0512 00:16:20.941435 1309329 start.go:456] Will wait 60s for socket path /run/containerd/containerd.sock
I0512 00:16:20.941503 1309329 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0512 00:16:20.945200 1309329 start.go:477] Will wait 60s for crictl version
I0512 00:16:20.945257 1309329 ssh_runner.go:195] Run: sudo crictl version
I0512 00:16:20.972297 1309329 start.go:486] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.6.4
RuntimeApiVersion: v1alpha2
I0512 00:16:20.972366 1309329 ssh_runner.go:195] Run: containerd --version
I0512 00:16:21.000883 1309329 ssh_runner.go:195] Run: containerd --version
I0512 00:16:21.032939 1309329 out.go:177] * Preparing Kubernetes v1.23.5 on containerd 1.6.4 ...
I0512 00:16:18.041596 1294214 node_ready.go:58] node "pause-20220512001407-1124136" has status "Ready":"False"
I0512 00:16:20.541979 1294214 node_ready.go:58] node "pause-20220512001407-1124136" has status "Ready":"False"
I0512 00:16:20.508954 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:16:22.509063 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:16:24.431383 1305383 out.go:204] - Generating certificates and keys ...
I0512 00:16:24.434600 1305383 out.go:204] - Booting up control plane ...
I0512 00:16:24.437615 1305383 out.go:204] - Configuring RBAC rules ...
I0512 00:16:24.439823 1305383 cni.go:95] Creating CNI manager for ""
I0512 00:16:24.439841 1305383 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I0512 00:16:24.441539 1305383 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0512 00:16:21.034556 1309329 cli_runner.go:164] Run: docker network inspect custom-weave-20220512000810-1124136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0512 00:16:21.069188 1309329 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0512 00:16:21.072709 1309329 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
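The one-liner above updates /etc/hosts by filtering out any stale `host.minikube.internal` line and appending a fresh mapping, then copying the result back with sudo. The pattern can be exercised safely against a scratch file (the paths and entries below are illustrative, not taken from the test host):

```shell
# Reproduce minikube's grep-out-then-append /etc/hosts update on a scratch file.
hosts=$(mktemp)
tab=$(printf '\t')
printf '127.0.0.1 localhost\n10.0.0.5\thost.minikube.internal\n' > "$hosts"
# Drop any line ending in "<tab>host.minikube.internal", then append the new mapping.
{ grep -v "${tab}host.minikube.internal\$" "$hosts"; echo "192.168.76.1 host.minikube.internal"; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```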
I0512 00:16:21.082742 1309329 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
I0512 00:16:21.082816 1309329 ssh_runner.go:195] Run: sudo crictl images --output json
I0512 00:16:21.107072 1309329 containerd.go:607] all images are preloaded for containerd runtime.
I0512 00:16:21.107096 1309329 containerd.go:521] Images already preloaded, skipping extraction
I0512 00:16:21.107142 1309329 ssh_runner.go:195] Run: sudo crictl images --output json
I0512 00:16:21.131080 1309329 containerd.go:607] all images are preloaded for containerd runtime.
I0512 00:16:21.131108 1309329 cache_images.go:84] Images are preloaded, skipping loading
I0512 00:16:21.131181 1309329 ssh_runner.go:195] Run: sudo crictl info
I0512 00:16:21.155281 1309329 cni.go:95] Creating CNI manager for "testdata/weavenet.yaml"
I0512 00:16:21.155315 1309329 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0512 00:16:21.155329 1309329 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-weave-20220512000810-1124136 NodeName:custom-weave-20220512000810-1124136 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs
ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0512 00:16:21.155463 1309329 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.76.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "custom-weave-20220512000810-1124136"
  kubeletExtraArgs:
    node-ip: 192.168.76.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.23.5
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0512 00:16:21.155558 1309329 kubeadm.go:936] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=custom-weave-20220512000810-1124136 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.23.5 ClusterName:custom-weave-20220512000810-1124136 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:}
I0512 00:16:21.155604 1309329 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
I0512 00:16:21.162924 1309329 binaries.go:44] Found k8s binaries, skipping transfer
I0512 00:16:21.162985 1309329 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0512 00:16:21.170000 1309329 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (549 bytes)
I0512 00:16:21.183013 1309329 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0512 00:16:21.196137 1309329 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
I0512 00:16:21.208952 1309329 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0512 00:16:21.211759 1309329 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0512 00:16:21.220881 1309329 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136 for IP: 192.168.76.2
I0512 00:16:21.220992 1309329 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.key
I0512 00:16:21.221041 1309329 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/proxy-client-ca.key
I0512 00:16:21.221107 1309329 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/client.key
I0512 00:16:21.221123 1309329 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/client.crt with IP's: []
I0512 00:16:21.353020 1309329 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/client.crt ...
I0512 00:16:21.353058 1309329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/client.crt: {Name:mkddf2a8efed5ea136324934647e9be18aefaa6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 00:16:21.353277 1309329 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/client.key ...
I0512 00:16:21.353293 1309329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/client.key: {Name:mkbd816d3595672f94e8ede38e7e434e1d092171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 00:16:21.353411 1309329 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/apiserver.key.31bdca25
I0512 00:16:21.353436 1309329 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0512 00:16:21.743471 1309329 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/apiserver.crt.31bdca25 ...
I0512 00:16:21.743506 1309329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/apiserver.crt.31bdca25: {Name:mkdf3819855f21f8aa8c9f17c33e7f928954c40d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 00:16:21.743711 1309329 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/apiserver.key.31bdca25 ...
I0512 00:16:21.743728 1309329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/apiserver.key.31bdca25: {Name:mkcb28b7bbfb38d994fa760eb64d50997f375fcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 00:16:21.743813 1309329 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/apiserver.crt
I0512 00:16:21.743869 1309329 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/apiserver.key
I0512 00:16:21.743913 1309329 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/proxy-client.key
I0512 00:16:21.743928 1309329 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/proxy-client.crt with IP's: []
I0512 00:16:21.929103 1309329 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/proxy-client.crt ...
I0512 00:16:21.929146 1309329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/proxy-client.crt: {Name:mk5928a16a260a18e3067fae9e20a2e63c9cfe32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 00:16:21.929352 1309329 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/proxy-client.key ...
I0512 00:16:21.929366 1309329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/proxy-client.key: {Name:mk4bea8907073031bcf671531ee8fb695fb07a7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 00:16:21.929536 1309329 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/1124136.pem (1338 bytes)
W0512 00:16:21.929579 1309329 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/1124136_empty.pem, impossibly tiny 0 bytes
I0512 00:16:21.929594 1309329 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca-key.pem (1679 bytes)
I0512 00:16:21.929615 1309329 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/ca.pem (1082 bytes)
I0512 00:16:21.929639 1309329 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/cert.pem (1123 bytes)
I0512 00:16:21.929662 1309329 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/key.pem (1679 bytes)
I0512 00:16:21.929700 1309329 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/11241362.pem (1708 bytes)
I0512 00:16:21.930212 1309329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0512 00:16:21.948704 1309329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0512 00:16:21.965876 1309329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0512 00:16:21.982755 1309329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/profiles/custom-weave-20220512000810-1124136/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0512 00:16:22.000597 1309329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0512 00:16:22.017985 1309329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0512 00:16:22.034933 1309329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0512 00:16:22.051803 1309329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0512 00:16:22.068771 1309329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/files/etc/ssl/certs/11241362.pem --> /usr/share/ca-certificates/11241362.pem (1708 bytes)
I0512 00:16:22.085921 1309329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0512 00:16:22.104626 1309329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13639-1120768-60328d4d40a11ac7c18c6243f597bcfbb3050148/.minikube/certs/1124136.pem --> /usr/share/ca-certificates/1124136.pem (1338 bytes)
I0512 00:16:22.121893 1309329 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0512 00:16:22.134411 1309329 ssh_runner.go:195] Run: openssl version
I0512 00:16:22.139433 1309329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11241362.pem && ln -fs /usr/share/ca-certificates/11241362.pem /etc/ssl/certs/11241362.pem"
I0512 00:16:22.146798 1309329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11241362.pem
I0512 00:16:22.149867 1309329 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 11 22:59 /usr/share/ca-certificates/11241362.pem
I0512 00:16:22.149915 1309329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11241362.pem
I0512 00:16:22.154786 1309329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11241362.pem /etc/ssl/certs/3ec20f2e.0"
I0512 00:16:22.161996 1309329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0512 00:16:22.169181 1309329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0512 00:16:22.172120 1309329 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 11 22:53 /usr/share/ca-certificates/minikubeCA.pem
I0512 00:16:22.172168 1309329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0512 00:16:22.177085 1309329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0512 00:16:22.184814 1309329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1124136.pem && ln -fs /usr/share/ca-certificates/1124136.pem /etc/ssl/certs/1124136.pem"
I0512 00:16:22.192023 1309329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1124136.pem
I0512 00:16:22.195021 1309329 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 11 22:59 /usr/share/ca-certificates/1124136.pem
I0512 00:16:22.195068 1309329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1124136.pem
I0512 00:16:22.199730 1309329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1124136.pem /etc/ssl/certs/51391683.0"
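The symlink names above (`3ec20f2e.0`, `b5213941.0`, `51391683.0`) come from `openssl x509 -hash`: the subject-name hash that `c_rehash` uses so OpenSSL can look up a CA certificate in /etc/ssl/certs by hash. The scheme can be reproduced with a throwaway self-signed certificate (all paths below are illustrative):

```shell
# Create a throwaway self-signed cert and derive its c_rehash-style link name.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.pem 2>/dev/null
hash=$(openssl x509 -hash -noout -in /tmp/demo.pem)
# minikube would link this cert at /etc/ssl/certs/$hash.0
echo "$hash.0"
```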
I0512 00:16:22.206834 1309329 kubeadm.go:391] StartCluster: {Name:custom-weave-20220512000810-1124136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:custom-weave-20220512000810-1124136 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:doc
ker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0512 00:16:22.206932 1309329 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0512 00:16:22.206988 1309329 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0512 00:16:22.230766 1309329 cri.go:87] found id: ""
I0512 00:16:22.230825 1309329 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0512 00:16:22.237862 1309329 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0512 00:16:22.244824 1309329 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0512 00:16:22.244872 1309329 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0512 00:16:22.251526 1309329 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0512 00:16:22.251579 1309329 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0512 00:16:24.442969 1305383 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0512 00:16:24.446989 1305383 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
I0512 00:16:24.447009 1305383 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I0512 00:16:24.460532 1305383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0512 00:16:25.321005 1305383 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0512 00:16:25.321103 1305383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:16:25.321109 1305383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=50a7977b568d2ad3e04003527a57f4502d6177a0 minikube.k8s.io/name=auto-20220512000808-1124136 minikube.k8s.io/updated_at=2022_05_12T00_16_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:16:25.328349 1305383 ops.go:34] apiserver oom_adj: -16
I0512 00:16:25.384352 1305383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:16:25.963176 1305383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:16:26.462742 1305383 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0512 00:16:23.041139 1294214 node_ready.go:58] node "pause-20220512001407-1124136" has status "Ready":"False"
I0512 00:16:25.042306 1294214 node_ready.go:58] node "pause-20220512001407-1124136" has status "Ready":"False"
I0512 00:16:27.541792 1294214 node_ready.go:58] node "pause-20220512001407-1124136" has status "Ready":"False"
I0512 00:16:25.008458 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:16:27.509358 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:16:29.509718 1250292 node_ready.go:58] node "offline-containerd-20220512000808-1124136" has status "Ready":"False"
I0512 00:16:29.512134 1250292 node_ready.go:38] duration metric: took 4m0.014310089s waiting for node "offline-containerd-20220512000808-1124136" to be "Ready" ...
I0512 00:16:29.513976 1250292 out.go:177]
W0512 00:16:29.515454 1250292 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
W0512 00:16:29.515471 1250292 out.go:239] *
W0512 00:16:29.516253 1250292 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0512 00:16:29.518102 1250292 out.go:177]
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
4b74dc8af645b 6de166512aa22 About a minute ago Running kindnet-cni 1 ab881f1c1abe8
d22d9bcaed974 6de166512aa22 4 minutes ago Exited kindnet-cni 0 ab881f1c1abe8
7efdf6c650932 3c53fa8541f95 4 minutes ago Running kube-proxy 0 c4dc4b7d122b7
5b6bed88e2bee 884d49d6d8c9f 4 minutes ago Running kube-scheduler 0 a6026e96956d1
45123cd385c23 25f8c7f3da61c 4 minutes ago Running etcd 0 1f9086d555aae
0b1b04aeee2ab b0c9e5e4dbb14 4 minutes ago Running kube-controller-manager 0 6242f8dec50c3
a7113050f8667 3fc1d62d65872 4 minutes ago Running kube-apiserver 0 79a4d53d0d5a9
*
* ==> containerd <==
* -- Logs begin at Thu 2022-05-12 00:12:00 UTC, end at Thu 2022-05-12 00:16:30 UTC. --
May 12 00:12:29 offline-containerd-20220512000808-1124136 containerd[547]: time="2022-05-12T00:12:29.690011819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 12 00:12:29 offline-containerd-20220512000808-1124136 containerd[547]: time="2022-05-12T00:12:29.690027156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 12 00:12:29 offline-containerd-20220512000808-1124136 containerd[547]: time="2022-05-12T00:12:29.690254446Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab881f1c1abe8fc5cc88b27fc12a68144dad02234827a3b074b33f1e2a26511f pid=1984 runtime=io.containerd.runc.v2
May 12 00:12:29 offline-containerd-20220512000808-1124136 containerd[547]: time="2022-05-12T00:12:29.691373605Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 12 00:12:29 offline-containerd-20220512000808-1124136 containerd[547]: time="2022-05-12T00:12:29.691452904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 12 00:12:29 offline-containerd-20220512000808-1124136 containerd[547]: time="2022-05-12T00:12:29.691467508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 12 00:12:29 offline-containerd-20220512000808-1124136 containerd[547]: time="2022-05-12T00:12:29.691841081Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c4dc4b7d122b7abc36332f5f9f6c50bd007d4350a92666d0216d2ad6a4db5d0d pid=1987 runtime=io.containerd.runc.v2
May 12 00:12:29 offline-containerd-20220512000808-1124136 containerd[547]: time="2022-05-12T00:12:29.970239394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d5ptt,Uid:503577be-d652-447e-ba59-2fb6866d4f31,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4dc4b7d122b7abc36332f5f9f6c50bd007d4350a92666d0216d2ad6a4db5d0d\""
May 12 00:12:29 offline-containerd-20220512000808-1124136 containerd[547]: time="2022-05-12T00:12:29.973276784Z" level=info msg="CreateContainer within sandbox \"c4dc4b7d122b7abc36332f5f9f6c50bd007d4350a92666d0216d2ad6a4db5d0d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 12 00:12:30 offline-containerd-20220512000808-1124136 containerd[547]: time="2022-05-12T00:12:30.076526707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-hhkwb,Uid:e1f1e291-2c25-4edf-8930-e7ae075ce9ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab881f1c1abe8fc5cc88b27fc12a68144dad02234827a3b074b33f1e2a26511f\""
May 12 00:12:30 offline-containerd-20220512000808-1124136 containerd[547]: time="2022-05-12T00:12:30.130241303Z" level=info msg="CreateContainer within sandbox \"ab881f1c1abe8fc5cc88b27fc12a68144dad02234827a3b074b33f1e2a26511f\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
May 12 00:12:30 offline-containerd-20220512000808-1124136 containerd[547]: time="2022-05-12T00:12:30.175981996Z" level=info msg="CreateContainer within sandbox \"c4dc4b7d122b7abc36332f5f9f6c50bd007d4350a92666d0216d2ad6a4db5d0d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7efdf6c650932d26b4bf026aa522117041725cb57b4330208c3d9d3124637dc1\""
May 12 00:12:30 offline-containerd-20220512000808-1124136 containerd[547]: time="2022-05-12T00:12:30.176704482Z" level=info msg="StartContainer for \"7efdf6c650932d26b4bf026aa522117041725cb57b4330208c3d9d3124637dc1\""
May 12 00:12:30 offline-containerd-20220512000808-1124136 containerd[547]: time="2022-05-12T00:12:30.192921663Z" level=info msg="CreateContainer within sandbox \"ab881f1c1abe8fc5cc88b27fc12a68144dad02234827a3b074b33f1e2a26511f\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"d22d9bcaed9748b7f7748d6d4943de819ef4f6e96d7e879e3f59079d6f3e0deb\""
May 12 00:12:30 offline-containerd-20220512000808-1124136 containerd[547]: time="2022-05-12T00:12:30.193586360Z" level=info msg="StartContainer for \"d22d9bcaed9748b7f7748d6d4943de819ef4f6e96d7e879e3f59079d6f3e0deb\""
May 12 00:12:30 offline-containerd-20220512000808-1124136 containerd[547]: time="2022-05-12T00:12:30.388192688Z" level=info msg="StartContainer for \"7efdf6c650932d26b4bf026aa522117041725cb57b4330208c3d9d3124637dc1\" returns successfully"
May 12 00:12:30 offline-containerd-20220512000808-1124136 containerd[547]: time="2022-05-12T00:12:30.477774763Z" level=info msg="StartContainer for \"d22d9bcaed9748b7f7748d6d4943de819ef4f6e96d7e879e3f59079d6f3e0deb\" returns successfully"
May 12 00:15:10 offline-containerd-20220512000808-1124136 containerd[547]: time="2022-05-12T00:15:10.795479378Z" level=info msg="shim disconnected" id=d22d9bcaed9748b7f7748d6d4943de819ef4f6e96d7e879e3f59079d6f3e0deb
May 12 00:15:10 offline-containerd-20220512000808-1124136 containerd[547]: time="2022-05-12T00:15:10.795549602Z" level=warning msg="cleaning up after shim disconnected" id=d22d9bcaed9748b7f7748d6d4943de819ef4f6e96d7e879e3f59079d6f3e0deb namespace=k8s.io
May 12 00:15:10 offline-containerd-20220512000808-1124136 containerd[547]: time="2022-05-12T00:15:10.795563295Z" level=info msg="cleaning up dead shim"
May 12 00:15:10 offline-containerd-20220512000808-1124136 containerd[547]: time="2022-05-12T00:15:10.805835656Z" level=warning msg="cleanup warnings time=\"2022-05-12T00:15:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2344 runtime=io.containerd.runc.v2\n"
May 12 00:15:11 offline-containerd-20220512000808-1124136 containerd[547]: time="2022-05-12T00:15:11.533308575Z" level=info msg="CreateContainer within sandbox \"ab881f1c1abe8fc5cc88b27fc12a68144dad02234827a3b074b33f1e2a26511f\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
May 12 00:15:11 offline-containerd-20220512000808-1124136 containerd[547]: time="2022-05-12T00:15:11.549047058Z" level=info msg="CreateContainer within sandbox \"ab881f1c1abe8fc5cc88b27fc12a68144dad02234827a3b074b33f1e2a26511f\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"4b74dc8af645b4c875227cc64558f1dff7472da4ddd47e1af7809f71f629835d\""
May 12 00:15:11 offline-containerd-20220512000808-1124136 containerd[547]: time="2022-05-12T00:15:11.549680015Z" level=info msg="StartContainer for \"4b74dc8af645b4c875227cc64558f1dff7472da4ddd47e1af7809f71f629835d\""
May 12 00:15:11 offline-containerd-20220512000808-1124136 containerd[547]: time="2022-05-12T00:15:11.764438325Z" level=info msg="StartContainer for \"4b74dc8af645b4c875227cc64558f1dff7472da4ddd47e1af7809f71f629835d\" returns successfully"
*
* ==> describe nodes <==
* Name: offline-containerd-20220512000808-1124136
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=offline-containerd-20220512000808-1124136
kubernetes.io/os=linux
minikube.k8s.io/commit=50a7977b568d2ad3e04003527a57f4502d6177a0
minikube.k8s.io/name=offline-containerd-20220512000808-1124136
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_05_12T00_12_18_0700
minikube.k8s.io/version=v1.25.2
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 12 May 2022 00:12:13 +0000
Taints: node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: offline-containerd-20220512000808-1124136
AcquireTime: <unset>
RenewTime: Thu, 12 May 2022 00:16:22 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 12 May 2022 00:12:27 +0000 Thu, 12 May 2022 00:12:10 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 12 May 2022 00:12:27 +0000 Thu, 12 May 2022 00:12:10 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 12 May 2022 00:12:27 +0000 Thu, 12 May 2022 00:12:10 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Thu, 12 May 2022 00:12:27 +0000 Thu, 12 May 2022 00:12:10 +0000 KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Addresses:
InternalIP: 192.168.58.2
Hostname: offline-containerd-20220512000808-1124136
Capacity:
cpu: 8
ephemeral-storage: 304695084Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32873824Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304695084Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32873824Ki
pods: 110
System Info:
Machine ID: 8556a0a9a0e64ba4b825f672d2dce0b9
System UUID: 4ef0964e-27b6-4d61-b4ba-1c4b396aa6ab
Boot ID: 50677ed4-c8e1-4f2e-8134-15cc440b63b9
Kernel Version: 5.13.0-1025-gcp
OS Image: Ubuntu 20.04.4 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.6.4
Kubelet Version: v1.23.5
Kube-Proxy Version: v1.23.5
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system etcd-offline-containerd-20220512000808-1124136 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 4m15s
kube-system kindnet-hhkwb 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 4m1s
kube-system kube-apiserver-offline-containerd-20220512000808-1124136 250m (3%) 0 (0%) 0 (0%) 0 (0%) 4m13s
kube-system kube-controller-manager-offline-containerd-20220512000808-1124136 200m (2%) 0 (0%) 0 (0%) 0 (0%) 4m14s
kube-system kube-proxy-d5ptt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m1s
kube-system kube-scheduler-offline-containerd-20220512000808-1124136 100m (1%) 0 (0%) 0 (0%) 0 (0%) 4m13s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (9%) 100m (1%)
memory 150Mi (0%) 50Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m kube-proxy
Normal NodeHasSufficientMemory 4m21s (x4 over 4m21s) kubelet Node offline-containerd-20220512000808-1124136 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m21s (x4 over 4m21s) kubelet Node offline-containerd-20220512000808-1124136 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m21s (x4 over 4m21s) kubelet Node offline-containerd-20220512000808-1124136 status is now: NodeHasSufficientPID
Normal Starting 4m14s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 4m14s kubelet Node offline-containerd-20220512000808-1124136 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m14s kubelet Node offline-containerd-20220512000808-1124136 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m14s kubelet Node offline-containerd-20220512000808-1124136 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m13s kubelet Updated Node Allocatable limit across pods
*
* ==> dmesg <==
* [ +0.000007] ll header: 00000000: 02 42 4a 29 4f 2a 02 42 c0 a8 31 02 08 00
[ +5.003921] IPv4: martian source 10.244.0.26 from 10.244.1.2, on dev br-4d9e51725c99
[ +0.000008] ll header: 00000000: 02 42 4a 29 4f 2a 02 42 c0 a8 31 02 08 00
[ +5.004553] IPv4: martian source 10.244.0.26 from 10.244.1.2, on dev br-4d9e51725c99
[ +0.000005] ll header: 00000000: 02 42 4a 29 4f 2a 02 42 c0 a8 31 02 08 00
[ +5.001546] IPv4: martian source 10.244.0.26 from 10.244.1.2, on dev br-4d9e51725c99
[ +0.000006] ll header: 00000000: 02 42 4a 29 4f 2a 02 42 c0 a8 31 02 08 00
[May11 23:59] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth950ca561
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 4c 56 6c aa eb 08 06
[ +0.330965] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth2f5d6aad
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff c6 5a ea 42 89 db 08 06
[ +0.215543] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth950ca561
[May12 00:00] IPv4: martian source 10.244.1.2 from 10.244.1.2, on dev veth6f43bf2f
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 57 34 94 c3 de 08 06
[May12 00:02] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth0437a76d
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 55 ab ad 31 e5 08 06
[ +0.472536] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev veth0437a76d
[May12 00:03] IPv4: martian source 10.244.1.2 from 10.244.1.2, on dev vethaa8bfa22
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff ba bb 41 5a 7a c3 08 06
[May12 00:05] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethb4a3839a
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff c6 46 6f ac a2 57 08 06
[May12 00:06] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth301656b7
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff 4e cd e2 14 6e 07 08 06
[May12 00:12] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth7d7a3e6c
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff ca a5 51 09 6e 8a 08 06
*
* ==> etcd [45123cd385c23a9f9b04a41a979028cfe824f42c2c58acf88ec5fdf5982b353c] <==
* {"level":"info","ts":"2022-05-12T00:13:47.329Z","caller":"traceutil/trace.go:171","msg":"trace[1488214672] linearizableReadLoop","detail":"{readStateIndex:517; appliedIndex:517; }","duration":"189.996371ms","start":"2022-05-12T00:13:47.139Z","end":"2022-05-12T00:13:47.329Z","steps":["trace[1488214672] 'read index received' (duration: 189.988264ms)","trace[1488214672] 'applied index is now lower than readState.Index' (duration: 6.665µs)"],"step_count":2}
{"level":"warn","ts":"2022-05-12T00:13:47.516Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"376.785704ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/storage-provisioner.16ee3325b3d034a7\" ","response":"range_response_count:1 size:724"}
{"level":"info","ts":"2022-05-12T00:13:47.516Z","caller":"traceutil/trace.go:171","msg":"trace[1880169309] range","detail":"{range_begin:/registry/events/kube-system/storage-provisioner.16ee3325b3d034a7; range_end:; response_count:1; response_revision:486; }","duration":"376.877344ms","start":"2022-05-12T00:13:47.139Z","end":"2022-05-12T00:13:47.516Z","steps":["trace[1880169309] 'agreement among raft nodes before linearized reading' (duration: 190.130148ms)","trace[1880169309] 'range keys from in-memory index tree' (duration: 186.6152ms)"],"step_count":2}
{"level":"warn","ts":"2022-05-12T00:13:47.516Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T00:13:47.139Z","time spent":"376.93302ms","remote":"127.0.0.1:46224","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":747,"request content":"key:\"/registry/events/kube-system/storage-provisioner.16ee3325b3d034a7\" "}
{"level":"warn","ts":"2022-05-12T00:13:50.950Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"443.515607ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/offline-containerd-20220512000808-1124136\" ","response":"range_response_count:1 size:4803"}
{"level":"info","ts":"2022-05-12T00:13:50.950Z","caller":"traceutil/trace.go:171","msg":"trace[2084653185] range","detail":"{range_begin:/registry/minions/offline-containerd-20220512000808-1124136; range_end:; response_count:1; response_revision:488; }","duration":"443.597559ms","start":"2022-05-12T00:13:50.506Z","end":"2022-05-12T00:13:50.950Z","steps":["trace[2084653185] 'range keys from in-memory index tree' (duration: 443.375521ms)"],"step_count":1}
{"level":"warn","ts":"2022-05-12T00:13:50.950Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T00:13:50.506Z","time spent":"443.650925ms","remote":"127.0.0.1:46254","response type":"/etcdserverpb.KV/Range","request count":0,"request size":61,"response count":1,"response size":4826,"request content":"key:\"/registry/minions/offline-containerd-20220512000808-1124136\" "}
{"level":"warn","ts":"2022-05-12T00:13:50.950Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"341.505394ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-05-12T00:13:50.950Z","caller":"traceutil/trace.go:171","msg":"trace[543126382] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:488; }","duration":"341.720204ms","start":"2022-05-12T00:13:50.608Z","end":"2022-05-12T00:13:50.950Z","steps":["trace[543126382] 'range keys from in-memory index tree' (duration: 341.411827ms)"],"step_count":1}
{"level":"warn","ts":"2022-05-12T00:13:50.950Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T00:13:50.608Z","time spent":"341.777251ms","remote":"127.0.0.1:46380","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
{"level":"info","ts":"2022-05-12T00:13:55.241Z","caller":"traceutil/trace.go:171","msg":"trace[1326468987] linearizableReadLoop","detail":"{readStateIndex:521; appliedIndex:521; }","duration":"234.857089ms","start":"2022-05-12T00:13:55.006Z","end":"2022-05-12T00:13:55.241Z","steps":["trace[1326468987] 'read index received' (duration: 234.828795ms)","trace[1326468987] 'applied index is now lower than readState.Index' (duration: 26.569µs)"],"step_count":2}
{"level":"warn","ts":"2022-05-12T00:13:55.246Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"239.926022ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/offline-containerd-20220512000808-1124136\" ","response":"range_response_count:1 size:4803"}
{"level":"info","ts":"2022-05-12T00:13:55.246Z","caller":"traceutil/trace.go:171","msg":"trace[178278448] range","detail":"{range_begin:/registry/minions/offline-containerd-20220512000808-1124136; range_end:; response_count:1; response_revision:488; }","duration":"240.02284ms","start":"2022-05-12T00:13:55.006Z","end":"2022-05-12T00:13:55.246Z","steps":["trace[178278448] 'agreement among raft nodes before linearized reading' (duration: 235.002548ms)"],"step_count":1}
{"level":"warn","ts":"2022-05-12T00:13:55.246Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"131.201303ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
{"level":"info","ts":"2022-05-12T00:13:55.246Z","caller":"traceutil/trace.go:171","msg":"trace[1846269069] range","detail":"{range_begin:/registry/storageclasses/; range_end:/registry/storageclasses0; response_count:0; response_revision:489; }","duration":"131.272886ms","start":"2022-05-12T00:13:55.115Z","end":"2022-05-12T00:13:55.246Z","steps":["trace[1846269069] 'agreement among raft nodes before linearized reading' (duration: 131.150004ms)"],"step_count":1}
{"level":"info","ts":"2022-05-12T00:13:55.246Z","caller":"traceutil/trace.go:171","msg":"trace[857979696] transaction","detail":"{read_only:false; response_revision:489; number_of_response:1; }","duration":"239.964736ms","start":"2022-05-12T00:13:55.006Z","end":"2022-05-12T00:13:55.246Z","steps":["trace[857979696] 'process raft request' (duration: 234.688035ms)"],"step_count":1}
{"level":"warn","ts":"2022-05-12T00:14:05.220Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"190.272012ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"}
{"level":"info","ts":"2022-05-12T00:14:05.220Z","caller":"traceutil/trace.go:171","msg":"trace[1604171128] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:491; }","duration":"190.350099ms","start":"2022-05-12T00:14:05.029Z","end":"2022-05-12T00:14:05.220Z","steps":["trace[1604171128] 'agreement among raft nodes before linearized reading' (duration: 85.455426ms)","trace[1604171128] 'range keys from in-memory index tree' (duration: 104.776594ms)"],"step_count":2}
{"level":"warn","ts":"2022-05-12T00:14:15.197Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"191.013728ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/offline-containerd-20220512000808-1124136\" ","response":"range_response_count:1 size:4803"}
{"level":"info","ts":"2022-05-12T00:14:15.197Z","caller":"traceutil/trace.go:171","msg":"trace[1652948088] range","detail":"{range_begin:/registry/minions/offline-containerd-20220512000808-1124136; range_end:; response_count:1; response_revision:492; }","duration":"191.125863ms","start":"2022-05-12T00:14:15.006Z","end":"2022-05-12T00:14:15.197Z","steps":["trace[1652948088] 'agreement among raft nodes before linearized reading' (duration: 90.109929ms)","trace[1652948088] 'range keys from in-memory index tree' (duration: 100.861242ms)"],"step_count":2}
{"level":"warn","ts":"2022-05-12T00:14:15.198Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"100.950923ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3238511124547919272 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.58.2\" mod_revision:491 > success:<request_put:<key:\"/registry/masterleases/192.168.58.2\" value_size:67 lease:3238511124547919270 >> failure:<request_range:<key:\"/registry/masterleases/192.168.58.2\" > >>","response":"size:16"}
{"level":"info","ts":"2022-05-12T00:14:15.198Z","caller":"traceutil/trace.go:171","msg":"trace[243072154] transaction","detail":"{read_only:false; response_revision:493; number_of_response:1; }","duration":"191.115929ms","start":"2022-05-12T00:14:15.007Z","end":"2022-05-12T00:14:15.198Z","steps":["trace[243072154] 'process raft request' (duration: 89.869881ms)","trace[243072154] 'compare' (duration: 100.827512ms)"],"step_count":2}
{"level":"info","ts":"2022-05-12T00:15:05.134Z","caller":"traceutil/trace.go:171","msg":"trace[1891914128] transaction","detail":"{read_only:false; response_revision:505; number_of_response:1; }","duration":"125.736148ms","start":"2022-05-12T00:15:05.009Z","end":"2022-05-12T00:15:05.134Z","steps":["trace[1891914128] 'process raft request' (duration: 114.22327ms)","trace[1891914128] 'compare' (duration: 11.385241ms)"],"step_count":2}
{"level":"warn","ts":"2022-05-12T00:16:15.170Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"100.52099ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3238511124547919940 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.58.2\" mod_revision:526 > success:<request_put:<key:\"/registry/masterleases/192.168.58.2\" value_size:67 lease:3238511124547919938 >> failure:<request_range:<key:\"/registry/masterleases/192.168.58.2\" > >>","response":"size:16"}
{"level":"info","ts":"2022-05-12T00:16:15.170Z","caller":"traceutil/trace.go:171","msg":"trace[844094697] transaction","detail":"{read_only:false; response_revision:528; number_of_response:1; }","duration":"159.503012ms","start":"2022-05-12T00:16:15.011Z","end":"2022-05-12T00:16:15.170Z","steps":["trace[844094697] 'process raft request' (duration: 58.817092ms)","trace[844094697] 'compare' (duration: 100.41538ms)"],"step_count":2}
*
* ==> kernel <==
* 00:16:30 up 12:59, 0 users, load average: 4.56, 3.39, 1.93
Linux offline-containerd-20220512000808-1124136 5.13.0-1025-gcp #30~20.04.1-Ubuntu SMP Tue Apr 26 03:01:25 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.4 LTS"
*
* ==> kube-apiserver [a7113050f8667db5680cf5b5f51ff4f3096d4f8abb5ce02e91bbba7dab7dba7d] <==
* I0512 00:12:13.597170 1 shared_informer.go:247] Caches are synced for node_authorizer
I0512 00:12:13.659770 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0512 00:12:13.659820 1 apf_controller.go:322] Running API Priority and Fairness config worker
I0512 00:12:13.660162 1 shared_informer.go:247] Caches are synced for crd-autoregister
I0512 00:12:13.660320 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0512 00:12:13.660327 1 cache.go:39] Caches are synced for autoregister controller
I0512 00:12:14.435529 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0512 00:12:14.435560 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0512 00:12:14.444075 1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
I0512 00:12:14.447832 1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
I0512 00:12:14.447853 1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
I0512 00:12:14.842024 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0512 00:12:14.877754 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0512 00:12:15.006519 1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
W0512 00:12:15.015059 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
I0512 00:12:15.016517 1 controller.go:611] quota admission added evaluator for: endpoints
I0512 00:12:15.020711 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0512 00:12:15.586997 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0512 00:12:16.721259 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0512 00:12:16.729390 1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
I0512 00:12:16.739524 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0512 00:12:16.920899 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0512 00:12:28.996135 1 controller.go:611] quota admission added evaluator for: replicasets.apps
I0512 00:12:29.296158 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
I0512 00:12:30.457642 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
*
* ==> kube-controller-manager [0b1b04aeee2abec4f871e2cb1de03023cb288c9caaf43f762793cfc91a151852] <==
* I0512 00:12:28.506930 1 range_allocator.go:173] Starting range CIDR allocator
I0512 00:12:28.506939 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
I0512 00:12:28.506952 1 shared_informer.go:247] Caches are synced for cidrallocator
I0512 00:12:28.518431 1 range_allocator.go:374] Set node offline-containerd-20220512000808-1124136 PodCIDR to [10.244.0.0/24]
I0512 00:12:28.518800 1 shared_informer.go:247] Caches are synced for GC
I0512 00:12:28.559695 1 shared_informer.go:247] Caches are synced for endpoint_slice
I0512 00:12:28.638468 1 shared_informer.go:247] Caches are synced for PVC protection
I0512 00:12:28.642858 1 shared_informer.go:247] Caches are synced for cronjob
I0512 00:12:28.648162 1 shared_informer.go:247] Caches are synced for resource quota
I0512 00:12:28.664014 1 shared_informer.go:247] Caches are synced for attach detach
I0512 00:12:28.668319 1 shared_informer.go:247] Caches are synced for resource quota
I0512 00:12:28.686048 1 shared_informer.go:247] Caches are synced for ephemeral
I0512 00:12:28.686081 1 shared_informer.go:247] Caches are synced for expand
I0512 00:12:28.687207 1 shared_informer.go:247] Caches are synced for stateful set
I0512 00:12:28.693978 1 shared_informer.go:247] Caches are synced for persistent volume
I0512 00:12:28.998201 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
I0512 00:12:29.010394 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
I0512 00:12:29.067669 1 shared_informer.go:247] Caches are synced for garbage collector
I0512 00:12:29.127261 1 shared_informer.go:247] Caches are synced for garbage collector
I0512 00:12:29.127303 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0512 00:12:29.303040 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-d5ptt"
I0512 00:12:29.305725 1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hhkwb"
I0512 00:12:29.454914 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-pjx5r"
I0512 00:12:29.461905 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-gdp8x"
I0512 00:12:29.488105 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-pjx5r"
*
* ==> kube-proxy [7efdf6c650932d26b4bf026aa522117041725cb57b4330208c3d9d3124637dc1] <==
* I0512 00:12:30.431806 1 node.go:163] Successfully retrieved node IP: 192.168.58.2
I0512 00:12:30.431863 1 server_others.go:138] "Detected node IP" address="192.168.58.2"
I0512 00:12:30.431900 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0512 00:12:30.454697 1 server_others.go:206] "Using iptables Proxier"
I0512 00:12:30.454734 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0512 00:12:30.454743 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0512 00:12:30.454759 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0512 00:12:30.455214 1 server.go:656] "Version info" version="v1.23.5"
I0512 00:12:30.455910 1 config.go:226] "Starting endpoint slice config controller"
I0512 00:12:30.455935 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0512 00:12:30.455936 1 config.go:317] "Starting service config controller"
I0512 00:12:30.455954 1 shared_informer.go:240] Waiting for caches to sync for service config
I0512 00:12:30.556869 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0512 00:12:30.556884 1 shared_informer.go:247] Caches are synced for service config
*
* ==> kube-scheduler [5b6bed88e2bee60722597c40378e2aed5439eb0d1e8dd77e590b73b885db63a1] <==
* W0512 00:12:13.583886 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0512 00:12:13.584991 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0512 00:12:13.583915 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0512 00:12:13.585026 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0512 00:12:13.584220 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0512 00:12:13.585036 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0512 00:12:13.585066 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0512 00:12:13.585040 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0512 00:12:13.584441 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0512 00:12:13.585120 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0512 00:12:13.584591 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0512 00:12:13.585156 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0512 00:12:13.584659 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0512 00:12:13.585178 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0512 00:12:13.583757 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0512 00:12:13.585196 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0512 00:12:13.587036 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0512 00:12:13.587233 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0512 00:12:14.475714 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0512 00:12:14.475785 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0512 00:12:14.475714 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0512 00:12:14.475825 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0512 00:12:14.562661 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0512 00:12:14.562705 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
I0512 00:12:14.976523 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Logs begin at Thu 2022-05-12 00:12:00 UTC, end at Thu 2022-05-12 00:16:30 UTC. --
May 12 00:14:32 offline-containerd-20220512000808-1124136 kubelet[1523]: E0512 00:14:32.340301 1523 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 12 00:14:37 offline-containerd-20220512000808-1124136 kubelet[1523]: E0512 00:14:37.341250 1523 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 12 00:14:42 offline-containerd-20220512000808-1124136 kubelet[1523]: E0512 00:14:42.342879 1523 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 12 00:14:47 offline-containerd-20220512000808-1124136 kubelet[1523]: E0512 00:14:47.344223 1523 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 12 00:14:52 offline-containerd-20220512000808-1124136 kubelet[1523]: E0512 00:14:52.344938 1523 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 12 00:14:57 offline-containerd-20220512000808-1124136 kubelet[1523]: E0512 00:14:57.346632 1523 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 12 00:15:02 offline-containerd-20220512000808-1124136 kubelet[1523]: E0512 00:15:02.347897 1523 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 12 00:15:07 offline-containerd-20220512000808-1124136 kubelet[1523]: E0512 00:15:07.349096 1523 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 12 00:15:11 offline-containerd-20220512000808-1124136 kubelet[1523]: I0512 00:15:11.531139 1523 scope.go:110] "RemoveContainer" containerID="d22d9bcaed9748b7f7748d6d4943de819ef4f6e96d7e879e3f59079d6f3e0deb"
May 12 00:15:12 offline-containerd-20220512000808-1124136 kubelet[1523]: E0512 00:15:12.350365 1523 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 12 00:15:17 offline-containerd-20220512000808-1124136 kubelet[1523]: E0512 00:15:17.351712 1523 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 12 00:15:22 offline-containerd-20220512000808-1124136 kubelet[1523]: E0512 00:15:22.353565 1523 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 12 00:15:27 offline-containerd-20220512000808-1124136 kubelet[1523]: E0512 00:15:27.354907 1523 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 12 00:15:32 offline-containerd-20220512000808-1124136 kubelet[1523]: E0512 00:15:32.355671 1523 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 12 00:15:37 offline-containerd-20220512000808-1124136 kubelet[1523]: E0512 00:15:37.356387 1523 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 12 00:15:42 offline-containerd-20220512000808-1124136 kubelet[1523]: E0512 00:15:42.358140 1523 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 12 00:15:47 offline-containerd-20220512000808-1124136 kubelet[1523]: E0512 00:15:47.359725 1523 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 12 00:15:52 offline-containerd-20220512000808-1124136 kubelet[1523]: E0512 00:15:52.361318 1523 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 12 00:15:57 offline-containerd-20220512000808-1124136 kubelet[1523]: E0512 00:15:57.362600 1523 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 12 00:16:02 offline-containerd-20220512000808-1124136 kubelet[1523]: E0512 00:16:02.363671 1523 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 12 00:16:07 offline-containerd-20220512000808-1124136 kubelet[1523]: E0512 00:16:07.365448 1523 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 12 00:16:12 offline-containerd-20220512000808-1124136 kubelet[1523]: E0512 00:16:12.366189 1523 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 12 00:16:17 offline-containerd-20220512000808-1124136 kubelet[1523]: E0512 00:16:17.366908 1523 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 12 00:16:22 offline-containerd-20220512000808-1124136 kubelet[1523]: E0512 00:16:22.368419 1523 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 12 00:16:27 offline-containerd-20220512000808-1124136 kubelet[1523]: E0512 00:16:27.369380 1523 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p offline-containerd-20220512000808-1124136 -n offline-containerd-20220512000808-1124136
helpers_test.go:261: (dbg) Run: kubectl --context offline-containerd-20220512000808-1124136 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-gdp8x storage-provisioner
helpers_test.go:272: ======> post-mortem[TestOffline]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context offline-containerd-20220512000808-1124136 describe pod coredns-64897985d-gdp8x storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context offline-containerd-20220512000808-1124136 describe pod coredns-64897985d-gdp8x storage-provisioner: exit status 1 (77.339482ms)
** stderr **
Error from server (NotFound): pods "coredns-64897985d-gdp8x" not found
Error from server (NotFound): pods "storage-provisioner" not found
** /stderr **
helpers_test.go:277: kubectl --context offline-containerd-20220512000808-1124136 describe pod coredns-64897985d-gdp8x storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "offline-containerd-20220512000808-1124136" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-linux-amd64 delete -p offline-containerd-20220512000808-1124136
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-20220512000808-1124136: (2.486847499s)
--- FAIL: TestOffline (505.04s)