=== RUN TestPreload
preload_test.go:44: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-170735 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4
E1107 17:07:54.188048 51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/addons-164543/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-170735 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4: (51.205208309s)
preload_test.go:57: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-170735 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:67: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-170735 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.6
E1107 17:09:17.236907 51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/addons-164543/client.crt: no such file or directory
E1107 17:09:22.808336 51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/ingress-addon-legacy-165256/client.crt: no such file or directory
E1107 17:12:04.641419 51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory
E1107 17:12:54.187718 51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/addons-164543/client.crt: no such file or directory
E1107 17:13:27.687553 51176 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/functional-165015/client.crt: no such file or directory
preload_test.go:67: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p test-preload-170735 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.6: exit status 81 (5m4.785054882s)
-- stdout --
* [test-preload-170735] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=15310
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/15310-44720/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-44720/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
* Using the docker driver based on existing profile
* Starting control plane node test-preload-170735 in cluster test-preload-170735
* Pulling base image ...
* Downloading Kubernetes v1.24.6 preload ...
* Updating the running docker "test-preload-170735" container ...
* Preparing Kubernetes v1.24.6 on containerd 1.6.9 ...
* Configuring CNI (Container Networking Interface) ...
X Problems detected in kubelet:
Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.926231 4309 projected.go:192] Error preparing data for projected volume kube-api-access-l9w87 for pod kube-system/kindnet-fh9w9: failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.926837 4309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eca84e65-57b5-4cc9-b42a-0f991c91ffe7-kube-api-access-l9w87 podName:eca84e65-57b5-4cc9-b42a-0f991c91ffe7 nodeName:}" failed. No retries permitted until 2022-11-07 17:09:15.926808887 +0000 UTC m=+10.477882581 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l9w87" (UniqueName: "kubernetes.io/projected/eca84e65-57b5-4cc9-b42a-0f991c91ffe7-kube-api-access-l9w87") pod "kindnet-fh9w9" (UID: "eca84e65-57b5-4cc9-b42a-0f991c91ffe7") : failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
Nov 07 17:09:13 test-preload-170735 kubelet[4309]: W1107 17:09:13.927004 4309 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
-- /stdout --
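The kubelet errors in the stdout block above are the failure signature: after the restart onto v1.24.6, the API server's node authorizer reports "no relationship found between node 'test-preload-170735' and this object", so the kubelet is denied service-account tokens and configmaps for pods it is supposed to run. One way to inspect the node–pod relationship the node authorizer relies on is to list the pods the API server believes are bound to that node. A minimal client-go sketch (the kubeconfig path and node name are taken from this log; the snippet is illustrative, not part of the test):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as reported in the log above; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/15310-44720/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The node authorizer only lets a kubelet create tokens for pods the API
	// server shows as scheduled to that node, so an empty result here would
	// be consistent with the "no relationship found" denials above.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=test-preload-170735",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name)
	}
}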
** stderr **
I1107 17:08:27.904911 165743 out.go:296] Setting OutFile to fd 1 ...
I1107 17:08:27.905045 165743 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 17:08:27.905060 165743 out.go:309] Setting ErrFile to fd 2...
I1107 17:08:27.905068 165743 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 17:08:27.905197 165743 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15310-44720/.minikube/bin
I1107 17:08:27.905863 165743 out.go:303] Setting JSON to false
I1107 17:08:27.907218 165743 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10261,"bootTime":1667830647,"procs":524,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1107 17:08:27.907299 165743 start.go:126] virtualization: kvm guest
I1107 17:08:27.910260 165743 out.go:177] * [test-preload-170735] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
I1107 17:08:27.912717 165743 out.go:177] - MINIKUBE_LOCATION=15310
I1107 17:08:27.912644 165743 notify.go:220] Checking for updates...
I1107 17:08:27.914611 165743 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1107 17:08:27.916178 165743 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15310-44720/kubeconfig
I1107 17:08:27.917748 165743 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-44720/.minikube
I1107 17:08:27.919131 165743 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I1107 17:08:27.921065 165743 config.go:180] Loaded profile config "test-preload-170735": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I1107 17:08:27.923047 165743 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
I1107 17:08:27.924546 165743 driver.go:365] Setting default libvirt URI to qemu:///system
I1107 17:08:27.952793 165743 docker.go:137] docker version: linux-20.10.21
I1107 17:08:27.952897 165743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1107 17:08:28.051499 165743 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:39 SystemTime:2022-11-07 17:08:27.973134397 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1107 17:08:28.051613 165743 docker.go:254] overlay module found
I1107 17:08:28.054907 165743 out.go:177] * Using the docker driver based on existing profile
I1107 17:08:28.056422 165743 start.go:282] selected driver: docker
I1107 17:08:28.056442 165743 start.go:808] validating driver "docker" against &{Name:test-preload-170735 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-170735 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1107 17:08:28.056553 165743 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1107 17:08:28.057351 165743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1107 17:08:28.151882 165743 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:39 SystemTime:2022-11-07 17:08:28.076276154 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1107 17:08:28.152201 165743 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1107 17:08:28.152232 165743 cni.go:95] Creating CNI manager for ""
I1107 17:08:28.152241 165743 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I1107 17:08:28.152260 165743 start_flags.go:317] config:
{Name:test-preload-170735 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-170735 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1107 17:08:28.155619 165743 out.go:177] * Starting control plane node test-preload-170735 in cluster test-preload-170735
I1107 17:08:28.156954 165743 cache.go:120] Beginning downloading kic base image for docker with containerd
I1107 17:08:28.158499 165743 out.go:177] * Pulling base image ...
I1107 17:08:28.159890 165743 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I1107 17:08:28.159983 165743 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
I1107 17:08:28.181208 165743 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
I1107 17:08:28.181243 165743 cache.go:57] Caching tarball of preloaded images
I1107 17:08:28.181535 165743 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I1107 17:08:28.183696 165743 out.go:177] * Downloading Kubernetes v1.24.6 preload ...
I1107 17:08:28.182675 165743 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
I1107 17:08:28.183727 165743 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
I1107 17:08:28.185282 165743 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I1107 17:08:28.211318 165743 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4?checksum=md5:0de094b674a9198bc47721c3b23603d5 -> /home/jenkins/minikube-integration/15310-44720/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
I1107 17:08:32.100806 165743 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I1107 17:08:32.100913 165743 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15310-44720/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I1107 17:08:33.024863 165743 cache.go:60] Finished verifying existence of preloaded tar for v1.24.6 on containerd
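The download above carries its expected digest in the URL's checksum query parameter (checksum=md5:...), and the "getting checksum"/"verifying checksum" lines show the tarball being validated before use. A minimal sketch of that verification step (function and paths are illustrative; the digest is the one from the download line above):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 streams the tarball through an MD5 hash and compares the result
// with the hex digest carried in the download URL's checksum parameter.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	err := verifyMD5("/home/jenkins/minikube-integration/15310-44720/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4",
		"0de094b674a9198bc47721c3b23603d5")
	fmt.Println(err)
}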
I1107 17:08:33.025006 165743 profile.go:148] Saving config to /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/config.json ...
I1107 17:08:33.025200 165743 cache.go:208] Successfully downloaded all kic artifacts
I1107 17:08:33.025245 165743 start.go:364] acquiring machines lock for test-preload-170735: {Name:mkeed53a7896dfd155258ca3d33f2ba7f27b6e3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1107 17:08:33.025355 165743 start.go:368] acquired machines lock for "test-preload-170735" in 83.257µs
I1107 17:08:33.025378 165743 start.go:96] Skipping create...Using existing machine configuration
I1107 17:08:33.025389 165743 fix.go:55] fixHost starting:
I1107 17:08:33.025604 165743 cli_runner.go:164] Run: docker container inspect test-preload-170735 --format={{.State.Status}}
I1107 17:08:33.047785 165743 fix.go:103] recreateIfNeeded on test-preload-170735: state=Running err=<nil>
W1107 17:08:33.047814 165743 fix.go:129] unexpected machine state, will restart: <nil>
I1107 17:08:33.051368 165743 out.go:177] * Updating the running docker "test-preload-170735" container ...
I1107 17:08:33.053014 165743 machine.go:88] provisioning docker machine ...
I1107 17:08:33.053055 165743 ubuntu.go:169] provisioning hostname "test-preload-170735"
I1107 17:08:33.053104 165743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-170735
I1107 17:08:33.073975 165743 main.go:134] libmachine: Using SSH client type: native
I1107 17:08:33.074165 165743 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49277 <nil> <nil>}
I1107 17:08:33.074183 165743 main.go:134] libmachine: About to run SSH command:
sudo hostname test-preload-170735 && echo "test-preload-170735" | sudo tee /etc/hostname
I1107 17:08:33.197853 165743 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-170735
I1107 17:08:33.197933 165743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-170735
I1107 17:08:33.220254 165743 main.go:134] libmachine: Using SSH client type: native
I1107 17:08:33.220408 165743 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49277 <nil> <nil>}
I1107 17:08:33.220428 165743 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\stest-preload-170735' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-170735/g' /etc/hosts;
else
echo '127.0.1.1 test-preload-170735' | sudo tee -a /etc/hosts;
fi
fi
I1107 17:08:33.333808 165743 main.go:134] libmachine: SSH cmd err, output: <nil>:
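The provisioner sets the hostname over SSH and then patches /etc/hosts with the script shown above: rewrite an existing 127.0.1.1 entry if one exists, otherwise append one. A sketch of how that script could be templated per profile name (the helper is hypothetical; the shell body is taken verbatim from the log):

package main

import "fmt"

// hostsFixup renders the /etc/hosts patch run over SSH above.
func hostsFixup(name string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
  fi
fi`, name)
}

func main() {
	fmt.Println(hostsFixup("test-preload-170735"))
}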
I1107 17:08:33.333842 165743 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15310-44720/.minikube CaCertPath:/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15310-44720/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15310-44720/.minikube}
I1107 17:08:33.333861 165743 ubuntu.go:177] setting up certificates
I1107 17:08:33.333869 165743 provision.go:83] configureAuth start
I1107 17:08:33.333914 165743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-170735
I1107 17:08:33.355318 165743 provision.go:138] copyHostCerts
I1107 17:08:33.355367 165743 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-44720/.minikube/ca.pem, removing ...
I1107 17:08:33.355376 165743 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-44720/.minikube/ca.pem
I1107 17:08:33.355441 165743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15310-44720/.minikube/ca.pem (1082 bytes)
I1107 17:08:33.355534 165743 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-44720/.minikube/cert.pem, removing ...
I1107 17:08:33.355545 165743 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-44720/.minikube/cert.pem
I1107 17:08:33.355581 165743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15310-44720/.minikube/cert.pem (1123 bytes)
I1107 17:08:33.355641 165743 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-44720/.minikube/key.pem, removing ...
I1107 17:08:33.355651 165743 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-44720/.minikube/key.pem
I1107 17:08:33.355689 165743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15310-44720/.minikube/key.pem (1679 bytes)
I1107 17:08:33.355768 165743 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15310-44720/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca-key.pem org=jenkins.test-preload-170735 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-170735]
I1107 17:08:33.436719 165743 provision.go:172] copyRemoteCerts
I1107 17:08:33.436773 165743 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1107 17:08:33.436826 165743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-170735
I1107 17:08:33.458416 165743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/test-preload-170735/id_rsa Username:docker}
I1107 17:08:33.541280 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1107 17:08:33.558205 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I1107 17:08:33.574372 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1107 17:08:33.590572 165743 provision.go:86] duration metric: configureAuth took 256.685343ms
I1107 17:08:33.590604 165743 ubuntu.go:193] setting minikube options for container-runtime
I1107 17:08:33.590765 165743 config.go:180] Loaded profile config "test-preload-170735": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.6
I1107 17:08:33.590782 165743 machine.go:91] provisioned docker machine in 537.75012ms
I1107 17:08:33.590791 165743 start.go:300] post-start starting for "test-preload-170735" (driver="docker")
I1107 17:08:33.590802 165743 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1107 17:08:33.590840 165743 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1107 17:08:33.590874 165743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-170735
I1107 17:08:33.613972 165743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/test-preload-170735/id_rsa Username:docker}
I1107 17:08:33.697134 165743 ssh_runner.go:195] Run: cat /etc/os-release
I1107 17:08:33.699654 165743 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1107 17:08:33.699688 165743 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1107 17:08:33.699706 165743 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1107 17:08:33.699715 165743 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I1107 17:08:33.699735 165743 filesync.go:126] Scanning /home/jenkins/minikube-integration/15310-44720/.minikube/addons for local assets ...
I1107 17:08:33.699785 165743 filesync.go:126] Scanning /home/jenkins/minikube-integration/15310-44720/.minikube/files for local assets ...
I1107 17:08:33.699859 165743 filesync.go:149] local asset: /home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/511762.pem -> 511762.pem in /etc/ssl/certs
I1107 17:08:33.699972 165743 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1107 17:08:33.706647 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/511762.pem --> /etc/ssl/certs/511762.pem (1708 bytes)
I1107 17:08:33.723587 165743 start.go:303] post-start completed in 132.77869ms
I1107 17:08:33.723655 165743 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1107 17:08:33.723701 165743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-170735
I1107 17:08:33.745091 165743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/test-preload-170735/id_rsa Username:docker}
I1107 17:08:33.826766 165743 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1107 17:08:33.830752 165743 fix.go:57] fixHost completed within 805.356487ms
I1107 17:08:33.830779 165743 start.go:83] releasing machines lock for "test-preload-170735", held for 805.406949ms
I1107 17:08:33.830865 165743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-170735
I1107 17:08:33.851188 165743 ssh_runner.go:195] Run: systemctl --version
I1107 17:08:33.851233 165743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-170735
I1107 17:08:33.851246 165743 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I1107 17:08:33.851299 165743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-170735
I1107 17:08:33.874050 165743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/test-preload-170735/id_rsa Username:docker}
I1107 17:08:33.874539 165743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/test-preload-170735/id_rsa Username:docker}
I1107 17:08:33.970640 165743 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1107 17:08:33.980208 165743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1107 17:08:33.989283 165743 docker.go:189] disabling docker service ...
I1107 17:08:33.989328 165743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1107 17:08:33.998251 165743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1107 17:08:34.006544 165743 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1107 17:08:34.105872 165743 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1107 17:08:34.199735 165743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1107 17:08:34.208838 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1107 17:08:34.221138 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
I1107 17:08:34.228758 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
I1107 17:08:34.237433 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
I1107 17:08:34.245113 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
I1107 17:08:34.252514 165743 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1107 17:08:34.258488 165743 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1107 17:08:34.264983 165743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1107 17:08:34.355600 165743 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1107 17:08:34.426498 165743 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
I1107 17:08:34.426584 165743 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1107 17:08:34.431077 165743 start.go:472] Will wait 60s for crictl version
I1107 17:08:34.431141 165743 ssh_runner.go:195] Run: sudo crictl version
I1107 17:08:34.463332 165743 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
stdout:
stderr:
time="2022-11-07T17:08:34Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
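The failed crictl probe above is retried because start.go waits up to 60s for the CRI server to initialize after the containerd restart. A generic sketch of such a poll loop (names are illustrative; minikube's retry.go uses backoff, while a fixed interval keeps the sketch short):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retryUntil polls fn until it succeeds or the deadline passes, sleeping a
// fixed interval between attempts.
func retryUntil(timeout, interval time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	var lastErr error
	for time.Now().Before(deadline) {
		if lastErr = fn(); lastErr == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out after %v: %w", timeout, lastErr)
}

func main() {
	err := retryUntil(60*time.Second, 11*time.Second, func() error {
		return exec.Command("sudo", "crictl", "version").Run()
	})
	fmt.Println(err)
}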
I1107 17:08:45.511931 165743 ssh_runner.go:195] Run: sudo crictl version
I1107 17:08:45.534402 165743 start.go:481] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.6.9
RuntimeApiVersion: v1alpha2
I1107 17:08:45.534456 165743 ssh_runner.go:195] Run: containerd --version
I1107 17:08:45.557129 165743 ssh_runner.go:195] Run: containerd --version
I1107 17:08:45.581034 165743 out.go:177] * Preparing Kubernetes v1.24.6 on containerd 1.6.9 ...
I1107 17:08:45.583252 165743 cli_runner.go:164] Run: docker network inspect test-preload-170735 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1107 17:08:45.604171 165743 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
I1107 17:08:45.607584 165743 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I1107 17:08:45.607660 165743 ssh_runner.go:195] Run: sudo crictl images --output json
I1107 17:08:45.629696 165743 containerd.go:549] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.6". assuming images are not preloaded.
I1107 17:08:45.629765 165743 ssh_runner.go:195] Run: which lz4
I1107 17:08:45.632520 165743 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1107 17:08:45.635397 165743 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/preloaded.tar.lz4': No such file or directory
I1107 17:08:45.635419 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458739102 bytes)
I1107 17:08:46.608662 165743 containerd.go:496] Took 0.976169 seconds to copy over tarball
I1107 17:08:46.608757 165743 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I1107 17:08:49.268239 165743 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.659458437s)
I1107 17:08:49.268269 165743 containerd.go:503] Took 2.659548 seconds to extract the tarball
I1107 17:08:49.268278 165743 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1107 17:08:49.290385 165743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1107 17:08:49.394503 165743 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1107 17:08:49.483535 165743 ssh_runner.go:195] Run: sudo crictl images --output json
I1107 17:08:49.508155 165743 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.24.6 k8s.gcr.io/kube-controller-manager:v1.24.6 k8s.gcr.io/kube-scheduler:v1.24.6 k8s.gcr.io/kube-proxy:v1.24.6 k8s.gcr.io/pause:3.7 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
I1107 17:08:49.508249 165743 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I1107 17:08:49.508261 165743 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.24.6
I1107 17:08:49.508303 165743 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.3-0
I1107 17:08:49.508328 165743 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.24.6
I1107 17:08:49.508333 165743 image.go:134] retrieving image: k8s.gcr.io/pause:3.7
I1107 17:08:49.508363 165743 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.24.6
I1107 17:08:49.508413 165743 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.24.6
I1107 17:08:49.508304 165743 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
I1107 17:08:49.509646 165743 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.24.6: Error: No such image: k8s.gcr.io/kube-scheduler:v1.24.6
I1107 17:08:49.509674 165743 image.go:177] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.6: Error: No such image: k8s.gcr.io/coredns/coredns:v1.8.6
I1107 17:08:49.509722 165743 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.24.6: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.24.6
I1107 17:08:49.509649 165743 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I1107 17:08:49.509638 165743 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.24.6: Error: No such image: k8s.gcr.io/kube-apiserver:v1.24.6
I1107 17:08:49.509650 165743 image.go:177] daemon lookup for k8s.gcr.io/pause:3.7: Error: No such image: k8s.gcr.io/pause:3.7
I1107 17:08:49.509774 165743 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.5.3-0: Error: No such image: k8s.gcr.io/etcd:3.5.3-0
I1107 17:08:49.509643 165743 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.24.6: Error: No such image: k8s.gcr.io/kube-proxy:v1.24.6
I1107 17:08:49.721200 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.7"
I1107 17:08:49.721693 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.3-0"
I1107 17:08:49.738860 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
I1107 17:08:49.739213 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.6"
I1107 17:08:49.747795 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
I1107 17:08:49.758483 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.6"
I1107 17:08:49.761130 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.6"
I1107 17:08:49.977049 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.6"
I1107 17:08:50.610195 165743 cache_images.go:116] "k8s.gcr.io/pause:3.7" needs transfer: "k8s.gcr.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
I1107 17:08:50.610249 165743 cri.go:216] Removing image: k8s.gcr.io/pause:3.7
I1107 17:08:50.610292 165743 ssh_runner.go:195] Run: which crictl
I1107 17:08:50.614352 165743 cache_images.go:116] "k8s.gcr.io/etcd:3.5.3-0" needs transfer: "k8s.gcr.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
I1107 17:08:50.614406 165743 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.3-0
I1107 17:08:50.614453 165743 ssh_runner.go:195] Run: which crictl
I1107 17:08:50.705332 165743 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
I1107 17:08:50.705390 165743 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
I1107 17:08:50.705338 165743 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.24.6" needs transfer: "k8s.gcr.io/kube-proxy:v1.24.6" does not exist at hash "0bb39497ab33bb5f8aaff88ced53a5fcd360fcf5da647609619d4f5c8f1483d2" in container runtime
I1107 17:08:50.705434 165743 ssh_runner.go:195] Run: which crictl
I1107 17:08:50.705452 165743 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.24.6
I1107 17:08:50.705619 165743 ssh_runner.go:195] Run: which crictl
I1107 17:08:50.717541 165743 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
I1107 17:08:50.717591 165743 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I1107 17:08:50.717638 165743 ssh_runner.go:195] Run: which crictl
I1107 17:08:50.719439 165743 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.24.6" needs transfer: "k8s.gcr.io/kube-apiserver:v1.24.6" does not exist at hash "860f263331c9513ddab44d4d8a9a4a7304313b3aa0776decc1d7fc6acdd69ab0" in container runtime
I1107 17:08:50.719499 165743 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.24.6
I1107 17:08:50.719544 165743 ssh_runner.go:195] Run: which crictl
I1107 17:08:50.719689 165743 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.24.6" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.24.6" does not exist at hash "c6c20157a42337ecb7675be59e1dc34bc5a91288c7eeac1e30ec97767a9055eb" in container runtime
I1107 17:08:50.719723 165743 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.24.6
I1107 17:08:50.719758 165743 ssh_runner.go:195] Run: which crictl
I1107 17:08:50.814270 165743 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.24.6" needs transfer: "k8s.gcr.io/kube-scheduler:v1.24.6" does not exist at hash "c786c777a4e1c21907e77042428837645fa382d3bd14925cf78f0d163d6d332e" in container runtime
I1107 17:08:50.814353 165743 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.24.6
I1107 17:08:50.814361 165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7
I1107 17:08:50.814382 165743 ssh_runner.go:195] Run: which crictl
I1107 17:08:50.814394 165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
I1107 17:08:50.814410 165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6
I1107 17:08:50.814414 165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1107 17:08:50.814427 165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.6
I1107 17:08:50.814384 165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0
I1107 17:08:50.814449 165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.6
I1107 17:08:52.582624 165743 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6: (1.768192619s)
I1107 17:08:52.582662 165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
I1107 17:08:52.582681 165743 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0: (1.768236997s)
I1107 17:08:52.582691 165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0
I1107 17:08:52.582637 165743 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.768194557s)
I1107 17:08:52.582747 165743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
I1107 17:08:52.582772 165743 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6: (1.768339669s)
I1107 17:08:52.582798 165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.6
I1107 17:08:52.582748 165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
I1107 17:08:52.582749 165743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
I1107 17:08:52.582829 165743 ssh_runner.go:235] Completed: which crictl: (1.768411501s)
I1107 17:08:52.582855 165743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
I1107 17:08:52.582878 165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.6
I1107 17:08:52.585359 165743 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.6: (1.770910623s)
I1107 17:08:52.585380 165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.6
I1107 17:08:52.585416 165743 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7: (1.771036539s)
I1107 17:08:52.585438 165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7
I1107 17:08:52.585502 165743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
I1107 17:08:52.585583 165743 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.6: (1.771118502s)
I1107 17:08:52.585599 165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.6
I1107 17:08:52.587242 165743 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
I1107 17:08:52.587261 165743 containerd.go:233] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
I1107 17:08:52.587294 165743 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
I1107 17:08:52.676919 165743 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
I1107 17:08:52.677014 165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.6
I1107 17:08:52.677049 165743 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
I1107 17:08:52.677110 165743 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
I1107 17:09:00.039059 165743 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0: (7.451733367s)
I1107 17:09:00.039096 165743 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 from cache
I1107 17:09:00.039139 165743 containerd.go:233] Loading image: /var/lib/minikube/images/coredns_v1.8.6
I1107 17:09:00.039203 165743 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
I1107 17:09:01.824108 165743 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6: (1.784848281s)
I1107 17:09:01.824150 165743 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache
I1107 17:09:01.824181 165743 containerd.go:233] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I1107 17:09:01.824223 165743 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
I1107 17:09:02.321028 165743 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I1107 17:09:02.321067 165743 containerd.go:233] Loading image: /var/lib/minikube/images/pause_3.7
I1107 17:09:02.321122 165743 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7
I1107 17:09:02.521066 165743 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 from cache
I1107 17:09:02.521129 165743 cache_images.go:92] LoadImages completed in 13.012944956s
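Each image in the LoadImages cycle above follows the same pattern: remove the stale tag with crictl, then stream the cached tarball into containerd's k8s.io namespace with ctr, exactly the commands logged. A condensed sketch of one iteration (function name and error handling are illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage mirrors the per-image cycle in the log: drop the stale tag,
// then import the cached tarball into containerd's k8s.io namespace.
func loadCachedImage(tag, tarball string) error {
	// Ignore the error here: the tag may already be absent.
	_ = exec.Command("sudo", "crictl", "rmi", tag).Run()
	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("import %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	fmt.Println(loadCachedImage("k8s.gcr.io/pause:3.7", "/var/lib/minikube/images/pause_3.7"))
}

The cycle can only succeed when the cached tarball exists on the host, which is what the kube-proxy warning below is about.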
W1107 17:09:02.521265 165743 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.6: no such file or directory
I1107 17:09:02.521313 165743 ssh_runner.go:195] Run: sudo crictl info
I1107 17:09:02.549803 165743 cni.go:95] Creating CNI manager for ""
I1107 17:09:02.549843 165743 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I1107 17:09:02.549862 165743 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1107 17:09:02.549885 165743 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-170735 NodeName:test-preload-170735 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
I1107 17:09:02.550126 165743 kubeadm.go:161] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.67.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "test-preload-170735"
kubeletExtraArgs:
node-ip: 192.168.67.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.6
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1107 17:09:02.550287 165743 kubeadm.go:962] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-170735 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.24.6 ClusterName:test-preload-170735 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I1107 17:09:02.550387 165743 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.6
I1107 17:09:02.558461 165743 binaries.go:44] Found k8s binaries, skipping transfer
I1107 17:09:02.558534 165743 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1107 17:09:02.609209 165743 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (512 bytes)
I1107 17:09:02.622855 165743 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1107 17:09:02.636362 165743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
I1107 17:09:02.650109 165743 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I1107 17:09:02.653949 165743 certs.go:54] Setting up /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735 for IP: 192.168.67.2
I1107 17:09:02.654100 165743 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15310-44720/.minikube/ca.key
I1107 17:09:02.654166 165743 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15310-44720/.minikube/proxy-client-ca.key
I1107 17:09:02.654255 165743 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/client.key
I1107 17:09:02.654354 165743 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/apiserver.key.c7fa3a9e
I1107 17:09:02.654418 165743 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/proxy-client.key
I1107 17:09:02.654554 165743 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/51176.pem (1338 bytes)
W1107 17:09:02.654595 165743 certs.go:384] ignoring /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/51176_empty.pem, impossibly tiny 0 bytes
I1107 17:09:02.654613 165743 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca-key.pem (1679 bytes)
I1107 17:09:02.654657 165743 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem (1082 bytes)
I1107 17:09:02.654702 165743 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/cert.pem (1123 bytes)
I1107 17:09:02.654738 165743 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/key.pem (1679 bytes)
I1107 17:09:02.654791 165743 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/511762.pem (1708 bytes)
I1107 17:09:02.655574 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1107 17:09:02.703678 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1107 17:09:02.723409 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1107 17:09:02.742737 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1107 17:09:02.763001 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1107 17:09:02.818366 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1107 17:09:02.839767 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1107 17:09:02.861717 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1107 17:09:02.910886 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/certs/51176.pem --> /usr/share/ca-certificates/51176.pem (1338 bytes)
I1107 17:09:02.931102 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/511762.pem --> /usr/share/ca-certificates/511762.pem (1708 bytes)
I1107 17:09:02.951804 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1107 17:09:03.011717 165743 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1107 17:09:03.027317 165743 ssh_runner.go:195] Run: openssl version
I1107 17:09:03.032867 165743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1107 17:09:03.041130 165743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1107 17:09:03.044672 165743 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov 7 16:46 /usr/share/ca-certificates/minikubeCA.pem
I1107 17:09:03.044721 165743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1107 17:09:03.050588 165743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1107 17:09:03.105632 165743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/51176.pem && ln -fs /usr/share/ca-certificates/51176.pem /etc/ssl/certs/51176.pem"
I1107 17:09:03.114215 165743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/51176.pem
I1107 17:09:03.117586 165743 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov 7 16:50 /usr/share/ca-certificates/51176.pem
I1107 17:09:03.117644 165743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/51176.pem
I1107 17:09:03.123353 165743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/51176.pem /etc/ssl/certs/51391683.0"
I1107 17:09:03.131017 165743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/511762.pem && ln -fs /usr/share/ca-certificates/511762.pem /etc/ssl/certs/511762.pem"
I1107 17:09:03.139872 165743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/511762.pem
I1107 17:09:03.143694 165743 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov 7 16:50 /usr/share/ca-certificates/511762.pem
I1107 17:09:03.143738 165743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/511762.pem
I1107 17:09:03.149761 165743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/511762.pem /etc/ssl/certs/3ec20f2e.0"
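The openssl x509 -hash / ln -fs pairs above build the standard OpenSSL CA directory layout: every trusted certificate must be reachable under its subject hash as /etc/ssl/certs/<hash>.0. A sketch reproducing those two steps, assuming the openssl binary is on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash symlinks certPath into dir under the name
// <openssl-subject-hash>.0, matching the ln -fs calls in the log.
func linkBySubjectHash(dir, certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // -f behaviour: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/etc/ssl/certs", "/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}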
I1107 17:09:03.209904 165743 kubeadm.go:396] StartCluster: {Name:test-preload-170735 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-170735 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1107 17:09:03.210035 165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1107 17:09:03.210092 165743 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1107 17:09:03.240135 165743 cri.go:87] found id: "bbc8111955475e273c589af8ebe48cc22947c192b9004953ca28f3abd9af9206"
I1107 17:09:03.240172 165743 cri.go:87] found id: "0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834"
I1107 17:09:03.240181 165743 cri.go:87] found id: ""
I1107 17:09:03.240225 165743 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I1107 17:09:03.327373 165743 cri.go:114] JSON = [{"ociVersion":"1.0.2-dev","id":"0314116d648233d6c1e60ed5a556a815105434479c9a17285a7cd8dc23953bc5","pid":1641,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0314116d648233d6c1e60ed5a556a815105434479c9a17285a7cd8dc23953bc5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0314116d648233d6c1e60ed5a556a815105434479c9a17285a7cd8dc23953bc5/rootfs","created":"2022-11-07T17:07:57.155832841Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/etcd:3.5.3-0","io.kubernetes.cri.sandbox-id":"705f6c5ec34a5c35201b86083eae5b20aa3092c970306581dbf6500d08277f86","io.kubernetes.cri.sandbox-name":"etcd-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0db62b45b89de774d85d268732de085fe12b9045e1c19792e1a8a7762a41a5aa","pid":3510,"status":"running","bundle":"/ru
n/containerd/io.containerd.runtime.v2.task/k8s.io/0db62b45b89de774d85d268732de085fe12b9045e1c19792e1a8a7762a41a5aa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0db62b45b89de774d85d268732de085fe12b9045e1c19792e1a8a7762a41a5aa/rootfs","created":"2022-11-07T17:08:53.110308717Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"0db62b45b89de774d85d268732de085fe12b9045e1c19792e1a8a7762a41a5aa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-lv445_fcbfbd08-498e-4a9c-8d36-0d45cbd312bd","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-lv445","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834","pid":3658,"status":"running","bundle":"/run/containerd/
io.containerd.runtime.v2.task/k8s.io/0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834/rootfs","created":"2022-11-07T17:08:54.456156833Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"f82c54e5c1fb4c8247a99e96a8cf288d1c50b27e3b90db040e3d9988132681f6","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"250fd604c9fb7454383acc4ff70415d383a9cf0481b9200f9670707b2e744be4","pid":2180,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/250fd604c9fb7454383acc4ff70415d383a9cf0481b9200f9670707b2e744be4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/250fd
604c9fb7454383acc4ff70415d383a9cf0481b9200f9670707b2e744be4/rootfs","created":"2022-11-07T17:08:16.602156421Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"250fd604c9fb7454383acc4ff70415d383a9cf0481b9200f9670707b2e744be4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-fh9w9_eca84e65-57b5-4cc9-b42a-0f991c91ffe7","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-fh9w9","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2d4d536c9a0a40d49c0246daa72b6615857bf6fe87f3d15e95a21a7878e5101d","pid":3521,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d4d536c9a0a40d49c0246daa72b6615857bf6fe87f3d15e95a21a7878e5101d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d4d536c9a0
a40d49c0246daa72b6615857bf6fe87f3d15e95a21a7878e5101d/rootfs","created":"2022-11-07T17:08:53.110915142Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"2d4d536c9a0a40d49c0246daa72b6615857bf6fe87f3d15e95a21a7878e5101d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-170735_11f8c11ccd07f3d1eb49f811a3342256","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"37b02358b7bade85f9ecdfb958e54a66ddbeda36fd5f7eaf12e0bdd9398d5b95","pid":1505,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/37b02358b7bade85f9ecdfb958e54a66ddbeda36fd5f7eaf12e0bdd9398d5b95","rootfs":"/run/containerd/io.containerd.runtime.v2.ta
sk/k8s.io/37b02358b7bade85f9ecdfb958e54a66ddbeda36fd5f7eaf12e0bdd9398d5b95/rootfs","created":"2022-11-07T17:07:56.942370634Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"37b02358b7bade85f9ecdfb958e54a66ddbeda36fd5f7eaf12e0bdd9398d5b95","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-170735_11f8c11ccd07f3d1eb49f811a3342256","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3feddb0dbdb52435facf4a9e8b5290241f16d8c1a930b0d6090df45977832049","pid":3522,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3feddb0dbdb52435facf4a9e8b5290241f16d8c1a930b0d6090df45977832049","rootfs":"/run/containerd/io.con
tainerd.runtime.v2.task/k8s.io/3feddb0dbdb52435facf4a9e8b5290241f16d8c1a930b0d6090df45977832049/rootfs","created":"2022-11-07T17:08:53.027578577Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"3feddb0dbdb52435facf4a9e8b5290241f16d8c1a930b0d6090df45977832049","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-170735_809d9df5626cf37e910052830f1a68d3","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"415576cdc8f40f5fc3f6a7438ecb0ffb290f93f316f6b054f7a0f5caba68a623","pid":2181,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/415576cdc8f40f5fc3f6a7438ecb0ffb290f93f316f6b054f7a0f5caba6
8a623","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/415576cdc8f40f5fc3f6a7438ecb0ffb290f93f316f6b054f7a0f5caba68a623/rootfs","created":"2022-11-07T17:08:16.461925695Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"415576cdc8f40f5fc3f6a7438ecb0ffb290f93f316f6b054f7a0f5caba68a623","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-lv445_fcbfbd08-498e-4a9c-8d36-0d45cbd312bd","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-lv445","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"46a2f3bebabe1b18bc1bb0a2815efd01f85119114dd473e67a6ef5ed94353067","pid":2431,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46a2f3bebabe1b18bc1bb0a2815efd01f85119114dd473e67a6ef5ed94353067","rootf
s":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46a2f3bebabe1b18bc1bb0a2815efd01f85119114dd473e67a6ef5ed94353067/rootfs","created":"2022-11-07T17:08:19.802116354Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20221004-44d545d1","io.kubernetes.cri.sandbox-id":"250fd604c9fb7454383acc4ff70415d383a9cf0481b9200f9670707b2e744be4","io.kubernetes.cri.sandbox-name":"kindnet-fh9w9","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5bee5419fd26f0844e516ee32486faead6e58bf4501faaf52d7a05e85ca46114","pid":2624,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5bee5419fd26f0844e516ee32486faead6e58bf4501faaf52d7a05e85ca46114","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5bee5419fd26f0844e516ee32486faead6e58bf4501faaf52d7a05e85ca46114/rootfs","created":"2022-11-07T17:08:24.414118976Z","annotati
ons":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"7c5dde526df8a7e840df241f81375cff02de464862ccf3844a2728ca17764c90","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5e48addfe561771f69f67c220475d6917957100e69095105320537af7d0b949e","pid":3576,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e48addfe561771f69f67c220475d6917957100e69095105320537af7d0b949e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e48addfe561771f69f67c220475d6917957100e69095105320537af7d0b949e/rootfs","created":"2022-11-07T17:08:53.22282877Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-
shares":"102","io.kubernetes.cri.sandbox-id":"5e48addfe561771f69f67c220475d6917957100e69095105320537af7d0b949e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-46n4z_0bb47afc-9c44-48b3-8dd4-966ed2608a7a","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-46n4z","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5ec95a28b0b3d4879dcc46fb1204c97678b5dcc9326ba25e57ff05480a153e83","pid":3544,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ec95a28b0b3d4879dcc46fb1204c97678b5dcc9326ba25e57ff05480a153e83","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ec95a28b0b3d4879dcc46fb1204c97678b5dcc9326ba25e57ff05480a153e83/rootfs","created":"2022-11-07T17:08:53.114873995Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri
.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"5ec95a28b0b3d4879dcc46fb1204c97678b5dcc9326ba25e57ff05480a153e83","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-170735_62ea0ae7f0dd287c41e3fc4d83f43bcc","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"705f6c5ec34a5c35201b86083eae5b20aa3092c970306581dbf6500d08277f86","pid":1509,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/705f6c5ec34a5c35201b86083eae5b20aa3092c970306581dbf6500d08277f86","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/705f6c5ec34a5c35201b86083eae5b20aa3092c970306581dbf6500d08277f86/rootfs","created":"2022-11-07T17:07:56.942483078Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cr
i.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"705f6c5ec34a5c35201b86083eae5b20aa3092c970306581dbf6500d08277f86","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-170735_62ea0ae7f0dd287c41e3fc4d83f43bcc","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"76644cc52c0f394f09520836294eb59805f2485e8b23588a7b7c930a102977da","pid":1511,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/76644cc52c0f394f09520836294eb59805f2485e8b23588a7b7c930a102977da","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/76644cc52c0f394f09520836294eb59805f2485e8b23588a7b7c930a102977da/rootfs","created":"2022-11-07T17:07:56.942394808Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.c
ri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"76644cc52c0f394f09520836294eb59805f2485e8b23588a7b7c930a102977da","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-170735_809d9df5626cf37e910052830f1a68d3","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7c5dde526df8a7e840df241f81375cff02de464862ccf3844a2728ca17764c90","pid":2564,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c5dde526df8a7e840df241f81375cff02de464862ccf3844a2728ca17764c90","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c5dde526df8a7e840df241f81375cff02de464862ccf3844a2728ca17764c90/rootfs","created":"2022-11-07T17:08:24.30208689Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.s
andbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"7c5dde526df8a7e840df241f81375cff02de464862ccf3844a2728ca17764c90","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_c43d0d64-f743-4627-894e-be6b8af2e64d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7ebafae905092334183576510f909bae93bd084561b7b4a27b2c106d54be85e8","pid":2247,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ebafae905092334183576510f909bae93bd084561b7b4a27b2c106d54be85e8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ebafae905092334183576510f909bae93bd084561b7b4a27b2c106d54be85e8/rootfs","created":"2022-11-07T17:08:16.619320417Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-
name":"k8s.gcr.io/kube-proxy:v1.24.4","io.kubernetes.cri.sandbox-id":"415576cdc8f40f5fc3f6a7438ecb0ffb290f93f316f6b054f7a0f5caba68a623","io.kubernetes.cri.sandbox-name":"kube-proxy-lv445","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9af9bcc1e7bfa1fcebd60598922c92115f437177f9d295842f96abde73b0517a","pid":1639,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9af9bcc1e7bfa1fcebd60598922c92115f437177f9d295842f96abde73b0517a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9af9bcc1e7bfa1fcebd60598922c92115f437177f9d295842f96abde73b0517a/rootfs","created":"2022-11-07T17:07:57.155960118Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-apiserver:v1.24.4","io.kubernetes.cri.sandbox-id":"37b02358b7bade85f9ecdfb958e54a66ddbeda36fd5f7eaf12e0bdd9398d5b95","io.kubernetes.cri.sandbox-name":"kube-apiserver-tes
t-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a597437e00c1cbee43bab5dbd971df23b2c78c7ba933574af3a62f4511d41593","pid":2565,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a597437e00c1cbee43bab5dbd971df23b2c78c7ba933574af3a62f4511d41593","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a597437e00c1cbee43bab5dbd971df23b2c78c7ba933574af3a62f4511d41593/rootfs","created":"2022-11-07T17:08:24.301147925Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a597437e00c1cbee43bab5dbd971df23b2c78c7ba933574af3a62f4511d41593","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-46n4z_0bb47afc-9c44-48b3-8dd4-966ed2608a7a","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-na
me":"coredns-6d4b75cb6d-46n4z","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a768e9176c7556574834f51de6dfc34d5c6228886484db628fa022d4cc609d74","pid":1510,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a768e9176c7556574834f51de6dfc34d5c6228886484db628fa022d4cc609d74","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a768e9176c7556574834f51de6dfc34d5c6228886484db628fa022d4cc609d74/rootfs","created":"2022-11-07T17:07:56.942447268Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a768e9176c7556574834f51de6dfc34d5c6228886484db628fa022d4cc609d74","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-170735_d3532015a9097ea10a4280936fe474ca","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri
.sandbox-name":"kube-scheduler-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b8770899fd0b74b30df43ba6cfadcb7b183dddc9100af7c7bc1df4a3e6065247","pid":2623,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8770899fd0b74b30df43ba6cfadcb7b183dddc9100af7c7bc1df4a3e6065247","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8770899fd0b74b30df43ba6cfadcb7b183dddc9100af7c7bc1df4a3e6065247/rootfs","created":"2022-11-07T17:08:24.411783378Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/coredns/coredns:v1.8.6","io.kubernetes.cri.sandbox-id":"a597437e00c1cbee43bab5dbd971df23b2c78c7ba933574af3a62f4511d41593","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-46n4z","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d596e727cf71ed6c642b598c327f52552f
ba8f973625380adcf054e3f5d2d1c6","pid":1642,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d596e727cf71ed6c642b598c327f52552fba8f973625380adcf054e3f5d2d1c6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d596e727cf71ed6c642b598c327f52552fba8f973625380adcf054e3f5d2d1c6/rootfs","created":"2022-11-07T17:07:57.156067666Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-controller-manager:v1.24.4","io.kubernetes.cri.sandbox-id":"76644cc52c0f394f09520836294eb59805f2485e8b23588a7b7c930a102977da","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ddefa3ac5399737dff7110fcaf80962425381646c55d72cc70f71a263df0342a","pid":3553,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ddefa3ac5399737dff
7110fcaf80962425381646c55d72cc70f71a263df0342a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ddefa3ac5399737dff7110fcaf80962425381646c55d72cc70f71a263df0342a/rootfs","created":"2022-11-07T17:08:53.113518089Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ddefa3ac5399737dff7110fcaf80962425381646c55d72cc70f71a263df0342a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-170735_d3532015a9097ea10a4280936fe474ca","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ea6df2fe58eeb1388803a45c064d70377759e51f38f946f4ea3630da79dc69a6","pid":1640,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.
io/ea6df2fe58eeb1388803a45c064d70377759e51f38f946f4ea3630da79dc69a6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea6df2fe58eeb1388803a45c064d70377759e51f38f946f4ea3630da79dc69a6/rootfs","created":"2022-11-07T17:07:57.156161632Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-scheduler:v1.24.4","io.kubernetes.cri.sandbox-id":"a768e9176c7556574834f51de6dfc34d5c6228886484db628fa022d4cc609d74","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f82c54e5c1fb4c8247a99e96a8cf288d1c50b27e3b90db040e3d9988132681f6","pid":3562,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f82c54e5c1fb4c8247a99e96a8cf288d1c50b27e3b90db040e3d9988132681f6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f82c54e5c1fb4c8247a99e96a8cf288d1c50b2
7e3b90db040e3d9988132681f6/rootfs","created":"2022-11-07T17:08:53.114973557Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f82c54e5c1fb4c8247a99e96a8cf288d1c50b27e3b90db040e3d9988132681f6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_c43d0d64-f743-4627-894e-be6b8af2e64d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f9e6c7652c1304ceea0e17fabb8f5fb88b5c0f31719fbde5a10025abb05664ed","pid":3518,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f9e6c7652c1304ceea0e17fabb8f5fb88b5c0f31719fbde5a10025abb05664ed","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f9e6c7652c1304ceea0e17fabb8f5fb88b5c0f31719fb
de5a10025abb05664ed/rootfs","created":"2022-11-07T17:08:53.111272121Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"f9e6c7652c1304ceea0e17fabb8f5fb88b5c0f31719fbde5a10025abb05664ed","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-fh9w9_eca84e65-57b5-4cc9-b42a-0f991c91ffe7","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-fh9w9","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"}]
I1107 17:09:03.327859 165743 cri.go:124] list returned 25 containers
I1107 17:09:03.327880 165743 cri.go:127] container: {ID:0314116d648233d6c1e60ed5a556a815105434479c9a17285a7cd8dc23953bc5 Status:running}
I1107 17:09:03.327898 165743 cri.go:129] skipping 0314116d648233d6c1e60ed5a556a815105434479c9a17285a7cd8dc23953bc5 - not in ps
I1107 17:09:03.327906 165743 cri.go:127] container: {ID:0db62b45b89de774d85d268732de085fe12b9045e1c19792e1a8a7762a41a5aa Status:running}
I1107 17:09:03.327915 165743 cri.go:129] skipping 0db62b45b89de774d85d268732de085fe12b9045e1c19792e1a8a7762a41a5aa - not in ps
I1107 17:09:03.327927 165743 cri.go:127] container: {ID:0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834 Status:running}
I1107 17:09:03.327939 165743 cri.go:133] skipping {0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834 running}: state = "running", want "paused"
I1107 17:09:03.327954 165743 cri.go:127] container: {ID:250fd604c9fb7454383acc4ff70415d383a9cf0481b9200f9670707b2e744be4 Status:running}
I1107 17:09:03.327966 165743 cri.go:129] skipping 250fd604c9fb7454383acc4ff70415d383a9cf0481b9200f9670707b2e744be4 - not in ps
I1107 17:09:03.327973 165743 cri.go:127] container: {ID:2d4d536c9a0a40d49c0246daa72b6615857bf6fe87f3d15e95a21a7878e5101d Status:running}
I1107 17:09:03.327986 165743 cri.go:129] skipping 2d4d536c9a0a40d49c0246daa72b6615857bf6fe87f3d15e95a21a7878e5101d - not in ps
I1107 17:09:03.328004 165743 cri.go:127] container: {ID:37b02358b7bade85f9ecdfb958e54a66ddbeda36fd5f7eaf12e0bdd9398d5b95 Status:running}
I1107 17:09:03.328018 165743 cri.go:129] skipping 37b02358b7bade85f9ecdfb958e54a66ddbeda36fd5f7eaf12e0bdd9398d5b95 - not in ps
I1107 17:09:03.328029 165743 cri.go:127] container: {ID:3feddb0dbdb52435facf4a9e8b5290241f16d8c1a930b0d6090df45977832049 Status:running}
I1107 17:09:03.328041 165743 cri.go:129] skipping 3feddb0dbdb52435facf4a9e8b5290241f16d8c1a930b0d6090df45977832049 - not in ps
I1107 17:09:03.328047 165743 cri.go:127] container: {ID:415576cdc8f40f5fc3f6a7438ecb0ffb290f93f316f6b054f7a0f5caba68a623 Status:running}
I1107 17:09:03.328060 165743 cri.go:129] skipping 415576cdc8f40f5fc3f6a7438ecb0ffb290f93f316f6b054f7a0f5caba68a623 - not in ps
I1107 17:09:03.328071 165743 cri.go:127] container: {ID:46a2f3bebabe1b18bc1bb0a2815efd01f85119114dd473e67a6ef5ed94353067 Status:running}
I1107 17:09:03.328082 165743 cri.go:129] skipping 46a2f3bebabe1b18bc1bb0a2815efd01f85119114dd473e67a6ef5ed94353067 - not in ps
I1107 17:09:03.328092 165743 cri.go:127] container: {ID:5bee5419fd26f0844e516ee32486faead6e58bf4501faaf52d7a05e85ca46114 Status:running}
I1107 17:09:03.328100 165743 cri.go:129] skipping 5bee5419fd26f0844e516ee32486faead6e58bf4501faaf52d7a05e85ca46114 - not in ps
I1107 17:09:03.328107 165743 cri.go:127] container: {ID:5e48addfe561771f69f67c220475d6917957100e69095105320537af7d0b949e Status:running}
I1107 17:09:03.328121 165743 cri.go:129] skipping 5e48addfe561771f69f67c220475d6917957100e69095105320537af7d0b949e - not in ps
I1107 17:09:03.328132 165743 cri.go:127] container: {ID:5ec95a28b0b3d4879dcc46fb1204c97678b5dcc9326ba25e57ff05480a153e83 Status:running}
I1107 17:09:03.328144 165743 cri.go:129] skipping 5ec95a28b0b3d4879dcc46fb1204c97678b5dcc9326ba25e57ff05480a153e83 - not in ps
I1107 17:09:03.328150 165743 cri.go:127] container: {ID:705f6c5ec34a5c35201b86083eae5b20aa3092c970306581dbf6500d08277f86 Status:running}
I1107 17:09:03.328169 165743 cri.go:129] skipping 705f6c5ec34a5c35201b86083eae5b20aa3092c970306581dbf6500d08277f86 - not in ps
I1107 17:09:03.328181 165743 cri.go:127] container: {ID:76644cc52c0f394f09520836294eb59805f2485e8b23588a7b7c930a102977da Status:running}
I1107 17:09:03.328188 165743 cri.go:129] skipping 76644cc52c0f394f09520836294eb59805f2485e8b23588a7b7c930a102977da - not in ps
I1107 17:09:03.328199 165743 cri.go:127] container: {ID:7c5dde526df8a7e840df241f81375cff02de464862ccf3844a2728ca17764c90 Status:running}
I1107 17:09:03.328209 165743 cri.go:129] skipping 7c5dde526df8a7e840df241f81375cff02de464862ccf3844a2728ca17764c90 - not in ps
I1107 17:09:03.328214 165743 cri.go:127] container: {ID:7ebafae905092334183576510f909bae93bd084561b7b4a27b2c106d54be85e8 Status:running}
I1107 17:09:03.328223 165743 cri.go:129] skipping 7ebafae905092334183576510f909bae93bd084561b7b4a27b2c106d54be85e8 - not in ps
I1107 17:09:03.328229 165743 cri.go:127] container: {ID:9af9bcc1e7bfa1fcebd60598922c92115f437177f9d295842f96abde73b0517a Status:running}
I1107 17:09:03.328241 165743 cri.go:129] skipping 9af9bcc1e7bfa1fcebd60598922c92115f437177f9d295842f96abde73b0517a - not in ps
I1107 17:09:03.328248 165743 cri.go:127] container: {ID:a597437e00c1cbee43bab5dbd971df23b2c78c7ba933574af3a62f4511d41593 Status:running}
I1107 17:09:03.328263 165743 cri.go:129] skipping a597437e00c1cbee43bab5dbd971df23b2c78c7ba933574af3a62f4511d41593 - not in ps
I1107 17:09:03.328275 165743 cri.go:127] container: {ID:a768e9176c7556574834f51de6dfc34d5c6228886484db628fa022d4cc609d74 Status:running}
I1107 17:09:03.328287 165743 cri.go:129] skipping a768e9176c7556574834f51de6dfc34d5c6228886484db628fa022d4cc609d74 - not in ps
I1107 17:09:03.328297 165743 cri.go:127] container: {ID:b8770899fd0b74b30df43ba6cfadcb7b183dddc9100af7c7bc1df4a3e6065247 Status:running}
I1107 17:09:03.328308 165743 cri.go:129] skipping b8770899fd0b74b30df43ba6cfadcb7b183dddc9100af7c7bc1df4a3e6065247 - not in ps
I1107 17:09:03.328318 165743 cri.go:127] container: {ID:d596e727cf71ed6c642b598c327f52552fba8f973625380adcf054e3f5d2d1c6 Status:running}
I1107 17:09:03.328326 165743 cri.go:129] skipping d596e727cf71ed6c642b598c327f52552fba8f973625380adcf054e3f5d2d1c6 - not in ps
I1107 17:09:03.328337 165743 cri.go:127] container: {ID:ddefa3ac5399737dff7110fcaf80962425381646c55d72cc70f71a263df0342a Status:running}
I1107 17:09:03.328349 165743 cri.go:129] skipping ddefa3ac5399737dff7110fcaf80962425381646c55d72cc70f71a263df0342a - not in ps
I1107 17:09:03.328358 165743 cri.go:127] container: {ID:ea6df2fe58eeb1388803a45c064d70377759e51f38f946f4ea3630da79dc69a6 Status:running}
I1107 17:09:03.328370 165743 cri.go:129] skipping ea6df2fe58eeb1388803a45c064d70377759e51f38f946f4ea3630da79dc69a6 - not in ps
I1107 17:09:03.328381 165743 cri.go:127] container: {ID:f82c54e5c1fb4c8247a99e96a8cf288d1c50b27e3b90db040e3d9988132681f6 Status:running}
I1107 17:09:03.328391 165743 cri.go:129] skipping f82c54e5c1fb4c8247a99e96a8cf288d1c50b27e3b90db040e3d9988132681f6 - not in ps
I1107 17:09:03.328404 165743 cri.go:127] container: {ID:f9e6c7652c1304ceea0e17fabb8f5fb88b5c0f31719fbde5a10025abb05664ed Status:running}
I1107 17:09:03.328415 165743 cri.go:129] skipping f9e6c7652c1304ceea0e17fabb8f5fb88b5c0f31719fbde5a10025abb05664ed - not in ps
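The run of skipping ... lines above is a two-stage filter over the runc list JSON: an ID survives only if it also appeared in the earlier crictl ps output and is in the requested state (here "paused", so everything running is skipped). A condensed sketch; the struct fields match the JSON keys in the dump, the helper name is mine:

package main

import (
	"encoding/json"
	"fmt"
)

// runcTask mirrors the two fields used from `runc list -f json`.
type runcTask struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// filterTasks keeps only IDs that crictl also reported and that are in
// the wanted state; everything else produces a "skipping ..." log line.
func filterTasks(tasks []runcTask, inPS map[string]bool, want string) []string {
	var keep []string
	for _, t := range tasks {
		if !inPS[t.ID] {
			continue // "not in ps"
		}
		if t.Status != want {
			continue // e.g. state = "running", want "paused"
		}
		keep = append(keep, t.ID)
	}
	return keep
}

func main() {
	raw := `[{"id":"abc","status":"running"},{"id":"def","status":"paused"}]`
	var tasks []runcTask
	if err := json.Unmarshal([]byte(raw), &tasks); err != nil {
		panic(err)
	}
	fmt.Println(filterTasks(tasks, map[string]bool{"abc": true, "def": true}, "paused")) // [def]
}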
I1107 17:09:03.328459 165743 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1107 17:09:03.336550 165743 kubeadm.go:411] found existing configuration files, will attempt cluster restart
I1107 17:09:03.336573 165743 kubeadm.go:627] restartCluster start
I1107 17:09:03.336628 165743 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1107 17:09:03.344380 165743 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1107 17:09:03.345034 165743 kubeconfig.go:92] found "test-preload-170735" server: "https://192.168.67.2:8443"
I1107 17:09:03.345729 165743 kapi.go:59] client config for test-preload-170735: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/client.crt", KeyFile:"/home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/client.key", CAFile:"/home/jenkins/minikube-integration/15310-44720/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1786940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1107 17:09:03.346403 165743 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1107 17:09:03.402000 165743 kubeadm.go:594] needs reconfigure: configs differ:
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml 2022-11-07 17:07:52.875254223 +0000
+++ /var/tmp/minikube/kubeadm.yaml.new 2022-11-07 17:09:02.646277681 +0000
@@ -38,7 +38,7 @@
     dataDir: /var/lib/minikube/etcd
     extraArgs:
       proxy-refresh-interval: "70000"
-kubernetesVersion: v1.24.4
+kubernetesVersion: v1.24.6
 networking:
   dnsDomain: cluster.local
   podSubnet: "10.244.0.0/16"
-- /stdout --
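Whether a restart needs reconfiguring is decided purely by diff's exit status: 0 means the rendered kubeadm.yaml.new matches what is already on disk, 1 means they differ (as here, where only kubernetesVersion changed). A sketch of that check:

package main

import (
	"fmt"
	"os/exec"
)

// configsDiffer maps diff's exit status onto the restart decision:
// exit 0 = identical, exit 1 = needs reconfigure, anything else = error.
func configsDiffer(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil
	}
	return false, "", err
}

func main() {
	differ, patch, err := configsDiffer("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	fmt.Println("needs reconfigure:", differ)
	fmt.Print(patch)
}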
I1107 17:09:03.402024 165743 kubeadm.go:1114] stopping kube-system containers ...
I1107 17:09:03.402039 165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I1107 17:09:03.402098 165743 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1107 17:09:03.431844 165743 cri.go:87] found id: "bbc8111955475e273c589af8ebe48cc22947c192b9004953ca28f3abd9af9206"
I1107 17:09:03.431899 165743 cri.go:87] found id: "0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834"
I1107 17:09:03.431910 165743 cri.go:87] found id: ""
I1107 17:09:03.431917 165743 cri.go:232] Stopping containers: [bbc8111955475e273c589af8ebe48cc22947c192b9004953ca28f3abd9af9206 0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834]
I1107 17:09:03.431974 165743 ssh_runner.go:195] Run: which crictl
I1107 17:09:03.436330 165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop bbc8111955475e273c589af8ebe48cc22947c192b9004953ca28f3abd9af9206 0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834
I1107 17:09:03.742156 165743 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I1107 17:09:03.809643 165743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1107 17:09:03.817012 165743 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5643 Nov 7 17:07 /etc/kubernetes/admin.conf
-rw------- 1 root root 5656 Nov 7 17:07 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2015 Nov 7 17:08 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5604 Nov 7 17:07 /etc/kubernetes/scheduler.conf
I1107 17:09:03.817084 165743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1107 17:09:03.823720 165743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1107 17:09:03.830244 165743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1107 17:09:03.836663 165743 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I1107 17:09:03.836710 165743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1107 17:09:03.842795 165743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1107 17:09:03.849520 165743 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I1107 17:09:03.849574 165743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
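The grep / rm -f pairs above prune any kubeconfig that no longer points at https://control-plane.minikube.internal:8443; kubeadm regenerates the removed files in the init phase kubeconfig step below. A simplified local equivalent (the log does this remotely via sudo over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

const wantServer = "https://control-plane.minikube.internal:8443"

// pruneStaleKubeconfig removes a conf file whose server entry does not
// mention wantServer, so kubeadm will regenerate it.
func pruneStaleKubeconfig(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if strings.Contains(string(data), wantServer) {
		return nil // endpoint is correct, keep the file
	}
	fmt.Printf("removing %s (stale server endpoint)\n", path)
	return os.Remove(path)
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := pruneStaleKubeconfig(f); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}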
I1107 17:09:03.856003 165743 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1107 17:09:03.862911 165743 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I1107 17:09:03.862935 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I1107 17:09:04.002289 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I1107 17:09:05.237323 165743 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.234999973s)
I1107 17:09:05.237359 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I1107 17:09:05.449035 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I1107 17:09:05.504177 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
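Rather than a full kubeadm init, the restart path replays a fixed subset of init phases in order: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of driving that sequence with the binary and config paths from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.24.6/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	// Phase order matters: certs before kubeconfigs, control plane
	// before etcd, mirroring the Run: lines in the log.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", cfg)
		if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
			return
		}
	}
}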
I1107 17:09:05.621639 165743 api_server.go:51] waiting for apiserver process to appear ...
I1107 17:09:05.621702 165743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1107 17:09:05.633566 165743 api_server.go:71] duration metric: took 11.935157ms to wait for apiserver process to appear ...
I1107 17:09:05.633600 165743 api_server.go:87] waiting for apiserver healthz status ...
I1107 17:09:05.633614 165743 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I1107 17:09:05.639393 165743 api_server.go:278] https://192.168.67.2:8443/healthz returned 200:
ok
I1107 17:09:05.645496 165743 api_server.go:140] control plane version: v1.24.4
W1107 17:09:05.645524 165743 api_server.go:120] api server version match failed: controlPlane = "v1.24.4", expected: "v1.24.6"
I1107 17:09:06.147196 165743 api_server.go:140] control plane version: v1.24.4
W1107 17:09:06.147277 165743 api_server.go:120] api server version match failed: controlPlane = "v1.24.4", expected: "v1.24.6"
I1107 17:09:06.646924 165743 api_server.go:140] control plane version: v1.24.4
W1107 17:09:06.646957 165743 api_server.go:120] api server version match failed: controlPlane = "v1.24.4", expected: "v1.24.6"
I1107 17:09:07.147645 165743 api_server.go:140] control plane version: v1.24.4
W1107 17:09:07.147679 165743 api_server.go:120] api server version match failed: controlPlane = "v1.24.4", expected: "v1.24.6"
I1107 17:09:07.647341 165743 api_server.go:140] control plane version: v1.24.4
W1107 17:09:07.647372 165743 api_server.go:120] api server version match failed: controlPlane = "v1.24.4", expected: "v1.24.6"
W1107 17:09:08.146168 165743 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1107 17:09:08.646046 165743 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1107 17:09:09.147144 165743 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1107 17:09:09.646092 165743 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1107 17:09:10.147021 165743 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1107 17:09:10.646973 165743 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1107 17:09:11.146883 165743 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
I1107 17:09:13.915841 165743 api_server.go:140] control plane version: v1.24.6
I1107 17:09:13.915921 165743 api_server.go:130] duration metric: took 8.282312967s to wait for apiserver health ...
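Health is established in two steps: /healthz must return 200, then /version must report the expected control-plane version; the connection refused entries in between are just the old apiserver exiting while the new one comes up. A polling sketch (TLS verification is skipped only to keep it short; the real client presents minikube's CA and client certificates):

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// waitForVersion polls <base>/version until gitVersion matches want or
// the deadline passes; connection errors are treated as "not yet".
func waitForVersion(base, want string, timeout time.Duration) error {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(base + "/version"); err == nil {
			var v struct {
				GitVersion string `json:"gitVersion"`
			}
			dErr := json.NewDecoder(resp.Body).Decode(&v)
			resp.Body.Close()
			if dErr == nil && v.GitVersion == want {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
	}
	return fmt.Errorf("apiserver never reported %s", want)
}

func main() {
	fmt.Println(waitForVersion("https://192.168.67.2:8443", "v1.24.6", 2*time.Minute))
}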
I1107 17:09:13.915945 165743 cni.go:95] Creating CNI manager for ""
I1107 17:09:13.915963 165743 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I1107 17:09:13.918212 165743 out.go:177] * Configuring CNI (Container Networking Interface) ...
I1107 17:09:13.919726 165743 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I1107 17:09:13.924616 165743 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.6/kubectl ...
I1107 17:09:13.924640 165743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I1107 17:09:14.021282 165743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I1107 17:09:15.124609 165743 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.103271829s)
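The kindnet recommendation at cni.go:162 follows from the driver/runtime pair: with the docker driver and a non-docker runtime there is no built-in pod network, so minikube injects kindnet. A deliberately simplified sketch of that decision, not minikube's exact logic:

package main

import "fmt"

// chooseCNI is a simplification of the decision logged as
// `"docker" driver + containerd runtime found, recommending kindnet`.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime != "docker" {
		return "kindnet"
	}
	return "" // the driver/runtime pair already provides networking
}

func main() {
	fmt.Println(chooseCNI("docker", "containerd")) // kindnet
}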
I1107 17:09:15.124658 165743 system_pods.go:43] waiting for kube-system pods to appear ...
I1107 17:09:15.134287 165743 system_pods.go:59] 8 kube-system pods found
I1107 17:09:15.134343 165743 system_pods.go:61] "coredns-6d4b75cb6d-46n4z" [0bb47afc-9c44-48b3-8dd4-966ed2608a7a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1107 17:09:15.134355 165743 system_pods.go:61] "etcd-test-preload-170735" [bf983595-48b0-4ad3-948e-264fe4654767] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1107 17:09:15.134365 165743 system_pods.go:61] "kindnet-fh9w9" [eca84e65-57b5-4cc9-b42a-0f991c91ffe7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I1107 17:09:15.134375 165743 system_pods.go:61] "kube-apiserver-test-preload-170735" [6005f40b-0034-46af-ac9b-8b7945ea8996] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1107 17:09:15.134382 165743 system_pods.go:61] "kube-controller-manager-test-preload-170735" [05e955ad-7fc3-4874-97a5-7ba8ee0faf37] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1107 17:09:15.134396 165743 system_pods.go:61] "kube-proxy-lv445" [fcbfbd08-498e-4a9c-8d36-0d45cbd312bd] Running
I1107 17:09:15.134404 165743 system_pods.go:61] "kube-scheduler-test-preload-170735" [102796b5-9e64-4c55-9ceb-c091fb0faf8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1107 17:09:15.134416 165743 system_pods.go:61] "storage-provisioner" [c43d0d64-f743-4627-894e-be6b8af2e64d] Running
I1107 17:09:15.134425 165743 system_pods.go:74] duration metric: took 9.760603ms to wait for pod list to return data ...
I1107 17:09:15.134434 165743 node_conditions.go:102] verifying NodePressure condition ...
I1107 17:09:15.136728 165743 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I1107 17:09:15.136759 165743 node_conditions.go:123] node cpu capacity is 8
I1107 17:09:15.136770 165743 node_conditions.go:105] duration metric: took 2.331494ms to run NodePressure ...
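The NodePressure step is just a node list plus a read of capacity fields, which is where the 304681132Ki and 8-cpu lines above come from. A client-go sketch of the same read, assuming k8s.io/client-go is on the module path and using the kubeconfig path from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// The log's two capacity lines come from these fields.
		fmt.Printf("%s ephemeral=%s cpu=%s\n", n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
	}
}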
I1107 17:09:15.136786 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I1107 17:09:15.388874 165743 kubeadm.go:763] waiting for restarted kubelet to initialise ...
I1107 17:09:15.392441 165743 kubeadm.go:778] kubelet initialised
I1107 17:09:15.392464 165743 kubeadm.go:779] duration metric: took 3.557352ms waiting for restarted kubelet to initialise ...
I1107 17:09:15.392473 165743 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1107 17:09:15.396706 165743 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace to be "Ready" ...
I1107 17:09:17.406088 165743 pod_ready.go:102] pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace has status "Ready":"False"
I1107 17:09:19.407719 165743 pod_ready.go:102] pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace has status "Ready":"False"
I1107 17:09:21.906077 165743 pod_ready.go:102] pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace has status "Ready":"False"
I1107 17:09:23.906170 165743 pod_ready.go:102] pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace has status "Ready":"False"
I1107 17:09:25.906482 165743 pod_ready.go:102] pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace has status "Ready":"False"
I1107 17:09:28.406244 165743 pod_ready.go:102] pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace has status "Ready":"False"
I1107 17:09:29.906673 165743 pod_ready.go:92] pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace has status "Ready":"True"
I1107 17:09:29.906708 165743 pod_ready.go:81] duration metric: took 14.509975616s waiting for pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace to be "Ready" ...
I1107 17:09:29.906722 165743 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-170735" in "kube-system" namespace to be "Ready" ...
I1107 17:09:31.916347 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:09:33.916395 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:09:35.917695 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:09:38.416611 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:09:40.417341 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:09:42.917030 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:09:44.917463 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:09:47.417821 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:09:49.916882 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:09:52.417257 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:09:54.916575 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:09:56.916604 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:09:58.917108 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:10:01.417633 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:10:03.917219 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:10:06.416808 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:10:08.917079 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:10:11.417333 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:10:13.417408 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:10:15.917166 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:10:18.415994 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:10:20.416647 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:10:22.917094 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:10:24.919800 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:10:27.416902 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:10:29.417714 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:10:31.917189 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:10:34.417311 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:10:36.916350 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:10:38.917416 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:10:41.416812 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:10:43.417080 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:10:45.916487 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:10:47.917346 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:10:50.416654 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:10:52.917124 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:10:55.416999 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:10:57.417311 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:10:59.916704 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:11:01.919070 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:11:04.416758 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:11:06.416952 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:11:08.916903 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:11:11.416562 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:11:13.417202 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:11:15.917270 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:11:18.416813 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:11:20.917286 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:11:23.416732 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:11:25.417405 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:11:27.916529 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:11:29.916950 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:11:32.417231 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:11:34.916940 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:11:37.416873 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:11:39.417294 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:11:41.916140 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:11:43.916375 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:11:45.916655 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:11:47.916977 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:11:50.416682 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:11:52.417097 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:11:54.916635 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:11:57.416816 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:11:59.916263 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:12:01.916974 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:12:03.917239 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:12:06.416793 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:12:08.417072 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:12:10.916349 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:12:13.416821 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:12:15.916263 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:12:17.916820 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:12:19.917768 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:12:22.416608 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:12:24.417657 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:12:26.916718 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:12:28.916894 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:12:31.417519 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:12:33.418814 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:12:35.916938 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:12:38.416980 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:12:40.916839 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:12:42.917145 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:12:44.917492 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:12:47.417047 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:12:49.916565 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:12:51.916916 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:12:54.416695 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:12:56.419030 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:12:58.916323 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:13:00.917565 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:13:03.416572 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:13:05.416612 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:13:07.917363 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:13:10.416406 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:13:12.416604 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:13:14.916267 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:13:16.916810 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:13:19.417492 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:13:21.916818 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:13:23.917104 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:13:26.416941 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:13:28.916283 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:13:29.912039 165743 pod_ready.go:81] duration metric: took 4m0.005300509s waiting for pod "etcd-test-preload-170735" in "kube-system" namespace to be "Ready" ...
E1107 17:13:29.912067 165743 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "etcd-test-preload-170735" in "kube-system" namespace to be "Ready" (will not retry!)
I1107 17:13:29.912099 165743 pod_ready.go:38] duration metric: took 4m14.519613554s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1107 17:13:29.912140 165743 kubeadm.go:631] restartCluster took 4m26.575555046s
W1107 17:13:29.912302 165743 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
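The wait above gave up because the etcd static pod never reported Ready within the 4m0s budget, so minikube falls back to a full kubeadm reset below. A quick manual check at this point (a sketch, assuming kubectl is on PATH and KUBECONFIG points at this profile's cluster) would be:

    # Show the etcd pod's Ready condition and its most recent events
    kubectl -n kube-system get pod etcd-test-preload-170735 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'
    kubectl -n kube-system describe pod etcd-test-preload-170735 | tail -n 20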
I1107 17:13:29.912357 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1107 17:13:31.585704 165743 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.673321164s)
I1107 17:13:31.585763 165743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1107 17:13:31.595197 165743 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1107 17:13:31.601977 165743 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I1107 17:13:31.602022 165743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1107 17:13:31.608611 165743 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1107 17:13:31.608656 165743 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1107 17:13:31.641698 165743 kubeadm.go:317] W1107 17:13:31.640965 6500 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
I1107 17:13:31.673782 165743 kubeadm.go:317] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
I1107 17:13:31.734442 165743 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1107 17:13:31.734566 165743 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
I1107 17:13:31.734625 165743 kubeadm.go:317] [ERROR Port-2379]: Port 2379 is in use
I1107 17:13:31.734689 165743 kubeadm.go:317] [ERROR Port-2380]: Port 2380 is in use
I1107 17:13:31.734827 165743 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
I1107 17:13:31.734917 165743 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
I1107 17:13:31.736598 165743 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
I1107 17:13:31.736666 165743 kubeadm.go:317] [preflight] Running pre-flight checks
I1107 17:13:31.736791 165743 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
I1107 17:13:31.736841 165743 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
I1107 17:13:31.736892 165743 kubeadm.go:317] OS: Linux
I1107 17:13:31.736952 165743 kubeadm.go:317] CGROUPS_CPU: enabled
I1107 17:13:31.737020 165743 kubeadm.go:317] CGROUPS_CPUACCT: enabled
I1107 17:13:31.737089 165743 kubeadm.go:317] CGROUPS_CPUSET: enabled
I1107 17:13:31.737161 165743 kubeadm.go:317] CGROUPS_DEVICES: enabled
I1107 17:13:31.737230 165743 kubeadm.go:317] CGROUPS_FREEZER: enabled
I1107 17:13:31.737297 165743 kubeadm.go:317] CGROUPS_MEMORY: enabled
I1107 17:13:31.737366 165743 kubeadm.go:317] CGROUPS_PIDS: enabled
I1107 17:13:31.737432 165743 kubeadm.go:317] CGROUPS_HUGETLB: enabled
I1107 17:13:31.737511 165743 kubeadm.go:317] CGROUPS_BLKIO: enabled
W1107 17:13:31.737713 165743 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1021-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W1107 17:13:31.640965 6500 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
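The two fatal preflight errors are etcd's client and peer ports still being bound from the previous cluster, which the kubeadm reset above evidently did not free. One way to confirm what is holding them (a sketch, run inside the node container, e.g. via `minikube ssh -p test-preload-170735`):

    # List the listeners on etcd's client (2379) and peer (2380) ports
    sudo ss -lptn 'sport = :2379'
    sudo ss -lptn 'sport = :2380'
    # Check whether a stale etcd container survived the reset
    sudo crictl ps -a --name etcd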
I1107 17:13:31.737760 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1107 17:13:32.054639 165743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1107 17:13:32.063813 165743 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I1107 17:13:32.063875 165743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1107 17:13:32.070411 165743 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1107 17:13:32.070456 165743 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1107 17:13:32.107519 165743 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
I1107 17:13:32.107565 165743 kubeadm.go:317] [preflight] Running pre-flight checks
I1107 17:13:32.134497 165743 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
I1107 17:13:32.134580 165743 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
I1107 17:13:32.134633 165743 kubeadm.go:317] OS: Linux
I1107 17:13:32.134687 165743 kubeadm.go:317] CGROUPS_CPU: enabled
I1107 17:13:32.134791 165743 kubeadm.go:317] CGROUPS_CPUACCT: enabled
I1107 17:13:32.134877 165743 kubeadm.go:317] CGROUPS_CPUSET: enabled
I1107 17:13:32.134944 165743 kubeadm.go:317] CGROUPS_DEVICES: enabled
I1107 17:13:32.135016 165743 kubeadm.go:317] CGROUPS_FREEZER: enabled
I1107 17:13:32.135087 165743 kubeadm.go:317] CGROUPS_MEMORY: enabled
I1107 17:13:32.135156 165743 kubeadm.go:317] CGROUPS_PIDS: enabled
I1107 17:13:32.135221 165743 kubeadm.go:317] CGROUPS_HUGETLB: enabled
I1107 17:13:32.135314 165743 kubeadm.go:317] CGROUPS_BLKIO: enabled
I1107 17:13:32.196691 165743 kubeadm.go:317] W1107 17:13:32.102889 6771 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
I1107 17:13:32.196897 165743 kubeadm.go:317] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
I1107 17:13:32.197035 165743 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1107 17:13:32.197117 165743 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
I1107 17:13:32.197155 165743 kubeadm.go:317] [ERROR Port-2379]: Port 2379 is in use
I1107 17:13:32.197197 165743 kubeadm.go:317] [ERROR Port-2380]: Port 2380 is in use
I1107 17:13:32.197292 165743 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
I1107 17:13:32.197352 165743 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
I1107 17:13:32.197439 165743 kubeadm.go:398] StartCluster complete in 4m28.987546075s
I1107 17:13:32.197484 165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1107 17:13:32.197525 165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1107 17:13:32.220007 165743 cri.go:87] found id: ""
I1107 17:13:32.220032 165743 logs.go:274] 0 containers: []
W1107 17:13:32.220040 165743 logs.go:276] No container was found matching "kube-apiserver"
I1107 17:13:32.220053 165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1107 17:13:32.220102 165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1107 17:13:32.242014 165743 cri.go:87] found id: ""
I1107 17:13:32.242043 165743 logs.go:274] 0 containers: []
W1107 17:13:32.242053 165743 logs.go:276] No container was found matching "etcd"
I1107 17:13:32.242066 165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1107 17:13:32.242112 165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1107 17:13:32.262942 165743 cri.go:87] found id: ""
I1107 17:13:32.262979 165743 logs.go:274] 0 containers: []
W1107 17:13:32.262988 165743 logs.go:276] No container was found matching "coredns"
I1107 17:13:32.262995 165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1107 17:13:32.263034 165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1107 17:13:32.284464 165743 cri.go:87] found id: ""
I1107 17:13:32.284488 165743 logs.go:274] 0 containers: []
W1107 17:13:32.284494 165743 logs.go:276] No container was found matching "kube-scheduler"
I1107 17:13:32.284501 165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1107 17:13:32.284552 165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1107 17:13:32.307214 165743 cri.go:87] found id: ""
I1107 17:13:32.307243 165743 logs.go:274] 0 containers: []
W1107 17:13:32.307252 165743 logs.go:276] No container was found matching "kube-proxy"
I1107 17:13:32.307260 165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1107 17:13:32.307310 165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1107 17:13:32.329151 165743 cri.go:87] found id: ""
I1107 17:13:32.329180 165743 logs.go:274] 0 containers: []
W1107 17:13:32.329196 165743 logs.go:276] No container was found matching "kubernetes-dashboard"
I1107 17:13:32.329205 165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1107 17:13:32.329257 165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1107 17:13:32.350599 165743 cri.go:87] found id: ""
I1107 17:13:32.350623 165743 logs.go:274] 0 containers: []
W1107 17:13:32.350629 165743 logs.go:276] No container was found matching "storage-provisioner"
I1107 17:13:32.350635 165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1107 17:13:32.350673 165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1107 17:13:32.372494 165743 cri.go:87] found id: ""
I1107 17:13:32.372522 165743 logs.go:274] 0 containers: []
W1107 17:13:32.372532 165743 logs.go:276] No container was found matching "kube-controller-manager"
I1107 17:13:32.372545 165743 logs.go:123] Gathering logs for kubelet ...
I1107 17:13:32.372558 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1107 17:13:32.435840 165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.926231 4309 projected.go:192] Error preparing data for projected volume kube-api-access-l9w87 for pod kube-system/kindnet-fh9w9: failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
W1107 17:13:32.436259 165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.926837 4309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eca84e65-57b5-4cc9-b42a-0f991c91ffe7-kube-api-access-l9w87 podName:eca84e65-57b5-4cc9-b42a-0f991c91ffe7 nodeName:}" failed. No retries permitted until 2022-11-07 17:09:15.926808887 +0000 UTC m=+10.477882581 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l9w87" (UniqueName: "kubernetes.io/projected/eca84e65-57b5-4cc9-b42a-0f991c91ffe7-kube-api-access-l9w87") pod "kindnet-fh9w9" (UID: "eca84e65-57b5-4cc9-b42a-0f991c91ffe7") : failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
W1107 17:13:32.436411 165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: W1107 17:09:13.927004 4309 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
W1107 17:13:32.436578 165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927081 4309 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
W1107 17:13:32.436766 165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927198 4309 projected.go:192] Error preparing data for projected volume kube-api-access-7jl9q for pod kube-system/storage-provisioner: failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
W1107 17:13:32.437177 165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927299 4309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c43d0d64-f743-4627-894e-be6b8af2e64d-kube-api-access-7jl9q podName:c43d0d64-f743-4627-894e-be6b8af2e64d nodeName:}" failed. No retries permitted until 2022-11-07 17:09:15.927284243 +0000 UTC m=+10.478357937 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-7jl9q" (UniqueName: "kubernetes.io/projected/c43d0d64-f743-4627-894e-be6b8af2e64d-kube-api-access-7jl9q") pod "storage-provisioner" (UID: "c43d0d64-f743-4627-894e-be6b8af2e64d") : failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
W1107 17:13:32.437330 165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: W1107 17:09:13.927404 4309 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
W1107 17:13:32.437497 165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927466 4309 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
W1107 17:13:32.437684 165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927560 4309 projected.go:192] Error preparing data for projected volume kube-api-access-6vv4c for pod kube-system/kube-proxy-lv445: failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
W1107 17:13:32.438089 165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927649 4309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fcbfbd08-498e-4a9c-8d36-0d45cbd312bd-kube-api-access-6vv4c podName:fcbfbd08-498e-4a9c-8d36-0d45cbd312bd nodeName:}" failed. No retries permitted until 2022-11-07 17:09:15.927635728 +0000 UTC m=+10.478709423 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6vv4c" (UniqueName: "kubernetes.io/projected/fcbfbd08-498e-4a9c-8d36-0d45cbd312bd-kube-api-access-6vv4c") pod "kube-proxy-lv445" (UID: "fcbfbd08-498e-4a9c-8d36-0d45cbd312bd") : failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
W1107 17:13:32.438269 165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927751 4309 projected.go:192] Error preparing data for projected volume kube-api-access-qmxlx for pod kube-system/coredns-6d4b75cb6d-46n4z: failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
W1107 17:13:32.438700 165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927842 4309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0bb47afc-9c44-48b3-8dd4-966ed2608a7a-kube-api-access-qmxlx podName:0bb47afc-9c44-48b3-8dd4-966ed2608a7a nodeName:}" failed. No retries permitted until 2022-11-07 17:09:15.927829872 +0000 UTC m=+10.478903566 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qmxlx" (UniqueName: "kubernetes.io/projected/0bb47afc-9c44-48b3-8dd4-966ed2608a7a-kube-api-access-qmxlx") pod "coredns-6d4b75cb6d-46n4z" (UID: "0bb47afc-9c44-48b3-8dd4-966ed2608a7a") : failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
W1107 17:13:32.438846 165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: W1107 17:09:13.927954 4309 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
W1107 17:13:32.439007 165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.928028 4309 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
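Every kubelet problem collected above is the same failure mode: the Node authorizer rejects the kubelet's requests ("no relationship found between node 'test-preload-170735' and this object"), which typically happens while a restarted kubelet is still serving pods before the apiserver has re-registered the node. To pull just these denials out of the kubelet journal (a sketch, run inside the node):

    sudo journalctl -u kubelet --no-pager | grep -i forbidden | tail -n 20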
I1107 17:13:32.459618 165743 logs.go:123] Gathering logs for dmesg ...
I1107 17:13:32.459642 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1107 17:13:32.475496 165743 logs.go:123] Gathering logs for describe nodes ...
I1107 17:13:32.475522 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1107 17:13:32.524048 165743 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
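The describe-nodes step fails because nothing is listening on the apiserver port anymore after the failed init. A direct probe (a sketch, run inside the node; -k skips certificate verification against the cluster CA):

    curl -k https://localhost:8443/healthz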
I1107 17:13:32.524077 165743 logs.go:123] Gathering logs for containerd ...
I1107 17:13:32.524091 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1107 17:13:32.579264 165743 logs.go:123] Gathering logs for container status ...
I1107 17:13:32.579299 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W1107 17:13:32.605796 165743 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1021-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W1107 17:13:32.102889 6771 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W1107 17:13:32.605835 165743 out.go:239] *
W1107 17:13:32.605973 165743 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1021-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W1107 17:13:32.102889 6771 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W1107 17:13:32.606006 165743 out.go:239] *
W1107 17:13:32.606836 165743 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1107 17:13:32.608746 165743 out.go:177] X Problems detected in kubelet:
I1107 17:13:32.610170 165743 out.go:177] Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.926231 4309 projected.go:192] Error preparing data for projected volume kube-api-access-l9w87 for pod kube-system/kindnet-fh9w9: failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
I1107 17:13:32.612470 165743 out.go:177] Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.926837 4309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eca84e65-57b5-4cc9-b42a-0f991c91ffe7-kube-api-access-l9w87 podName:eca84e65-57b5-4cc9-b42a-0f991c91ffe7 nodeName:}" failed. No retries permitted until 2022-11-07 17:09:15.926808887 +0000 UTC m=+10.477882581 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l9w87" (UniqueName: "kubernetes.io/projected/eca84e65-57b5-4cc9-b42a-0f991c91ffe7-kube-api-access-l9w87") pod "kindnet-fh9w9" (UID: "eca84e65-57b5-4cc9-b42a-0f991c91ffe7") : failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
I1107 17:13:32.614018 165743 out.go:177] Nov 07 17:09:13 test-preload-170735 kubelet[4309]: W1107 17:09:13.927004 4309 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
I1107 17:13:32.616027 165743 out.go:177]
W1107 17:13:32.617358 165743 out.go:239] X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1021-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W1107 17:13:32.102889 6771 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W1107 17:13:32.617464 165743 out.go:239] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
W1107 17:13:32.617526 165743 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/5484
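Note that lsof's -p flag filters by process ID, not by port, so the suggestion's `lsof -p<port>` will not find the conflicting listener as written; searching by port uses the -i form (a sketch):

    sudo lsof -i :2379
    sudo lsof -i :2380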
I1107 17:13:32.619660 165743 out.go:177]
** /stderr **
preload_test.go:69: out/minikube-linux-amd64 start -p test-preload-170735 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.6 failed: exit status 81
panic.go:522: *** TestPreload FAILED at 2022-11-07 17:13:32.664073651 +0000 UTC m=+1686.958491162
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect test-preload-170735
helpers_test.go:235: (dbg) docker inspect test-preload-170735:
-- stdout --
[
{
"Id": "562352745c30197ec8ca41bd220d69e2934fde42c536a6c6d77373c0daf0d2cb",
"Created": "2022-11-07T17:07:37.332353721Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 162554,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-11-07T17:07:37.793781997Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
"ResolvConfPath": "/var/lib/docker/containers/562352745c30197ec8ca41bd220d69e2934fde42c536a6c6d77373c0daf0d2cb/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/562352745c30197ec8ca41bd220d69e2934fde42c536a6c6d77373c0daf0d2cb/hostname",
"HostsPath": "/var/lib/docker/containers/562352745c30197ec8ca41bd220d69e2934fde42c536a6c6d77373c0daf0d2cb/hosts",
"LogPath": "/var/lib/docker/containers/562352745c30197ec8ca41bd220d69e2934fde42c536a6c6d77373c0daf0d2cb/562352745c30197ec8ca41bd220d69e2934fde42c536a6c6d77373c0daf0d2cb-json.log",
"Name": "/test-preload-170735",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"test-preload-170735:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "test-preload-170735",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/272c116f8b9e09d720cdc22e58042bf497b39a76c82e0a08d90ef1ffec7e6f7d-init/diff:/var/lib/docker/overlay2/50f34786c57872c77d74fc1e1bfc5c830eecdaaa307731f7f0968ecd4a1f1563/diff:/var/lib/docker/overlay2/7bd2077ca57b1a9d268f813d36a75f7979f1fc4acedca337c909926df0984abc/diff:/var/lib/docker/overlay2/fc584b8d731e3e1a78208322d9ad4f5e4ad9c3bcaa0f08927b91ce3c8637e0c1/diff:/var/lib/docker/overlay2/b1015b3e809f7445f186f197e10ccde2f6313a9c6860e2a15469f8efb401040d/diff:/var/lib/docker/overlay2/c333cad43ceb2005c0c4df6e6055a141624b85a82498fdd043cc72ccb83232a2/diff:/var/lib/docker/overlay2/e8adaa498090aa250a4bb91e7b41283b97dd43550202038f2ba75fb6fce1963e/diff:/var/lib/docker/overlay2/21ee34913cc32f41efb30d896d169ee516ce1865cdf9ed62125bad1d7b760ebf/diff:/var/lib/docker/overlay2/1b1e3fc8fc878d0731cfc2e081355a9d88e2832592699aec0d7fdef0b4aa2536/diff:/var/lib/docker/overlay2/4b91e729bf04aac130fb8d8bfcab139c95e0ef3f6a774013de6b68a489234ec6/diff:/var/lib/docker/overlay2/4fa234
40214db584cc2d06610d07177bcb3f52aaa6485fc6d0c5fe8830500eb8/diff:/var/lib/docker/overlay2/16748108f66ccb40a4a3b20805c0085d2865c56f7f76ef79cad24498e9ffe9d0/diff:/var/lib/docker/overlay2/ed8e95539c1661d85da89eceddad9e582c9ea46b80010c6f68d080d92c9d6b5a/diff:/var/lib/docker/overlay2/df5567a2898a9e8a1be97266503eb95798b79e37668e3073e7f439219defa1b1/diff:/var/lib/docker/overlay2/b70d157c56a0610efd610495efa704a0548753e54dc2f98f56c33b18d5bdb831/diff:/var/lib/docker/overlay2/3a1efa8a7fda429b96ee67adce9f25aa586838fff1d0e33a145074eb35f92e3b/diff:/var/lib/docker/overlay2/adec1560668aa1c06d2f672622d778fb7c7a9958814773573f9b9bd167f6c860/diff:/var/lib/docker/overlay2/b092628cb8f256d44c2fbb9ae9bccaf57d2d6209aa4f402d78256949eae7feb3/diff:/var/lib/docker/overlay2/3356cfa5fa7047a97e9c2b7cb8952bdbe042be5633202a2fb86fb78eb24d01c3/diff:/var/lib/docker/overlay2/e2eda1c37c57f4adc2cf7cba48eed6c8ffe3d2f47e31c07d647fd0597cb1aaee/diff:/var/lib/docker/overlay2/0fdab607cc4d78cb0a3fbd3041f4d6f1fabd525b190ca8fe214ce0d708a7f772/diff:/var/lib/d
ocker/overlay2/746235f8e2202d20a55b5a9fea42575d53cbce903cd7196f79b6546eb912216c/diff:/var/lib/docker/overlay2/bb90b859707e89d2d71c36f1d9688d6b09d32b9fce71c1a4caffab9be2bbb188/diff:/var/lib/docker/overlay2/10fdb9cfaf7ec1249107401913d80e6952d57412f21964005f33a1ec0edbc3bc/diff:/var/lib/docker/overlay2/c1af211c834a44cc9932c4e3a12691a9d1d7c2e14e241cb5a8b881d40534523f/diff:/var/lib/docker/overlay2/de7a70af2c1a55113b9be8a92239749d35dd866bda013a8048f5bccbc98a258d/diff:/var/lib/docker/overlay2/638ba6771779e36e94f47227270733bc19e786d6084420c1cb46c8d942883a6b/diff:/var/lib/docker/overlay2/f4e0800cf49a41c3993c1d146cd1613cacaf8996e27b642bc6359f30ae301891/diff:/var/lib/docker/overlay2/0c8275272897551e4e3bd4a403ea631396d4e226e0d1524a973391b15b868f09/diff:/var/lib/docker/overlay2/405eea0895fd24bd6bcbfa316e2f2f55186a3a8c11836a41776b7078210cef3e/diff:/var/lib/docker/overlay2/5344d9cb5a12ef430d7c5246346fdf0be30cf22430cea41ce3eeff0db5b4d629/diff:/var/lib/docker/overlay2/3a1aae2d89cdb6efed9f25c1aa5fc3b09afd34de1dea7ab15bbf250d2c1
ccaeb/diff:/var/lib/docker/overlay2/fe4503be964576b1bd1b38c1789d575ebd1d3a40807fc8fddd0d03689f815101/diff:/var/lib/docker/overlay2/cd964cc10ac76d7d224e0c14361f663890fb1aa42543b9e6aad6231ce574ab75/diff:/var/lib/docker/overlay2/d3b7495eb871dc08a1299ff6623317982ae4fcb245a496232f5ecb3c7db2f65e/diff:/var/lib/docker/overlay2/f47e602141e8a2a0110308ae1e12d31d503b156f1438454b031a4428e38d6fdf/diff:/var/lib/docker/overlay2/2fa5513e215c12fbae0f66df8f9239d68407115fc99d2d61fad469cab8e90074/diff:/var/lib/docker/overlay2/35a81d0664a9558cbb797f91f0936edc4dc40d04124e0e087016a1965853fd34/diff:/var/lib/docker/overlay2/0335b50ae6313640c86195beb2c170e6024ff55e7e7c5d4799d3fb36388be83a/diff:/var/lib/docker/overlay2/4756e235309d1e95924ec8f07ff825ebdcd7384760cb06121fcb6299bbad2e5c/diff:/var/lib/docker/overlay2/b3a9deb3bf75ddb8b41c22ba322da02c3379475903d07dd985bcef4a317a514a/diff:/var/lib/docker/overlay2/2e829bbc0c18a173f30f9904a6e0a3b3dd0b06b9f8e518ddcf6d4b8237876fb8/diff:/var/lib/docker/overlay2/eaf774e8177ba46b1b9f087012edcc4e413aa6
e302e711cb62dae1ca92ac7b5d/diff",
"MergedDir": "/var/lib/docker/overlay2/272c116f8b9e09d720cdc22e58042bf497b39a76c82e0a08d90ef1ffec7e6f7d/merged",
"UpperDir": "/var/lib/docker/overlay2/272c116f8b9e09d720cdc22e58042bf497b39a76c82e0a08d90ef1ffec7e6f7d/diff",
"WorkDir": "/var/lib/docker/overlay2/272c116f8b9e09d720cdc22e58042bf497b39a76c82e0a08d90ef1ffec7e6f7d/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "test-preload-170735",
"Source": "/var/lib/docker/volumes/test-preload-170735/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "test-preload-170735",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "test-preload-170735",
"name.minikube.sigs.k8s.io": "test-preload-170735",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "a77c3bc8f88f44237e7b8dd35cbcb2dd9891949bc305deffc304cde6b3dee027",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49277"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49276"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49273"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49275"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49274"
}
]
},
"SandboxKey": "/var/run/docker/netns/a77c3bc8f88f",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"test-preload-170735": {
"IPAMConfig": {
"IPv4Address": "192.168.67.2"
},
"Links": null,
"Aliases": [
"562352745c30",
"test-preload-170735"
],
"NetworkID": "b8cc33fdda8232591e18678d9318c33cc1cb5258fad05652407c6b9a060581e3",
"EndpointID": "e2be8473b88f2e73d93bb7868f13df004e2b698e4c50f74a2673aba5d2152fed",
"Gateway": "192.168.67.1",
"IPAddress": "192.168.67.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:43:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
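The inspect dump above can be narrowed to the fields that matter for this post-mortem using docker's Go-template formatting, the same mechanism the provisioner uses later in this log for the SSH port. A sketch against the JSON above:
# container state and restart count
docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' test-preload-170735
# host port published for the apiserver (8443/tcp)
docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' test-preload-170735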
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-170735 -n test-preload-170735
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-170735 -n test-preload-170735: exit status 2 (345.659651ms)
-- stdout --
Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
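The check above formats only {{.Host}}; a non-zero exit from minikube status encodes unhealthy component state rather than a failed command, which is why the harness notes it "may be ok". A sketch that queries the other fields of the same status output (field names as shown in minikube's default status view; assumed stable for this version):
out/minikube-linux-amd64 status -p test-preload-170735 --format '{{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'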
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p test-preload-170735 logs -n 25
helpers_test.go:252: TestPreload logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| ssh | multinode-165923 ssh -n | multinode-165923 | jenkins | v1.28.0 | 07 Nov 22 17:01 UTC | 07 Nov 22 17:01 UTC |
| | multinode-165923-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-165923 cp multinode-165923-m03:/home/docker/cp-test.txt | multinode-165923 | jenkins | v1.28.0 | 07 Nov 22 17:01 UTC | 07 Nov 22 17:01 UTC |
| | multinode-165923:/home/docker/cp-test_multinode-165923-m03_multinode-165923.txt | | | | | |
| ssh | multinode-165923 ssh -n | multinode-165923 | jenkins | v1.28.0 | 07 Nov 22 17:01 UTC | 07 Nov 22 17:01 UTC |
| | multinode-165923-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-165923 ssh -n multinode-165923 sudo cat | multinode-165923 | jenkins | v1.28.0 | 07 Nov 22 17:01 UTC | 07 Nov 22 17:01 UTC |
| | /home/docker/cp-test_multinode-165923-m03_multinode-165923.txt | | | | | |
| cp | multinode-165923 cp multinode-165923-m03:/home/docker/cp-test.txt | multinode-165923 | jenkins | v1.28.0 | 07 Nov 22 17:01 UTC | 07 Nov 22 17:01 UTC |
| | multinode-165923-m02:/home/docker/cp-test_multinode-165923-m03_multinode-165923-m02.txt | | | | | |
| ssh | multinode-165923 ssh -n | multinode-165923 | jenkins | v1.28.0 | 07 Nov 22 17:01 UTC | 07 Nov 22 17:01 UTC |
| | multinode-165923-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-165923 ssh -n multinode-165923-m02 sudo cat | multinode-165923 | jenkins | v1.28.0 | 07 Nov 22 17:01 UTC | 07 Nov 22 17:01 UTC |
| | /home/docker/cp-test_multinode-165923-m03_multinode-165923-m02.txt | | | | | |
| node | multinode-165923 node stop m03 | multinode-165923 | jenkins | v1.28.0 | 07 Nov 22 17:01 UTC | 07 Nov 22 17:01 UTC |
| node | multinode-165923 node start | multinode-165923 | jenkins | v1.28.0 | 07 Nov 22 17:01 UTC | 07 Nov 22 17:02 UTC |
| | m03 --alsologtostderr | | | | | |
| node | list -p multinode-165923 | multinode-165923 | jenkins | v1.28.0 | 07 Nov 22 17:02 UTC | |
| stop | -p multinode-165923 | multinode-165923 | jenkins | v1.28.0 | 07 Nov 22 17:02 UTC | 07 Nov 22 17:02 UTC |
| start | -p multinode-165923 | multinode-165923 | jenkins | v1.28.0 | 07 Nov 22 17:02 UTC | 07 Nov 22 17:04 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| node | list -p multinode-165923 | multinode-165923 | jenkins | v1.28.0 | 07 Nov 22 17:04 UTC | |
| node | multinode-165923 node delete | multinode-165923 | jenkins | v1.28.0 | 07 Nov 22 17:04 UTC | 07 Nov 22 17:04 UTC |
| | m03 | | | | | |
| stop | multinode-165923 stop | multinode-165923 | jenkins | v1.28.0 | 07 Nov 22 17:04 UTC | 07 Nov 22 17:05 UTC |
| start | -p multinode-165923 | multinode-165923 | jenkins | v1.28.0 | 07 Nov 22 17:05 UTC | 07 Nov 22 17:07 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| node | list -p multinode-165923 | multinode-165923 | jenkins | v1.28.0 | 07 Nov 22 17:07 UTC | |
| start | -p multinode-165923-m02 | multinode-165923-m02 | jenkins | v1.28.0 | 07 Nov 22 17:07 UTC | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p multinode-165923-m03 | multinode-165923-m03 | jenkins | v1.28.0 | 07 Nov 22 17:07 UTC | 07 Nov 22 17:07 UTC |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| node | add -p multinode-165923 | multinode-165923 | jenkins | v1.28.0 | 07 Nov 22 17:07 UTC | |
| delete | -p multinode-165923-m03 | multinode-165923-m03 | jenkins | v1.28.0 | 07 Nov 22 17:07 UTC | 07 Nov 22 17:07 UTC |
| delete | -p multinode-165923 | multinode-165923 | jenkins | v1.28.0 | 07 Nov 22 17:07 UTC | 07 Nov 22 17:07 UTC |
| start | -p test-preload-170735 | test-preload-170735 | jenkins | v1.28.0 | 07 Nov 22 17:07 UTC | 07 Nov 22 17:08 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.4 | | | | | |
| ssh | -p test-preload-170735 | test-preload-170735 | jenkins | v1.28.0 | 07 Nov 22 17:08 UTC | 07 Nov 22 17:08 UTC |
| | -- sudo crictl pull | | | | | |
| | gcr.io/k8s-minikube/busybox | | | | | |
| start | -p test-preload-170735 | test-preload-170735 | jenkins | v1.28.0 | 07 Nov 22 17:08 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --wait=true --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.6 | | | | | |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/11/07 17:08:27
Running on machine: ubuntu-20-agent-7
Binary: Built with gc go1.19.2 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1107 17:08:27.904911 165743 out.go:296] Setting OutFile to fd 1 ...
I1107 17:08:27.905045 165743 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 17:08:27.905060 165743 out.go:309] Setting ErrFile to fd 2...
I1107 17:08:27.905068 165743 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 17:08:27.905197 165743 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15310-44720/.minikube/bin
I1107 17:08:27.905863 165743 out.go:303] Setting JSON to false
I1107 17:08:27.907218 165743 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10261,"bootTime":1667830647,"procs":524,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1107 17:08:27.907299 165743 start.go:126] virtualization: kvm guest
I1107 17:08:27.910260 165743 out.go:177] * [test-preload-170735] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
I1107 17:08:27.912717 165743 out.go:177] - MINIKUBE_LOCATION=15310
I1107 17:08:27.912644 165743 notify.go:220] Checking for updates...
I1107 17:08:27.914611 165743 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1107 17:08:27.916178 165743 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15310-44720/kubeconfig
I1107 17:08:27.917748 165743 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-44720/.minikube
I1107 17:08:27.919131 165743 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I1107 17:08:27.921065 165743 config.go:180] Loaded profile config "test-preload-170735": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I1107 17:08:27.923047 165743 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
I1107 17:08:27.924546 165743 driver.go:365] Setting default libvirt URI to qemu:///system
I1107 17:08:27.952793 165743 docker.go:137] docker version: linux-20.10.21
I1107 17:08:27.952897 165743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1107 17:08:28.051499 165743 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:39 SystemTime:2022-11-07 17:08:27.973134397 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1107 17:08:28.051613 165743 docker.go:254] overlay module found
I1107 17:08:28.054907 165743 out.go:177] * Using the docker driver based on existing profile
I1107 17:08:28.056422 165743 start.go:282] selected driver: docker
I1107 17:08:28.056442 165743 start.go:808] validating driver "docker" against &{Name:test-preload-170735 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-170735 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1107 17:08:28.056553 165743 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1107 17:08:28.057351 165743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1107 17:08:28.151882 165743 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:39 SystemTime:2022-11-07 17:08:28.076276154 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1107 17:08:28.152201 165743 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1107 17:08:28.152232 165743 cni.go:95] Creating CNI manager for ""
I1107 17:08:28.152241 165743 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I1107 17:08:28.152260 165743 start_flags.go:317] config:
{Name:test-preload-170735 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-170735 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1107 17:08:28.155619 165743 out.go:177] * Starting control plane node test-preload-170735 in cluster test-preload-170735
I1107 17:08:28.156954 165743 cache.go:120] Beginning downloading kic base image for docker with containerd
I1107 17:08:28.158499 165743 out.go:177] * Pulling base image ...
I1107 17:08:28.159890 165743 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I1107 17:08:28.159983 165743 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
I1107 17:08:28.181208 165743 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
I1107 17:08:28.181243 165743 cache.go:57] Caching tarball of preloaded images
I1107 17:08:28.181535 165743 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I1107 17:08:28.183696 165743 out.go:177] * Downloading Kubernetes v1.24.6 preload ...
I1107 17:08:28.182675 165743 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
I1107 17:08:28.183727 165743 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
I1107 17:08:28.185282 165743 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I1107 17:08:28.211318 165743 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4?checksum=md5:0de094b674a9198bc47721c3b23603d5 -> /home/jenkins/minikube-integration/15310-44720/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
I1107 17:08:32.100806 165743 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I1107 17:08:32.100913 165743 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15310-44720/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I1107 17:08:33.024863 165743 cache.go:60] Finished verifying existence of preloaded tar for v1.24.6 on containerd
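The download above passes the expected digest as a ?checksum=md5:... query parameter and verifies it after saving. A hedged manual equivalent, reusing the URL and checksum exactly as logged:
URL='https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4'
curl -fSL -o preloaded.tar.lz4 "$URL"
# digest taken from the checksum parameter in the log line above
echo '0de094b674a9198bc47721c3b23603d5  preloaded.tar.lz4' | md5sum -c -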
I1107 17:08:33.025006 165743 profile.go:148] Saving config to /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/config.json ...
I1107 17:08:33.025200 165743 cache.go:208] Successfully downloaded all kic artifacts
I1107 17:08:33.025245 165743 start.go:364] acquiring machines lock for test-preload-170735: {Name:mkeed53a7896dfd155258ca3d33f2ba7f27b6e3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1107 17:08:33.025355 165743 start.go:368] acquired machines lock for "test-preload-170735" in 83.257µs
I1107 17:08:33.025378 165743 start.go:96] Skipping create...Using existing machine configuration
I1107 17:08:33.025389 165743 fix.go:55] fixHost starting:
I1107 17:08:33.025604 165743 cli_runner.go:164] Run: docker container inspect test-preload-170735 --format={{.State.Status}}
I1107 17:08:33.047785 165743 fix.go:103] recreateIfNeeded on test-preload-170735: state=Running err=<nil>
W1107 17:08:33.047814 165743 fix.go:129] unexpected machine state, will restart: <nil>
I1107 17:08:33.051368 165743 out.go:177] * Updating the running docker "test-preload-170735" container ...
I1107 17:08:33.053014 165743 machine.go:88] provisioning docker machine ...
I1107 17:08:33.053055 165743 ubuntu.go:169] provisioning hostname "test-preload-170735"
I1107 17:08:33.053104 165743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-170735
I1107 17:08:33.073975 165743 main.go:134] libmachine: Using SSH client type: native
I1107 17:08:33.074165 165743 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49277 <nil> <nil>}
I1107 17:08:33.074183 165743 main.go:134] libmachine: About to run SSH command:
sudo hostname test-preload-170735 && echo "test-preload-170735" | sudo tee /etc/hostname
I1107 17:08:33.197853 165743 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-170735
I1107 17:08:33.197933 165743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-170735
I1107 17:08:33.220254 165743 main.go:134] libmachine: Using SSH client type: native
I1107 17:08:33.220408 165743 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49277 <nil> <nil>}
I1107 17:08:33.220428 165743 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\stest-preload-170735' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-170735/g' /etc/hosts;
else
echo '127.0.1.1 test-preload-170735' | sudo tee -a /etc/hosts;
fi
fi
I1107 17:08:33.333808 165743 main.go:134] libmachine: SSH cmd err, output: <nil>:
I1107 17:08:33.333842 165743 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15310-44720/.minikube CaCertPath:/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15310-44720/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15310-44720/.minikube}
I1107 17:08:33.333861 165743 ubuntu.go:177] setting up certificates
I1107 17:08:33.333869 165743 provision.go:83] configureAuth start
I1107 17:08:33.333914 165743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-170735
I1107 17:08:33.355318 165743 provision.go:138] copyHostCerts
I1107 17:08:33.355367 165743 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-44720/.minikube/ca.pem, removing ...
I1107 17:08:33.355376 165743 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-44720/.minikube/ca.pem
I1107 17:08:33.355441 165743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15310-44720/.minikube/ca.pem (1082 bytes)
I1107 17:08:33.355534 165743 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-44720/.minikube/cert.pem, removing ...
I1107 17:08:33.355545 165743 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-44720/.minikube/cert.pem
I1107 17:08:33.355581 165743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15310-44720/.minikube/cert.pem (1123 bytes)
I1107 17:08:33.355641 165743 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-44720/.minikube/key.pem, removing ...
I1107 17:08:33.355651 165743 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-44720/.minikube/key.pem
I1107 17:08:33.355689 165743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15310-44720/.minikube/key.pem (1679 bytes)
I1107 17:08:33.355768 165743 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15310-44720/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca-key.pem org=jenkins.test-preload-170735 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-170735]
I1107 17:08:33.436719 165743 provision.go:172] copyRemoteCerts
I1107 17:08:33.436773 165743 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1107 17:08:33.436826 165743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-170735
I1107 17:08:33.458416 165743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/test-preload-170735/id_rsa Username:docker}
I1107 17:08:33.541280 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1107 17:08:33.558205 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I1107 17:08:33.574372 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1107 17:08:33.590572 165743 provision.go:86] duration metric: configureAuth took 256.685343ms
I1107 17:08:33.590604 165743 ubuntu.go:193] setting minikube options for container-runtime
I1107 17:08:33.590765 165743 config.go:180] Loaded profile config "test-preload-170735": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.6
I1107 17:08:33.590782 165743 machine.go:91] provisioned docker machine in 537.75012ms
I1107 17:08:33.590791 165743 start.go:300] post-start starting for "test-preload-170735" (driver="docker")
I1107 17:08:33.590802 165743 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1107 17:08:33.590840 165743 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1107 17:08:33.590874 165743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-170735
I1107 17:08:33.613972 165743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/test-preload-170735/id_rsa Username:docker}
I1107 17:08:33.697134 165743 ssh_runner.go:195] Run: cat /etc/os-release
I1107 17:08:33.699654 165743 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1107 17:08:33.699688 165743 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1107 17:08:33.699706 165743 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1107 17:08:33.699715 165743 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I1107 17:08:33.699735 165743 filesync.go:126] Scanning /home/jenkins/minikube-integration/15310-44720/.minikube/addons for local assets ...
I1107 17:08:33.699785 165743 filesync.go:126] Scanning /home/jenkins/minikube-integration/15310-44720/.minikube/files for local assets ...
I1107 17:08:33.699859 165743 filesync.go:149] local asset: /home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/511762.pem -> 511762.pem in /etc/ssl/certs
I1107 17:08:33.699972 165743 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1107 17:08:33.706647 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/511762.pem --> /etc/ssl/certs/511762.pem (1708 bytes)
I1107 17:08:33.723587 165743 start.go:303] post-start completed in 132.77869ms
I1107 17:08:33.723655 165743 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1107 17:08:33.723701 165743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-170735
I1107 17:08:33.745091 165743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/test-preload-170735/id_rsa Username:docker}
I1107 17:08:33.826766 165743 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1107 17:08:33.830752 165743 fix.go:57] fixHost completed within 805.356487ms
I1107 17:08:33.830779 165743 start.go:83] releasing machines lock for "test-preload-170735", held for 805.406949ms
I1107 17:08:33.830865 165743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-170735
I1107 17:08:33.851188 165743 ssh_runner.go:195] Run: systemctl --version
I1107 17:08:33.851233 165743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-170735
I1107 17:08:33.851246 165743 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I1107 17:08:33.851299 165743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-170735
I1107 17:08:33.874050 165743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/test-preload-170735/id_rsa Username:docker}
I1107 17:08:33.874539 165743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15310-44720/.minikube/machines/test-preload-170735/id_rsa Username:docker}
I1107 17:08:33.970640 165743 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1107 17:08:33.980208 165743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1107 17:08:33.989283 165743 docker.go:189] disabling docker service ...
I1107 17:08:33.989328 165743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1107 17:08:33.998251 165743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1107 17:08:34.006544 165743 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1107 17:08:34.105872 165743 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1107 17:08:34.199735 165743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1107 17:08:34.208838 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1107 17:08:34.221138 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
I1107 17:08:34.228758 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
I1107 17:08:34.237433 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
I1107 17:08:34.245113 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
I1107 17:08:34.252514 165743 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1107 17:08:34.258488 165743 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1107 17:08:34.264983 165743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1107 17:08:34.355600 165743 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1107 17:08:34.426498 165743 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
I1107 17:08:34.426584 165743 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1107 17:08:34.431077 165743 start.go:472] Will wait 60s for crictl version
I1107 17:08:34.431141 165743 ssh_runner.go:195] Run: sudo crictl version
I1107 17:08:34.463332 165743 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
stdout:
stderr:
time="2022-11-07T17:08:34Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
I1107 17:08:45.511931 165743 ssh_runner.go:195] Run: sudo crictl version
I1107 17:08:45.534402 165743 start.go:481] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.6.9
RuntimeApiVersion: v1alpha2
I1107 17:08:45.534456 165743 ssh_runner.go:195] Run: containerd --version
I1107 17:08:45.557129 165743 ssh_runner.go:195] Run: containerd --version
I1107 17:08:45.581034 165743 out.go:177] * Preparing Kubernetes v1.24.6 on containerd 1.6.9 ...
I1107 17:08:45.583252 165743 cli_runner.go:164] Run: docker network inspect test-preload-170735 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1107 17:08:45.604171 165743 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
I1107 17:08:45.607584 165743 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I1107 17:08:45.607660 165743 ssh_runner.go:195] Run: sudo crictl images --output json
I1107 17:08:45.629696 165743 containerd.go:549] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.6". assuming images are not preloaded.
I1107 17:08:45.629765 165743 ssh_runner.go:195] Run: which lz4
I1107 17:08:45.632520 165743 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I1107 17:08:45.635397 165743 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/preloaded.tar.lz4': No such file or directory
I1107 17:08:45.635419 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458739102 bytes)
I1107 17:08:46.608662 165743 containerd.go:496] Took 0.976169 seconds to copy over tarball
I1107 17:08:46.608757 165743 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I1107 17:08:49.268239 165743 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.659458437s)
I1107 17:08:49.268269 165743 containerd.go:503] Took 2.659548 seconds to extract the tarball
I1107 17:08:49.268278 165743 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1107 17:08:49.290385 165743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1107 17:08:49.394503 165743 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1107 17:08:49.483535 165743 ssh_runner.go:195] Run: sudo crictl images --output json
I1107 17:08:49.508155 165743 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.24.6 k8s.gcr.io/kube-controller-manager:v1.24.6 k8s.gcr.io/kube-scheduler:v1.24.6 k8s.gcr.io/kube-proxy:v1.24.6 k8s.gcr.io/pause:3.7 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
I1107 17:08:49.508249 165743 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I1107 17:08:49.508261 165743 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.24.6
I1107 17:08:49.508303 165743 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.3-0
I1107 17:08:49.508328 165743 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.24.6
I1107 17:08:49.508333 165743 image.go:134] retrieving image: k8s.gcr.io/pause:3.7
I1107 17:08:49.508363 165743 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.24.6
I1107 17:08:49.508413 165743 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.24.6
I1107 17:08:49.508304 165743 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
I1107 17:08:49.509646 165743 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.24.6: Error: No such image: k8s.gcr.io/kube-scheduler:v1.24.6
I1107 17:08:49.509674 165743 image.go:177] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.6: Error: No such image: k8s.gcr.io/coredns/coredns:v1.8.6
I1107 17:08:49.509722 165743 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.24.6: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.24.6
I1107 17:08:49.509649 165743 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I1107 17:08:49.509638 165743 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.24.6: Error: No such image: k8s.gcr.io/kube-apiserver:v1.24.6
I1107 17:08:49.509650 165743 image.go:177] daemon lookup for k8s.gcr.io/pause:3.7: Error: No such image: k8s.gcr.io/pause:3.7
I1107 17:08:49.509774 165743 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.5.3-0: Error: No such image: k8s.gcr.io/etcd:3.5.3-0
I1107 17:08:49.509643 165743 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.24.6: Error: No such image: k8s.gcr.io/kube-proxy:v1.24.6
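The daemon-lookup errors above are expected on this host: LoadImages checks the local docker daemon first and, when the images are absent there, falls back to minikube's file cache below. A sketch of listing what the runtime itself holds at this point, assuming jq is installed:
sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort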
I1107 17:08:49.721200 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.7"
I1107 17:08:49.721693 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.3-0"
I1107 17:08:49.738860 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
I1107 17:08:49.739213 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.6"
I1107 17:08:49.747795 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
I1107 17:08:49.758483 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.6"
I1107 17:08:49.761130 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.6"
I1107 17:08:49.977049 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.6"
I1107 17:08:50.610195 165743 cache_images.go:116] "k8s.gcr.io/pause:3.7" needs transfer: "k8s.gcr.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
I1107 17:08:50.610249 165743 cri.go:216] Removing image: k8s.gcr.io/pause:3.7
I1107 17:08:50.610292 165743 ssh_runner.go:195] Run: which crictl
I1107 17:08:50.614352 165743 cache_images.go:116] "k8s.gcr.io/etcd:3.5.3-0" needs transfer: "k8s.gcr.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
I1107 17:08:50.614406 165743 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.3-0
I1107 17:08:50.614453 165743 ssh_runner.go:195] Run: which crictl
I1107 17:08:50.705332 165743 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
I1107 17:08:50.705390 165743 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
I1107 17:08:50.705338 165743 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.24.6" needs transfer: "k8s.gcr.io/kube-proxy:v1.24.6" does not exist at hash "0bb39497ab33bb5f8aaff88ced53a5fcd360fcf5da647609619d4f5c8f1483d2" in container runtime
I1107 17:08:50.705434 165743 ssh_runner.go:195] Run: which crictl
I1107 17:08:50.705452 165743 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.24.6
I1107 17:08:50.705619 165743 ssh_runner.go:195] Run: which crictl
I1107 17:08:50.717541 165743 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
I1107 17:08:50.717591 165743 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I1107 17:08:50.717638 165743 ssh_runner.go:195] Run: which crictl
I1107 17:08:50.719439 165743 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.24.6" needs transfer: "k8s.gcr.io/kube-apiserver:v1.24.6" does not exist at hash "860f263331c9513ddab44d4d8a9a4a7304313b3aa0776decc1d7fc6acdd69ab0" in container runtime
I1107 17:08:50.719499 165743 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.24.6
I1107 17:08:50.719544 165743 ssh_runner.go:195] Run: which crictl
I1107 17:08:50.719689 165743 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.24.6" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.24.6" does not exist at hash "c6c20157a42337ecb7675be59e1dc34bc5a91288c7eeac1e30ec97767a9055eb" in container runtime
I1107 17:08:50.719723 165743 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.24.6
I1107 17:08:50.719758 165743 ssh_runner.go:195] Run: which crictl
I1107 17:08:50.814270 165743 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.24.6" needs transfer: "k8s.gcr.io/kube-scheduler:v1.24.6" does not exist at hash "c786c777a4e1c21907e77042428837645fa382d3bd14925cf78f0d163d6d332e" in container runtime
I1107 17:08:50.814353 165743 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.24.6
I1107 17:08:50.814361 165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7
I1107 17:08:50.814382 165743 ssh_runner.go:195] Run: which crictl
I1107 17:08:50.814394 165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
I1107 17:08:50.814410 165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6
I1107 17:08:50.814414 165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1107 17:08:50.814427 165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.6
I1107 17:08:50.814384 165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0
I1107 17:08:50.814449 165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.6
I1107 17:08:52.582624 165743 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6: (1.768192619s)
I1107 17:08:52.582662 165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
I1107 17:08:52.582681 165743 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0: (1.768236997s)
I1107 17:08:52.582691 165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0
I1107 17:08:52.582637 165743 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.768194557s)
I1107 17:08:52.582747  165743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
I1107 17:08:52.582772 165743 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6: (1.768339669s)
I1107 17:08:52.582798 165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.6
I1107 17:08:52.582748 165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
I1107 17:08:52.582749  165743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
I1107 17:08:52.582829 165743 ssh_runner.go:235] Completed: which crictl: (1.768411501s)
I1107 17:08:52.582855  165743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
I1107 17:08:52.582878 165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.6
I1107 17:08:52.585359 165743 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.6: (1.770910623s)
I1107 17:08:52.585380 165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.6
I1107 17:08:52.585416 165743 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7: (1.771036539s)
I1107 17:08:52.585438 165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7
I1107 17:08:52.585502  165743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
I1107 17:08:52.585583 165743 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.6: (1.771118502s)
I1107 17:08:52.585599 165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.6
I1107 17:08:52.587242 165743 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
I1107 17:08:52.587261 165743 containerd.go:233] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
I1107 17:08:52.587294 165743 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
I1107 17:08:52.676919 165743 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
I1107 17:08:52.677014 165743 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.6
I1107 17:08:52.677049 165743 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
I1107 17:08:52.677110 165743 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
I1107 17:09:00.039059 165743 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0: (7.451733367s)
I1107 17:09:00.039096 165743 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 from cache
I1107 17:09:00.039139 165743 containerd.go:233] Loading image: /var/lib/minikube/images/coredns_v1.8.6
I1107 17:09:00.039203 165743 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
I1107 17:09:01.824108 165743 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6: (1.784848281s)
I1107 17:09:01.824150 165743 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache
I1107 17:09:01.824181 165743 containerd.go:233] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I1107 17:09:01.824223 165743 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
I1107 17:09:02.321028 165743 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I1107 17:09:02.321067 165743 containerd.go:233] Loading image: /var/lib/minikube/images/pause_3.7
I1107 17:09:02.321122 165743 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7
I1107 17:09:02.521066 165743 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 from cache
I1107 17:09:02.521129 165743 cache_images.go:92] LoadImages completed in 13.012944956s
W1107 17:09:02.521265 165743 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/15310-44720/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.6: no such file or directory
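The ctr import runs above load the cached tarballs into containerd's k8s.io namespace, the one the CRI serves images from; the warning fires because the v1.24.6 kube-proxy tarball does not exist in the host-side cache directory, so that one image cannot be preloaded and has to be pulled instead. The equivalent by hand, assuming the tarball path from this log:

  # Sketch: import a cached image tarball into the namespace the kubelet actually uses,
  # then confirm containerd can see it.
  sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
  sudo ctr -n=k8s.io images ls | grep etcd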
I1107 17:09:02.521313 165743 ssh_runner.go:195] Run: sudo crictl info
I1107 17:09:02.549803 165743 cni.go:95] Creating CNI manager for ""
I1107 17:09:02.549843 165743 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I1107 17:09:02.549862 165743 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1107 17:09:02.549885  165743 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-170735 NodeName:test-preload-170735 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
I1107 17:09:02.550126 165743 kubeadm.go:161] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.67.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "test-preload-170735"
kubeletExtraArgs:
node-ip: 192.168.67.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.6
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
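The block above is the complete kubeadm config minikube renders: four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. It is written to /var/tmp/minikube/kubeadm.yaml.new first and only promoted to kubeadm.yaml after the drift check further down. A sketch for inspecting the rendered file on the node (profile name from this log):

  # Sketch: view the config kubeadm will actually consume
  minikube ssh -p test-preload-170735 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new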
I1107 17:09:02.550287 165743 kubeadm.go:962] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-170735 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.24.6 ClusterName:test-preload-170735 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
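The systemd drop-in above pins the version-matched kubelet binary under /var/lib/minikube/binaries/v1.24.6 and points both runtime endpoints at the containerd socket; it is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf just below. A sketch to confirm what systemd will actually execute:

  # Sketch: show the effective kubelet unit, including minikube's drop-in
  minikube ssh -p test-preload-170735 -- sudo systemctl cat kubelet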
I1107 17:09:02.550387 165743 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.6
I1107 17:09:02.558461 165743 binaries.go:44] Found k8s binaries, skipping transfer
I1107 17:09:02.558534 165743 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1107 17:09:02.609209 165743 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (512 bytes)
I1107 17:09:02.622855 165743 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1107 17:09:02.636362 165743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
I1107 17:09:02.650109 165743 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I1107 17:09:02.653949 165743 certs.go:54] Setting up /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735 for IP: 192.168.67.2
I1107 17:09:02.654100 165743 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15310-44720/.minikube/ca.key
I1107 17:09:02.654166 165743 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15310-44720/.minikube/proxy-client-ca.key
I1107 17:09:02.654255 165743 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/client.key
I1107 17:09:02.654354 165743 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/apiserver.key.c7fa3a9e
I1107 17:09:02.654418 165743 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/proxy-client.key
I1107 17:09:02.654554 165743 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/51176.pem (1338 bytes)
W1107 17:09:02.654595 165743 certs.go:384] ignoring /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/51176_empty.pem, impossibly tiny 0 bytes
I1107 17:09:02.654613 165743 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca-key.pem (1679 bytes)
I1107 17:09:02.654657 165743 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/ca.pem (1082 bytes)
I1107 17:09:02.654702 165743 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/cert.pem (1123 bytes)
I1107 17:09:02.654738 165743 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/certs/home/jenkins/minikube-integration/15310-44720/.minikube/certs/key.pem (1679 bytes)
I1107 17:09:02.654791 165743 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/511762.pem (1708 bytes)
I1107 17:09:02.655574 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1107 17:09:02.703678 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1107 17:09:02.723409 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1107 17:09:02.742737 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1107 17:09:02.763001 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1107 17:09:02.818366 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1107 17:09:02.839767 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1107 17:09:02.861717 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1107 17:09:02.910886 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/certs/51176.pem --> /usr/share/ca-certificates/51176.pem (1338 bytes)
I1107 17:09:02.931102 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/files/etc/ssl/certs/511762.pem --> /usr/share/ca-certificates/511762.pem (1708 bytes)
I1107 17:09:02.951804 165743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-44720/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1107 17:09:03.011717 165743 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1107 17:09:03.027317 165743 ssh_runner.go:195] Run: openssl version
I1107 17:09:03.032867 165743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1107 17:09:03.041130 165743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1107 17:09:03.044672 165743 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov 7 16:46 /usr/share/ca-certificates/minikubeCA.pem
I1107 17:09:03.044721 165743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1107 17:09:03.050588 165743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1107 17:09:03.105632 165743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/51176.pem && ln -fs /usr/share/ca-certificates/51176.pem /etc/ssl/certs/51176.pem"
I1107 17:09:03.114215 165743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/51176.pem
I1107 17:09:03.117586 165743 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov 7 16:50 /usr/share/ca-certificates/51176.pem
I1107 17:09:03.117644 165743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/51176.pem
I1107 17:09:03.123353 165743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/51176.pem /etc/ssl/certs/51391683.0"
I1107 17:09:03.131017 165743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/511762.pem && ln -fs /usr/share/ca-certificates/511762.pem /etc/ssl/certs/511762.pem"
I1107 17:09:03.139872 165743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/511762.pem
I1107 17:09:03.143694 165743 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov 7 16:50 /usr/share/ca-certificates/511762.pem
I1107 17:09:03.143738 165743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/511762.pem
I1107 17:09:03.149761 165743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/511762.pem /etc/ssl/certs/3ec20f2e.0"
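The openssl runs above implement the standard c_rehash-style trust setup: each certificate is placed in /usr/share/ca-certificates, its subject hash is computed, and a <hash>.0 symlink is created in /etc/ssl/certs so OpenSSL-based clients can locate the CA. A sketch of the derivation (the b5213941 value appears in the log above):

  # Sketch: how the <hash>.0 symlink names are derived
  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0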
I1107 17:09:03.209904  165743 kubeadm.go:396] StartCluster: {Name:test-preload-170735 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-170735 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1107 17:09:03.210035 165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1107 17:09:03.210092 165743 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1107 17:09:03.240135 165743 cri.go:87] found id: "bbc8111955475e273c589af8ebe48cc22947c192b9004953ca28f3abd9af9206"
I1107 17:09:03.240172 165743 cri.go:87] found id: "0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834"
I1107 17:09:03.240181 165743 cri.go:87] found id: ""
I1107 17:09:03.240225 165743 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I1107 17:09:03.327373 165743 cri.go:114] JSON = [{"ociVersion":"1.0.2-dev","id":"0314116d648233d6c1e60ed5a556a815105434479c9a17285a7cd8dc23953bc5","pid":1641,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0314116d648233d6c1e60ed5a556a815105434479c9a17285a7cd8dc23953bc5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0314116d648233d6c1e60ed5a556a815105434479c9a17285a7cd8dc23953bc5/rootfs","created":"2022-11-07T17:07:57.155832841Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/etcd:3.5.3-0","io.kubernetes.cri.sandbox-id":"705f6c5ec34a5c35201b86083eae5b20aa3092c970306581dbf6500d08277f86","io.kubernetes.cri.sandbox-name":"etcd-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0db62b45b89de774d85d268732de085fe12b9045e1c19792e1a8a7762a41a5aa","pid":3510,"status":"running","bundle":"/ru
n/containerd/io.containerd.runtime.v2.task/k8s.io/0db62b45b89de774d85d268732de085fe12b9045e1c19792e1a8a7762a41a5aa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0db62b45b89de774d85d268732de085fe12b9045e1c19792e1a8a7762a41a5aa/rootfs","created":"2022-11-07T17:08:53.110308717Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"0db62b45b89de774d85d268732de085fe12b9045e1c19792e1a8a7762a41a5aa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-lv445_fcbfbd08-498e-4a9c-8d36-0d45cbd312bd","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-lv445","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834","pid":3658,"status":"running","bundle":"/run/containerd/
io.containerd.runtime.v2.task/k8s.io/0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834/rootfs","created":"2022-11-07T17:08:54.456156833Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"f82c54e5c1fb4c8247a99e96a8cf288d1c50b27e3b90db040e3d9988132681f6","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"250fd604c9fb7454383acc4ff70415d383a9cf0481b9200f9670707b2e744be4","pid":2180,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/250fd604c9fb7454383acc4ff70415d383a9cf0481b9200f9670707b2e744be4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/250fd
604c9fb7454383acc4ff70415d383a9cf0481b9200f9670707b2e744be4/rootfs","created":"2022-11-07T17:08:16.602156421Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"250fd604c9fb7454383acc4ff70415d383a9cf0481b9200f9670707b2e744be4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-fh9w9_eca84e65-57b5-4cc9-b42a-0f991c91ffe7","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-fh9w9","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2d4d536c9a0a40d49c0246daa72b6615857bf6fe87f3d15e95a21a7878e5101d","pid":3521,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d4d536c9a0a40d49c0246daa72b6615857bf6fe87f3d15e95a21a7878e5101d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d4d536c9a0
a40d49c0246daa72b6615857bf6fe87f3d15e95a21a7878e5101d/rootfs","created":"2022-11-07T17:08:53.110915142Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"2d4d536c9a0a40d49c0246daa72b6615857bf6fe87f3d15e95a21a7878e5101d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-170735_11f8c11ccd07f3d1eb49f811a3342256","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"37b02358b7bade85f9ecdfb958e54a66ddbeda36fd5f7eaf12e0bdd9398d5b95","pid":1505,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/37b02358b7bade85f9ecdfb958e54a66ddbeda36fd5f7eaf12e0bdd9398d5b95","rootfs":"/run/containerd/io.containerd.runtime.v2.ta
sk/k8s.io/37b02358b7bade85f9ecdfb958e54a66ddbeda36fd5f7eaf12e0bdd9398d5b95/rootfs","created":"2022-11-07T17:07:56.942370634Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"37b02358b7bade85f9ecdfb958e54a66ddbeda36fd5f7eaf12e0bdd9398d5b95","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-170735_11f8c11ccd07f3d1eb49f811a3342256","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3feddb0dbdb52435facf4a9e8b5290241f16d8c1a930b0d6090df45977832049","pid":3522,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3feddb0dbdb52435facf4a9e8b5290241f16d8c1a930b0d6090df45977832049","rootfs":"/run/containerd/io.con
tainerd.runtime.v2.task/k8s.io/3feddb0dbdb52435facf4a9e8b5290241f16d8c1a930b0d6090df45977832049/rootfs","created":"2022-11-07T17:08:53.027578577Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"3feddb0dbdb52435facf4a9e8b5290241f16d8c1a930b0d6090df45977832049","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-170735_809d9df5626cf37e910052830f1a68d3","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"415576cdc8f40f5fc3f6a7438ecb0ffb290f93f316f6b054f7a0f5caba68a623","pid":2181,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/415576cdc8f40f5fc3f6a7438ecb0ffb290f93f316f6b054f7a0f5caba6
8a623","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/415576cdc8f40f5fc3f6a7438ecb0ffb290f93f316f6b054f7a0f5caba68a623/rootfs","created":"2022-11-07T17:08:16.461925695Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"415576cdc8f40f5fc3f6a7438ecb0ffb290f93f316f6b054f7a0f5caba68a623","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-lv445_fcbfbd08-498e-4a9c-8d36-0d45cbd312bd","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-lv445","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"46a2f3bebabe1b18bc1bb0a2815efd01f85119114dd473e67a6ef5ed94353067","pid":2431,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46a2f3bebabe1b18bc1bb0a2815efd01f85119114dd473e67a6ef5ed94353067","rootf
s":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46a2f3bebabe1b18bc1bb0a2815efd01f85119114dd473e67a6ef5ed94353067/rootfs","created":"2022-11-07T17:08:19.802116354Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20221004-44d545d1","io.kubernetes.cri.sandbox-id":"250fd604c9fb7454383acc4ff70415d383a9cf0481b9200f9670707b2e744be4","io.kubernetes.cri.sandbox-name":"kindnet-fh9w9","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5bee5419fd26f0844e516ee32486faead6e58bf4501faaf52d7a05e85ca46114","pid":2624,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5bee5419fd26f0844e516ee32486faead6e58bf4501faaf52d7a05e85ca46114","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5bee5419fd26f0844e516ee32486faead6e58bf4501faaf52d7a05e85ca46114/rootfs","created":"2022-11-07T17:08:24.414118976Z","annotati
ons":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"7c5dde526df8a7e840df241f81375cff02de464862ccf3844a2728ca17764c90","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5e48addfe561771f69f67c220475d6917957100e69095105320537af7d0b949e","pid":3576,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e48addfe561771f69f67c220475d6917957100e69095105320537af7d0b949e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e48addfe561771f69f67c220475d6917957100e69095105320537af7d0b949e/rootfs","created":"2022-11-07T17:08:53.22282877Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-
shares":"102","io.kubernetes.cri.sandbox-id":"5e48addfe561771f69f67c220475d6917957100e69095105320537af7d0b949e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-46n4z_0bb47afc-9c44-48b3-8dd4-966ed2608a7a","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-46n4z","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5ec95a28b0b3d4879dcc46fb1204c97678b5dcc9326ba25e57ff05480a153e83","pid":3544,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ec95a28b0b3d4879dcc46fb1204c97678b5dcc9326ba25e57ff05480a153e83","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ec95a28b0b3d4879dcc46fb1204c97678b5dcc9326ba25e57ff05480a153e83/rootfs","created":"2022-11-07T17:08:53.114873995Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri
.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"5ec95a28b0b3d4879dcc46fb1204c97678b5dcc9326ba25e57ff05480a153e83","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-170735_62ea0ae7f0dd287c41e3fc4d83f43bcc","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"705f6c5ec34a5c35201b86083eae5b20aa3092c970306581dbf6500d08277f86","pid":1509,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/705f6c5ec34a5c35201b86083eae5b20aa3092c970306581dbf6500d08277f86","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/705f6c5ec34a5c35201b86083eae5b20aa3092c970306581dbf6500d08277f86/rootfs","created":"2022-11-07T17:07:56.942483078Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cr
i.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"705f6c5ec34a5c35201b86083eae5b20aa3092c970306581dbf6500d08277f86","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-170735_62ea0ae7f0dd287c41e3fc4d83f43bcc","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"76644cc52c0f394f09520836294eb59805f2485e8b23588a7b7c930a102977da","pid":1511,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/76644cc52c0f394f09520836294eb59805f2485e8b23588a7b7c930a102977da","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/76644cc52c0f394f09520836294eb59805f2485e8b23588a7b7c930a102977da/rootfs","created":"2022-11-07T17:07:56.942394808Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.c
ri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"76644cc52c0f394f09520836294eb59805f2485e8b23588a7b7c930a102977da","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-170735_809d9df5626cf37e910052830f1a68d3","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7c5dde526df8a7e840df241f81375cff02de464862ccf3844a2728ca17764c90","pid":2564,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c5dde526df8a7e840df241f81375cff02de464862ccf3844a2728ca17764c90","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c5dde526df8a7e840df241f81375cff02de464862ccf3844a2728ca17764c90/rootfs","created":"2022-11-07T17:08:24.30208689Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.s
andbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"7c5dde526df8a7e840df241f81375cff02de464862ccf3844a2728ca17764c90","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_c43d0d64-f743-4627-894e-be6b8af2e64d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7ebafae905092334183576510f909bae93bd084561b7b4a27b2c106d54be85e8","pid":2247,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ebafae905092334183576510f909bae93bd084561b7b4a27b2c106d54be85e8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ebafae905092334183576510f909bae93bd084561b7b4a27b2c106d54be85e8/rootfs","created":"2022-11-07T17:08:16.619320417Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-
name":"k8s.gcr.io/kube-proxy:v1.24.4","io.kubernetes.cri.sandbox-id":"415576cdc8f40f5fc3f6a7438ecb0ffb290f93f316f6b054f7a0f5caba68a623","io.kubernetes.cri.sandbox-name":"kube-proxy-lv445","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9af9bcc1e7bfa1fcebd60598922c92115f437177f9d295842f96abde73b0517a","pid":1639,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9af9bcc1e7bfa1fcebd60598922c92115f437177f9d295842f96abde73b0517a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9af9bcc1e7bfa1fcebd60598922c92115f437177f9d295842f96abde73b0517a/rootfs","created":"2022-11-07T17:07:57.155960118Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-apiserver:v1.24.4","io.kubernetes.cri.sandbox-id":"37b02358b7bade85f9ecdfb958e54a66ddbeda36fd5f7eaf12e0bdd9398d5b95","io.kubernetes.cri.sandbox-name":"kube-apiserver-tes
t-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a597437e00c1cbee43bab5dbd971df23b2c78c7ba933574af3a62f4511d41593","pid":2565,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a597437e00c1cbee43bab5dbd971df23b2c78c7ba933574af3a62f4511d41593","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a597437e00c1cbee43bab5dbd971df23b2c78c7ba933574af3a62f4511d41593/rootfs","created":"2022-11-07T17:08:24.301147925Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a597437e00c1cbee43bab5dbd971df23b2c78c7ba933574af3a62f4511d41593","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-46n4z_0bb47afc-9c44-48b3-8dd4-966ed2608a7a","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-na
me":"coredns-6d4b75cb6d-46n4z","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a768e9176c7556574834f51de6dfc34d5c6228886484db628fa022d4cc609d74","pid":1510,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a768e9176c7556574834f51de6dfc34d5c6228886484db628fa022d4cc609d74","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a768e9176c7556574834f51de6dfc34d5c6228886484db628fa022d4cc609d74/rootfs","created":"2022-11-07T17:07:56.942447268Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a768e9176c7556574834f51de6dfc34d5c6228886484db628fa022d4cc609d74","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-170735_d3532015a9097ea10a4280936fe474ca","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri
.sandbox-name":"kube-scheduler-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b8770899fd0b74b30df43ba6cfadcb7b183dddc9100af7c7bc1df4a3e6065247","pid":2623,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8770899fd0b74b30df43ba6cfadcb7b183dddc9100af7c7bc1df4a3e6065247","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8770899fd0b74b30df43ba6cfadcb7b183dddc9100af7c7bc1df4a3e6065247/rootfs","created":"2022-11-07T17:08:24.411783378Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/coredns/coredns:v1.8.6","io.kubernetes.cri.sandbox-id":"a597437e00c1cbee43bab5dbd971df23b2c78c7ba933574af3a62f4511d41593","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-46n4z","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d596e727cf71ed6c642b598c327f52552f
ba8f973625380adcf054e3f5d2d1c6","pid":1642,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d596e727cf71ed6c642b598c327f52552fba8f973625380adcf054e3f5d2d1c6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d596e727cf71ed6c642b598c327f52552fba8f973625380adcf054e3f5d2d1c6/rootfs","created":"2022-11-07T17:07:57.156067666Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-controller-manager:v1.24.4","io.kubernetes.cri.sandbox-id":"76644cc52c0f394f09520836294eb59805f2485e8b23588a7b7c930a102977da","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ddefa3ac5399737dff7110fcaf80962425381646c55d72cc70f71a263df0342a","pid":3553,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ddefa3ac5399737dff
7110fcaf80962425381646c55d72cc70f71a263df0342a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ddefa3ac5399737dff7110fcaf80962425381646c55d72cc70f71a263df0342a/rootfs","created":"2022-11-07T17:08:53.113518089Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ddefa3ac5399737dff7110fcaf80962425381646c55d72cc70f71a263df0342a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-170735_d3532015a9097ea10a4280936fe474ca","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ea6df2fe58eeb1388803a45c064d70377759e51f38f946f4ea3630da79dc69a6","pid":1640,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.
io/ea6df2fe58eeb1388803a45c064d70377759e51f38f946f4ea3630da79dc69a6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea6df2fe58eeb1388803a45c064d70377759e51f38f946f4ea3630da79dc69a6/rootfs","created":"2022-11-07T17:07:57.156161632Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-scheduler:v1.24.4","io.kubernetes.cri.sandbox-id":"a768e9176c7556574834f51de6dfc34d5c6228886484db628fa022d4cc609d74","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-170735","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f82c54e5c1fb4c8247a99e96a8cf288d1c50b27e3b90db040e3d9988132681f6","pid":3562,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f82c54e5c1fb4c8247a99e96a8cf288d1c50b27e3b90db040e3d9988132681f6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f82c54e5c1fb4c8247a99e96a8cf288d1c50b2
7e3b90db040e3d9988132681f6/rootfs","created":"2022-11-07T17:08:53.114973557Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f82c54e5c1fb4c8247a99e96a8cf288d1c50b27e3b90db040e3d9988132681f6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_c43d0d64-f743-4627-894e-be6b8af2e64d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f9e6c7652c1304ceea0e17fabb8f5fb88b5c0f31719fbde5a10025abb05664ed","pid":3518,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f9e6c7652c1304ceea0e17fabb8f5fb88b5c0f31719fbde5a10025abb05664ed","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f9e6c7652c1304ceea0e17fabb8f5fb88b5c0f31719fb
de5a10025abb05664ed/rootfs","created":"2022-11-07T17:08:53.111272121Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"f9e6c7652c1304ceea0e17fabb8f5fb88b5c0f31719fbde5a10025abb05664ed","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-fh9w9_eca84e65-57b5-4cc9-b42a-0f991c91ffe7","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-fh9w9","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"}]
I1107 17:09:03.327859 165743 cri.go:124] list returned 25 containers
I1107 17:09:03.327880 165743 cri.go:127] container: {ID:0314116d648233d6c1e60ed5a556a815105434479c9a17285a7cd8dc23953bc5 Status:running}
I1107 17:09:03.327898 165743 cri.go:129] skipping 0314116d648233d6c1e60ed5a556a815105434479c9a17285a7cd8dc23953bc5 - not in ps
I1107 17:09:03.327906 165743 cri.go:127] container: {ID:0db62b45b89de774d85d268732de085fe12b9045e1c19792e1a8a7762a41a5aa Status:running}
I1107 17:09:03.327915 165743 cri.go:129] skipping 0db62b45b89de774d85d268732de085fe12b9045e1c19792e1a8a7762a41a5aa - not in ps
I1107 17:09:03.327927 165743 cri.go:127] container: {ID:0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834 Status:running}
I1107 17:09:03.327939 165743 cri.go:133] skipping {0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834 running}: state = "running", want "paused"
I1107 17:09:03.327954 165743 cri.go:127] container: {ID:250fd604c9fb7454383acc4ff70415d383a9cf0481b9200f9670707b2e744be4 Status:running}
I1107 17:09:03.327966 165743 cri.go:129] skipping 250fd604c9fb7454383acc4ff70415d383a9cf0481b9200f9670707b2e744be4 - not in ps
I1107 17:09:03.327973 165743 cri.go:127] container: {ID:2d4d536c9a0a40d49c0246daa72b6615857bf6fe87f3d15e95a21a7878e5101d Status:running}
I1107 17:09:03.327986 165743 cri.go:129] skipping 2d4d536c9a0a40d49c0246daa72b6615857bf6fe87f3d15e95a21a7878e5101d - not in ps
I1107 17:09:03.328004 165743 cri.go:127] container: {ID:37b02358b7bade85f9ecdfb958e54a66ddbeda36fd5f7eaf12e0bdd9398d5b95 Status:running}
I1107 17:09:03.328018 165743 cri.go:129] skipping 37b02358b7bade85f9ecdfb958e54a66ddbeda36fd5f7eaf12e0bdd9398d5b95 - not in ps
I1107 17:09:03.328029 165743 cri.go:127] container: {ID:3feddb0dbdb52435facf4a9e8b5290241f16d8c1a930b0d6090df45977832049 Status:running}
I1107 17:09:03.328041 165743 cri.go:129] skipping 3feddb0dbdb52435facf4a9e8b5290241f16d8c1a930b0d6090df45977832049 - not in ps
I1107 17:09:03.328047 165743 cri.go:127] container: {ID:415576cdc8f40f5fc3f6a7438ecb0ffb290f93f316f6b054f7a0f5caba68a623 Status:running}
I1107 17:09:03.328060 165743 cri.go:129] skipping 415576cdc8f40f5fc3f6a7438ecb0ffb290f93f316f6b054f7a0f5caba68a623 - not in ps
I1107 17:09:03.328071 165743 cri.go:127] container: {ID:46a2f3bebabe1b18bc1bb0a2815efd01f85119114dd473e67a6ef5ed94353067 Status:running}
I1107 17:09:03.328082 165743 cri.go:129] skipping 46a2f3bebabe1b18bc1bb0a2815efd01f85119114dd473e67a6ef5ed94353067 - not in ps
I1107 17:09:03.328092 165743 cri.go:127] container: {ID:5bee5419fd26f0844e516ee32486faead6e58bf4501faaf52d7a05e85ca46114 Status:running}
I1107 17:09:03.328100 165743 cri.go:129] skipping 5bee5419fd26f0844e516ee32486faead6e58bf4501faaf52d7a05e85ca46114 - not in ps
I1107 17:09:03.328107 165743 cri.go:127] container: {ID:5e48addfe561771f69f67c220475d6917957100e69095105320537af7d0b949e Status:running}
I1107 17:09:03.328121 165743 cri.go:129] skipping 5e48addfe561771f69f67c220475d6917957100e69095105320537af7d0b949e - not in ps
I1107 17:09:03.328132 165743 cri.go:127] container: {ID:5ec95a28b0b3d4879dcc46fb1204c97678b5dcc9326ba25e57ff05480a153e83 Status:running}
I1107 17:09:03.328144 165743 cri.go:129] skipping 5ec95a28b0b3d4879dcc46fb1204c97678b5dcc9326ba25e57ff05480a153e83 - not in ps
I1107 17:09:03.328150 165743 cri.go:127] container: {ID:705f6c5ec34a5c35201b86083eae5b20aa3092c970306581dbf6500d08277f86 Status:running}
I1107 17:09:03.328169 165743 cri.go:129] skipping 705f6c5ec34a5c35201b86083eae5b20aa3092c970306581dbf6500d08277f86 - not in ps
I1107 17:09:03.328181 165743 cri.go:127] container: {ID:76644cc52c0f394f09520836294eb59805f2485e8b23588a7b7c930a102977da Status:running}
I1107 17:09:03.328188 165743 cri.go:129] skipping 76644cc52c0f394f09520836294eb59805f2485e8b23588a7b7c930a102977da - not in ps
I1107 17:09:03.328199 165743 cri.go:127] container: {ID:7c5dde526df8a7e840df241f81375cff02de464862ccf3844a2728ca17764c90 Status:running}
I1107 17:09:03.328209 165743 cri.go:129] skipping 7c5dde526df8a7e840df241f81375cff02de464862ccf3844a2728ca17764c90 - not in ps
I1107 17:09:03.328214 165743 cri.go:127] container: {ID:7ebafae905092334183576510f909bae93bd084561b7b4a27b2c106d54be85e8 Status:running}
I1107 17:09:03.328223 165743 cri.go:129] skipping 7ebafae905092334183576510f909bae93bd084561b7b4a27b2c106d54be85e8 - not in ps
I1107 17:09:03.328229 165743 cri.go:127] container: {ID:9af9bcc1e7bfa1fcebd60598922c92115f437177f9d295842f96abde73b0517a Status:running}
I1107 17:09:03.328241 165743 cri.go:129] skipping 9af9bcc1e7bfa1fcebd60598922c92115f437177f9d295842f96abde73b0517a - not in ps
I1107 17:09:03.328248 165743 cri.go:127] container: {ID:a597437e00c1cbee43bab5dbd971df23b2c78c7ba933574af3a62f4511d41593 Status:running}
I1107 17:09:03.328263 165743 cri.go:129] skipping a597437e00c1cbee43bab5dbd971df23b2c78c7ba933574af3a62f4511d41593 - not in ps
I1107 17:09:03.328275 165743 cri.go:127] container: {ID:a768e9176c7556574834f51de6dfc34d5c6228886484db628fa022d4cc609d74 Status:running}
I1107 17:09:03.328287 165743 cri.go:129] skipping a768e9176c7556574834f51de6dfc34d5c6228886484db628fa022d4cc609d74 - not in ps
I1107 17:09:03.328297 165743 cri.go:127] container: {ID:b8770899fd0b74b30df43ba6cfadcb7b183dddc9100af7c7bc1df4a3e6065247 Status:running}
I1107 17:09:03.328308 165743 cri.go:129] skipping b8770899fd0b74b30df43ba6cfadcb7b183dddc9100af7c7bc1df4a3e6065247 - not in ps
I1107 17:09:03.328318 165743 cri.go:127] container: {ID:d596e727cf71ed6c642b598c327f52552fba8f973625380adcf054e3f5d2d1c6 Status:running}
I1107 17:09:03.328326 165743 cri.go:129] skipping d596e727cf71ed6c642b598c327f52552fba8f973625380adcf054e3f5d2d1c6 - not in ps
I1107 17:09:03.328337 165743 cri.go:127] container: {ID:ddefa3ac5399737dff7110fcaf80962425381646c55d72cc70f71a263df0342a Status:running}
I1107 17:09:03.328349 165743 cri.go:129] skipping ddefa3ac5399737dff7110fcaf80962425381646c55d72cc70f71a263df0342a - not in ps
I1107 17:09:03.328358 165743 cri.go:127] container: {ID:ea6df2fe58eeb1388803a45c064d70377759e51f38f946f4ea3630da79dc69a6 Status:running}
I1107 17:09:03.328370 165743 cri.go:129] skipping ea6df2fe58eeb1388803a45c064d70377759e51f38f946f4ea3630da79dc69a6 - not in ps
I1107 17:09:03.328381 165743 cri.go:127] container: {ID:f82c54e5c1fb4c8247a99e96a8cf288d1c50b27e3b90db040e3d9988132681f6 Status:running}
I1107 17:09:03.328391 165743 cri.go:129] skipping f82c54e5c1fb4c8247a99e96a8cf288d1c50b27e3b90db040e3d9988132681f6 - not in ps
I1107 17:09:03.328404 165743 cri.go:127] container: {ID:f9e6c7652c1304ceea0e17fabb8f5fb88b5c0f31719fbde5a10025abb05664ed Status:running}
I1107 17:09:03.328415 165743 cri.go:129] skipping f9e6c7652c1304ceea0e17fabb8f5fb88b5c0f31719fbde5a10025abb05664ed - not in ps
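The long skip list above is minikube reconciling two views of the runtime: crictl returns the kube-system container IDs it tracks, while runc lists every task with its live state. Only IDs present in the crictl output and in the requested state ("paused" here) survive the filter, which is why every running container is skipped. The two raw listings, runnable verbatim from the log:

  # The two listings minikube cross-checks (commands verbatim from the log above)
  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
  sudo runc --root /run/containerd/runc/k8s.io list -f json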
I1107 17:09:03.328459 165743 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1107 17:09:03.336550 165743 kubeadm.go:411] found existing configuration files, will attempt cluster restart
I1107 17:09:03.336573 165743 kubeadm.go:627] restartCluster start
I1107 17:09:03.336628 165743 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1107 17:09:03.344380 165743 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1107 17:09:03.345034 165743 kubeconfig.go:92] found "test-preload-170735" server: "https://192.168.67.2:8443"
I1107 17:09:03.345729  165743 kapi.go:59] client config for test-preload-170735: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/client.crt", KeyFile:"/home/jenkins/minikube-integration/15310-44720/.minikube/profiles/test-preload-170735/client.key", CAFile:"/home/jenkins/minikube-integration/15310-44720/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1786940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1107 17:09:03.346403 165743 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1107 17:09:03.402000 165743 kubeadm.go:594] needs reconfigure: configs differ:
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml 2022-11-07 17:07:52.875254223 +0000
+++ /var/tmp/minikube/kubeadm.yaml.new 2022-11-07 17:09:02.646277681 +0000
@@ -38,7 +38,7 @@
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
-kubernetesVersion: v1.24.4
+kubernetesVersion: v1.24.6
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
-- /stdout --
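The reconfigure decision is nothing more than a diff of the last-applied config against the freshly rendered one; here the only drift is the kubernetesVersion bump from v1.24.4 to v1.24.6, which is enough to take the restart path instead of a clean init. The same check by hand (command verbatim from the log):

  sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new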
I1107 17:09:03.402024 165743 kubeadm.go:1114] stopping kube-system containers ...
I1107 17:09:03.402039 165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I1107 17:09:03.402098 165743 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1107 17:09:03.431844 165743 cri.go:87] found id: "bbc8111955475e273c589af8ebe48cc22947c192b9004953ca28f3abd9af9206"
I1107 17:09:03.431899 165743 cri.go:87] found id: "0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834"
I1107 17:09:03.431910 165743 cri.go:87] found id: ""
I1107 17:09:03.431917 165743 cri.go:232] Stopping containers: [bbc8111955475e273c589af8ebe48cc22947c192b9004953ca28f3abd9af9206 0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834]
I1107 17:09:03.431974 165743 ssh_runner.go:195] Run: which crictl
I1107 17:09:03.436330 165743 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop bbc8111955475e273c589af8ebe48cc22947c192b9004953ca28f3abd9af9206 0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834
I1107 17:09:03.742156 165743 ssh_runner.go:195] Run: sudo systemctl stop kubelet
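Before replaying kubeadm phases, minikube quiesces the node: the two still-tracked kube-system containers are stopped through crictl, then the kubelet is stopped so it cannot immediately resurrect the static pods. By hand, with the container IDs taken from this log:

  # Sketch: quiesce the node before reconfiguring (IDs from the log above)
  sudo /usr/bin/crictl stop bbc8111955475e273c589af8ebe48cc22947c192b9004953ca28f3abd9af9206 0f8f18b7cc72dccd9c44995e4eaae4c691123d24b079b52812484a2b8b9fa834
  sudo systemctl stop kubelet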
I1107 17:09:03.809643 165743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1107 17:09:03.817012 165743 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5643 Nov 7 17:07 /etc/kubernetes/admin.conf
-rw------- 1 root root 5656 Nov 7 17:07 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2015 Nov 7 17:08 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5604 Nov 7 17:07 /etc/kubernetes/scheduler.conf
I1107 17:09:03.817084 165743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1107 17:09:03.823720 165743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1107 17:09:03.830244 165743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1107 17:09:03.836663 165743 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I1107 17:09:03.836710 165743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1107 17:09:03.842795 165743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1107 17:09:03.849520 165743 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I1107 17:09:03.849574 165743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1107 17:09:03.856003 165743 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1107 17:09:03.862911 165743 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I1107 17:09:03.862935 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I1107 17:09:04.002289 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I1107 17:09:05.237323 165743 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.234999973s)
I1107 17:09:05.237359 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I1107 17:09:05.449035 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I1107 17:09:05.504177 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
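The reconfigure then replays five kubeadm init phases with the version-pinned binary. The same sequence, condensed into a loop (the unquoted $phase word-splits intentionally so "certs all" becomes two arguments):

    for phase in "certs all" "kubeconfig all" kubelet-start "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done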
I1107 17:09:05.621639 165743 api_server.go:51] waiting for apiserver process to appear ...
I1107 17:09:05.621702 165743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1107 17:09:05.633566 165743 api_server.go:71] duration metric: took 11.935157ms to wait for apiserver process to appear ...
I1107 17:09:05.633600 165743 api_server.go:87] waiting for apiserver healthz status ...
I1107 17:09:05.633614 165743 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I1107 17:09:05.639393 165743 api_server.go:278] https://192.168.67.2:8443/healthz returned 200:
ok
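The health wait is a plain HTTPS probe; roughly the same check by hand, from the host or inside the node (-k because the apiserver serves a cluster-local certificate):

    curl -k https://192.168.67.2:8443/healthz    # minikube waits for a literal "ok"
    curl -k https://192.168.67.2:8443/version    # then compares gitVersion against the expected v1.24.6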
I1107 17:09:05.645496 165743 api_server.go:140] control plane version: v1.24.4
W1107 17:09:05.645524 165743 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
I1107 17:09:06.147196 165743 api_server.go:140] control plane version: v1.24.4
W1107 17:09:06.147277 165743 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
I1107 17:09:06.646924 165743 api_server.go:140] control plane version: v1.24.4
W1107 17:09:06.646957 165743 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
I1107 17:09:07.147645 165743 api_server.go:140] control plane version: v1.24.4
W1107 17:09:07.147679 165743 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
I1107 17:09:07.647341 165743 api_server.go:140] control plane version: v1.24.4
W1107 17:09:07.647372 165743 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
W1107 17:09:08.146168 165743 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1107 17:09:08.646046 165743 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1107 17:09:09.147144 165743 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1107 17:09:09.646092 165743 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1107 17:09:10.147021 165743 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1107 17:09:10.646973 165743 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1107 17:09:11.146883 165743 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
I1107 17:09:13.915841 165743 api_server.go:140] control plane version: v1.24.6
I1107 17:09:13.915921 165743 api_server.go:130] duration metric: took 8.282312967s to wait for apiserver health ...
I1107 17:09:13.915945 165743 cni.go:95] Creating CNI manager for ""
I1107 17:09:13.915963 165743 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I1107 17:09:13.918212 165743 out.go:177] * Configuring CNI (Container Networking Interface) ...
I1107 17:09:13.919726 165743 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I1107 17:09:13.924616 165743 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.6/kubectl ...
I1107 17:09:13.924640 165743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I1107 17:09:14.021282 165743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I1107 17:09:15.124609 165743 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.103271829s)
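After the manifest applies, the rollout can be checked directly. A sketch; the DaemonSet name kindnet is an assumption based on the kindnet recommendation logged above:

    sudo /var/lib/minikube/binaries/v1.24.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system rollout status daemonset/kindnet --timeout=60s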
I1107 17:09:15.124658 165743 system_pods.go:43] waiting for kube-system pods to appear ...
I1107 17:09:15.134287 165743 system_pods.go:59] 8 kube-system pods found
I1107 17:09:15.134343 165743 system_pods.go:61] "coredns-6d4b75cb6d-46n4z" [0bb47afc-9c44-48b3-8dd4-966ed2608a7a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1107 17:09:15.134355 165743 system_pods.go:61] "etcd-test-preload-170735" [bf983595-48b0-4ad3-948e-264fe4654767] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1107 17:09:15.134365 165743 system_pods.go:61] "kindnet-fh9w9" [eca84e65-57b5-4cc9-b42a-0f991c91ffe7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I1107 17:09:15.134375 165743 system_pods.go:61] "kube-apiserver-test-preload-170735" [6005f40b-0034-46af-ac9b-8b7945ea8996] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1107 17:09:15.134382 165743 system_pods.go:61] "kube-controller-manager-test-preload-170735" [05e955ad-7fc3-4874-97a5-7ba8ee0faf37] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1107 17:09:15.134396 165743 system_pods.go:61] "kube-proxy-lv445" [fcbfbd08-498e-4a9c-8d36-0d45cbd312bd] Running
I1107 17:09:15.134404 165743 system_pods.go:61] "kube-scheduler-test-preload-170735" [102796b5-9e64-4c55-9ceb-c091fb0faf8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1107 17:09:15.134416 165743 system_pods.go:61] "storage-provisioner" [c43d0d64-f743-4627-894e-be6b8af2e64d] Running
I1107 17:09:15.134425 165743 system_pods.go:74] duration metric: took 9.760603ms to wait for pod list to return data ...
I1107 17:09:15.134434 165743 node_conditions.go:102] verifying NodePressure condition ...
I1107 17:09:15.136728 165743 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I1107 17:09:15.136759 165743 node_conditions.go:123] node cpu capacity is 8
I1107 17:09:15.136770 165743 node_conditions.go:105] duration metric: took 2.331494ms to run NodePressure ...
I1107 17:09:15.136786 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I1107 17:09:15.388874 165743 kubeadm.go:763] waiting for restarted kubelet to initialise ...
I1107 17:09:15.392441 165743 kubeadm.go:778] kubelet initialised
I1107 17:09:15.392464 165743 kubeadm.go:779] duration metric: took 3.557352ms waiting for restarted kubelet to initialise ...
I1107 17:09:15.392473 165743 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1107 17:09:15.396706 165743 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace to be "Ready" ...
I1107 17:09:17.406088 165743 pod_ready.go:102] pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace has status "Ready":"False"
I1107 17:09:19.407719 165743 pod_ready.go:102] pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace has status "Ready":"False"
I1107 17:09:21.906077 165743 pod_ready.go:102] pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace has status "Ready":"False"
I1107 17:09:23.906170 165743 pod_ready.go:102] pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace has status "Ready":"False"
I1107 17:09:25.906482 165743 pod_ready.go:102] pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace has status "Ready":"False"
I1107 17:09:28.406244 165743 pod_ready.go:102] pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace has status "Ready":"False"
I1107 17:09:29.906673 165743 pod_ready.go:92] pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace has status "Ready":"True"
I1107 17:09:29.906708 165743 pod_ready.go:81] duration metric: took 14.509975616s waiting for pod "coredns-6d4b75cb6d-46n4z" in "kube-system" namespace to be "Ready" ...
I1107 17:09:29.906722 165743 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-170735" in "kube-system" namespace to be "Ready" ...
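The pod_ready.go poll that follows is roughly equivalent to a kubectl wait with the same 4-minute budget; a sketch, not minikube's actual implementation:

    sudo /var/lib/minikube/binaries/v1.24.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system wait --for=condition=Ready pod/etcd-test-preload-170735 --timeout=4m0s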
I1107 17:09:31.916347 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
[... 102 further pod_ready.go:102 entries for "etcd-test-preload-170735", identical except for their timestamps, logged roughly every 2.5s from 17:09:33 through 17:13:26, omitted ...]
I1107 17:13:28.916283 165743 pod_ready.go:102] pod "etcd-test-preload-170735" in "kube-system" namespace has status "Ready":"False"
I1107 17:13:29.912039 165743 pod_ready.go:81] duration metric: took 4m0.005300509s waiting for pod "etcd-test-preload-170735" in "kube-system" namespace to be "Ready" ...
E1107 17:13:29.912067 165743 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "etcd-test-preload-170735" in "kube-system" namespace to be "Ready" (will not retry!)
I1107 17:13:29.912099 165743 pod_ready.go:38] duration metric: took 4m14.519613554s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1107 17:13:29.912140 165743 kubeadm.go:631] restartCluster took 4m26.575555046s
W1107 17:13:29.912302 165743 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
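Before the reset below wipes /etc/kubernetes, the wedged pod could have been inspected; a hedged sketch of the obvious first probes:

    sudo /var/lib/minikube/binaries/v1.24.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system describe pod etcd-test-preload-170735
    sudo crictl ps -a --name=etcd    # is an old etcd container still running?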
I1107 17:13:29.912357 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1107 17:13:31.585704 165743 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.673321164s)
I1107 17:13:31.585763 165743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1107 17:13:31.595197 165743 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1107 17:13:31.601977 165743 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I1107 17:13:31.602022 165743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1107 17:13:31.608611 165743 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1107 17:13:31.608656 165743 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1107 17:13:31.641698 165743 kubeadm.go:317] W1107 17:13:31.640965 6500 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
I1107 17:13:31.673782 165743 kubeadm.go:317] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
I1107 17:13:31.734442 165743 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1107 17:13:31.734566 165743 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
I1107 17:13:31.734625 165743 kubeadm.go:317] [ERROR Port-2379]: Port 2379 is in use
I1107 17:13:31.734689 165743 kubeadm.go:317] [ERROR Port-2380]: Port 2380 is in use
I1107 17:13:31.734827 165743 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
I1107 17:13:31.734917 165743 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
I1107 17:13:31.736598 165743 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
I1107 17:13:31.736666 165743 kubeadm.go:317] [preflight] Running pre-flight checks
I1107 17:13:31.736791 165743 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
I1107 17:13:31.736841 165743 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
I1107 17:13:31.736892 165743 kubeadm.go:317] OS: Linux
I1107 17:13:31.736952 165743 kubeadm.go:317] CGROUPS_CPU: enabled
I1107 17:13:31.737020 165743 kubeadm.go:317] CGROUPS_CPUACCT: enabled
I1107 17:13:31.737089 165743 kubeadm.go:317] CGROUPS_CPUSET: enabled
I1107 17:13:31.737161 165743 kubeadm.go:317] CGROUPS_DEVICES: enabled
I1107 17:13:31.737230 165743 kubeadm.go:317] CGROUPS_FREEZER: enabled
I1107 17:13:31.737297 165743 kubeadm.go:317] CGROUPS_MEMORY: enabled
I1107 17:13:31.737366 165743 kubeadm.go:317] CGROUPS_PIDS: enabled
I1107 17:13:31.737432 165743 kubeadm.go:317] CGROUPS_HUGETLB: enabled
I1107 17:13:31.737511 165743 kubeadm.go:317] CGROUPS_BLKIO: enabled
W1107 17:13:31.737713 165743 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1021-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W1107 17:13:31.640965 6500 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
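The two preflight errors are the real failure: after the kubeadm reset, something is still bound to etcd's client port (2379) and peer port (2380), so re-init cannot proceed. How one might identify the holder inside the node; the socket-statistics filter syntax is standard ss, assumed available in the node image:

    sudo ss -ltnp '( sport = :2379 or sport = :2380 )'
    sudo crictl ps -a --name=etcd    # a leftover etcd static-pod container is the usual suspect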
I1107 17:13:31.737760 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1107 17:13:32.054639 165743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1107 17:13:32.063813 165743 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I1107 17:13:32.063875 165743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1107 17:13:32.070411 165743 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1107 17:13:32.070456 165743 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1107 17:13:32.107519 165743 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
I1107 17:13:32.107565 165743 kubeadm.go:317] [preflight] Running pre-flight checks
I1107 17:13:32.134497 165743 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
I1107 17:13:32.134580 165743 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
I1107 17:13:32.134633 165743 kubeadm.go:317] OS: Linux
I1107 17:13:32.134687 165743 kubeadm.go:317] CGROUPS_CPU: enabled
I1107 17:13:32.134791 165743 kubeadm.go:317] CGROUPS_CPUACCT: enabled
I1107 17:13:32.134877 165743 kubeadm.go:317] CGROUPS_CPUSET: enabled
I1107 17:13:32.134944 165743 kubeadm.go:317] CGROUPS_DEVICES: enabled
I1107 17:13:32.135016 165743 kubeadm.go:317] CGROUPS_FREEZER: enabled
I1107 17:13:32.135087 165743 kubeadm.go:317] CGROUPS_MEMORY: enabled
I1107 17:13:32.135156 165743 kubeadm.go:317] CGROUPS_PIDS: enabled
I1107 17:13:32.135221 165743 kubeadm.go:317] CGROUPS_HUGETLB: enabled
I1107 17:13:32.135314 165743 kubeadm.go:317] CGROUPS_BLKIO: enabled
I1107 17:13:32.196691 165743 kubeadm.go:317] W1107 17:13:32.102889 6771 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
I1107 17:13:32.196897 165743 kubeadm.go:317] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
I1107 17:13:32.197035 165743 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1107 17:13:32.197117 165743 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
I1107 17:13:32.197155 165743 kubeadm.go:317] [ERROR Port-2379]: Port 2379 is in use
I1107 17:13:32.197197 165743 kubeadm.go:317] [ERROR Port-2380]: Port 2380 is in use
I1107 17:13:32.197292 165743 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
I1107 17:13:32.197352 165743 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
I1107 17:13:32.197439 165743 kubeadm.go:398] StartCluster complete in 4m28.987546075s
I1107 17:13:32.197484 165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1107 17:13:32.197525 165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1107 17:13:32.220007 165743 cri.go:87] found id: ""
I1107 17:13:32.220032 165743 logs.go:274] 0 containers: []
W1107 17:13:32.220040 165743 logs.go:276] No container was found matching "kube-apiserver"
I1107 17:13:32.220053 165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1107 17:13:32.220102 165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1107 17:13:32.242014 165743 cri.go:87] found id: ""
I1107 17:13:32.242043 165743 logs.go:274] 0 containers: []
W1107 17:13:32.242053 165743 logs.go:276] No container was found matching "etcd"
I1107 17:13:32.242066 165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1107 17:13:32.242112 165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1107 17:13:32.262942 165743 cri.go:87] found id: ""
I1107 17:13:32.262979 165743 logs.go:274] 0 containers: []
W1107 17:13:32.262988 165743 logs.go:276] No container was found matching "coredns"
I1107 17:13:32.262995 165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1107 17:13:32.263034 165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1107 17:13:32.284464 165743 cri.go:87] found id: ""
I1107 17:13:32.284488 165743 logs.go:274] 0 containers: []
W1107 17:13:32.284494 165743 logs.go:276] No container was found matching "kube-scheduler"
I1107 17:13:32.284501 165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1107 17:13:32.284552 165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1107 17:13:32.307214 165743 cri.go:87] found id: ""
I1107 17:13:32.307243 165743 logs.go:274] 0 containers: []
W1107 17:13:32.307252 165743 logs.go:276] No container was found matching "kube-proxy"
I1107 17:13:32.307260 165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1107 17:13:32.307310 165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1107 17:13:32.329151 165743 cri.go:87] found id: ""
I1107 17:13:32.329180 165743 logs.go:274] 0 containers: []
W1107 17:13:32.329196 165743 logs.go:276] No container was found matching "kubernetes-dashboard"
I1107 17:13:32.329205 165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1107 17:13:32.329257 165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1107 17:13:32.350599 165743 cri.go:87] found id: ""
I1107 17:13:32.350623 165743 logs.go:274] 0 containers: []
W1107 17:13:32.350629 165743 logs.go:276] No container was found matching "storage-provisioner"
I1107 17:13:32.350635 165743 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1107 17:13:32.350673 165743 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1107 17:13:32.372494 165743 cri.go:87] found id: ""
I1107 17:13:32.372522 165743 logs.go:274] 0 containers: []
W1107 17:13:32.372532 165743 logs.go:276] No container was found matching "kube-controller-manager"
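The eight control-plane scans above are the same crictl query with a different --name filter; condensed into one loop:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kubernetes-dashboard storage-provisioner kube-controller-manager; do
      printf '== %s ==\n' "$name"
      sudo crictl ps -a --quiet --name="$name"
    done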
I1107 17:13:32.372545 165743 logs.go:123] Gathering logs for kubelet ...
I1107 17:13:32.372558 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1107 17:13:32.435840 165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.926231 4309 projected.go:192] Error preparing data for projected volume kube-api-access-l9w87 for pod kube-system/kindnet-fh9w9: failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
W1107 17:13:32.436259 165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.926837 4309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eca84e65-57b5-4cc9-b42a-0f991c91ffe7-kube-api-access-l9w87 podName:eca84e65-57b5-4cc9-b42a-0f991c91ffe7 nodeName:}" failed. No retries permitted until 2022-11-07 17:09:15.926808887 +0000 UTC m=+10.477882581 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l9w87" (UniqueName: "kubernetes.io/projected/eca84e65-57b5-4cc9-b42a-0f991c91ffe7-kube-api-access-l9w87") pod "kindnet-fh9w9" (UID: "eca84e65-57b5-4cc9-b42a-0f991c91ffe7") : failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
W1107 17:13:32.436411 165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: W1107 17:09:13.927004 4309 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
W1107 17:13:32.436578 165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927081 4309 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
W1107 17:13:32.436766 165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927198 4309 projected.go:192] Error preparing data for projected volume kube-api-access-7jl9q for pod kube-system/storage-provisioner: failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
W1107 17:13:32.437177 165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927299 4309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c43d0d64-f743-4627-894e-be6b8af2e64d-kube-api-access-7jl9q podName:c43d0d64-f743-4627-894e-be6b8af2e64d nodeName:}" failed. No retries permitted until 2022-11-07 17:09:15.927284243 +0000 UTC m=+10.478357937 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-7jl9q" (UniqueName: "kubernetes.io/projected/c43d0d64-f743-4627-894e-be6b8af2e64d-kube-api-access-7jl9q") pod "storage-provisioner" (UID: "c43d0d64-f743-4627-894e-be6b8af2e64d") : failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
W1107 17:13:32.437330 165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: W1107 17:09:13.927404 4309 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
W1107 17:13:32.437497 165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927466 4309 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
W1107 17:13:32.437684 165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927560 4309 projected.go:192] Error preparing data for projected volume kube-api-access-6vv4c for pod kube-system/kube-proxy-lv445: failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
W1107 17:13:32.438089 165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927649 4309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fcbfbd08-498e-4a9c-8d36-0d45cbd312bd-kube-api-access-6vv4c podName:fcbfbd08-498e-4a9c-8d36-0d45cbd312bd nodeName:}" failed. No retries permitted until 2022-11-07 17:09:15.927635728 +0000 UTC m=+10.478709423 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6vv4c" (UniqueName: "kubernetes.io/projected/fcbfbd08-498e-4a9c-8d36-0d45cbd312bd-kube-api-access-6vv4c") pod "kube-proxy-lv445" (UID: "fcbfbd08-498e-4a9c-8d36-0d45cbd312bd") : failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
W1107 17:13:32.438269 165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927751 4309 projected.go:192] Error preparing data for projected volume kube-api-access-qmxlx for pod kube-system/coredns-6d4b75cb6d-46n4z: failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
W1107 17:13:32.438700 165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.927842 4309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0bb47afc-9c44-48b3-8dd4-966ed2608a7a-kube-api-access-qmxlx podName:0bb47afc-9c44-48b3-8dd4-966ed2608a7a nodeName:}" failed. No retries permitted until 2022-11-07 17:09:15.927829872 +0000 UTC m=+10.478903566 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qmxlx" (UniqueName: "kubernetes.io/projected/0bb47afc-9c44-48b3-8dd4-966ed2608a7a-kube-api-access-qmxlx") pod "coredns-6d4b75cb6d-46n4z" (UID: "0bb47afc-9c44-48b3-8dd4-966ed2608a7a") : failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
W1107 17:13:32.438846 165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: W1107 17:09:13.927954 4309 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
W1107 17:13:32.439007 165743 logs.go:138] Found kubelet problem: Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.928028 4309 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
I1107 17:13:32.459618 165743 logs.go:123] Gathering logs for dmesg ...
I1107 17:13:32.459642 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1107 17:13:32.475496 165743 logs.go:123] Gathering logs for describe nodes ...
I1107 17:13:32.475522 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1107 17:13:32.524048 165743 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I1107 17:13:32.524077 165743 logs.go:123] Gathering logs for containerd ...
I1107 17:13:32.524091 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1107 17:13:32.579264 165743 logs.go:123] Gathering logs for container status ...
I1107 17:13:32.579299 165743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
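The evidence bundle minikube assembles here can be reproduced on the node with the same commands it logs, run in sequence:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a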
W1107 17:13:32.605796 165743 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1021-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W1107 17:13:32.102889 6771 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W1107 17:13:32.605835 165743 out.go:239] *
W1107 17:13:32.605973 165743 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1021-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W1107 17:13:32.102889 6771 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W1107 17:13:32.606006 165743 out.go:239] *
W1107 17:13:32.606836 165743 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1107 17:13:32.608746 165743 out.go:177] X Problems detected in kubelet:
I1107 17:13:32.610170 165743 out.go:177] Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.926231 4309 projected.go:192] Error preparing data for projected volume kube-api-access-l9w87 for pod kube-system/kindnet-fh9w9: failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
I1107 17:13:32.612470 165743 out.go:177] Nov 07 17:09:13 test-preload-170735 kubelet[4309]: E1107 17:09:13.926837 4309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eca84e65-57b5-4cc9-b42a-0f991c91ffe7-kube-api-access-l9w87 podName:eca84e65-57b5-4cc9-b42a-0f991c91ffe7 nodeName:}" failed. No retries permitted until 2022-11-07 17:09:15.926808887 +0000 UTC m=+10.477882581 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l9w87" (UniqueName: "kubernetes.io/projected/eca84e65-57b5-4cc9-b42a-0f991c91ffe7-kube-api-access-l9w87") pod "kindnet-fh9w9" (UID: "eca84e65-57b5-4cc9-b42a-0f991c91ffe7") : failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-170735" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
I1107 17:13:32.614018 165743 out.go:177] Nov 07 17:09:13 test-preload-170735 kubelet[4309]: W1107 17:09:13.927004 4309 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-170735" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-170735' and this object
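[Note: "no relationship found between node '...' and this object" is the node authorizer refusing the kubelet's token and configmap requests because its graph does not (yet) link the kindnet pod to this node, typically because the pod's binding was lost or not re-established across the restart. A hedged sketch of how one might check, assuming the apiserver is reachable and the kubeconfig points at this cluster (pod name taken from the log lines above):
  # does the Node object exist, and is the pod actually bound to it?
  kubectl get node test-preload-170735
  kubectl -n kube-system get pod kindnet-fh9w9 -o jsonpath='{.spec.nodeName}'
]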
I1107 17:13:32.616027 165743 out.go:177]
W1107 17:13:32.617358 165743 out.go:239] X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1021-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W1107 17:13:32.102889 6771 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W1107 17:13:32.617464 165743 out.go:239] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
W1107 17:13:32.617526 165743 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/5484
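[Note on the suggestion above: lsof -p selects by PID, not by port; the by-port form is lsof -i :<port>. A minimal find-and-kill sketch to run inside the node, assuming lsof is installed there (destructive; deleting the profile, as the test does below, is the cleaner fix):
  # -t prints bare PIDs; -i :2379 selects processes bound to that port; xargs -r skips the kill if nothing matched
  sudo lsof -t -i :2379 | xargs -r sudo kill
]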
I1107 17:13:32.619660 165743 out.go:177]
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
*
* ==> containerd <==
* -- Logs begin at Mon 2022-11-07 17:07:38 UTC, end at Mon 2022-11-07 17:13:33 UTC. --
Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.864870660Z" level=error msg="StopPodSandbox for \"\\\"Using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"\\\"Using\": not found"
Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.879687889Z" level=info msg="StopPodSandbox for \"this\""
Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.879728872Z" level=error msg="StopPodSandbox for \"this\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"this\": not found"
Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.894595969Z" level=info msg="StopPodSandbox for \"endpoint\""
Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.894640594Z" level=error msg="StopPodSandbox for \"endpoint\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint\": not found"
Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.909779827Z" level=info msg="StopPodSandbox for \"is\""
Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.909819766Z" level=error msg="StopPodSandbox for \"is\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"is\": not found"
Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.925069979Z" level=info msg="StopPodSandbox for \"deprecated,\""
Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.925123093Z" level=error msg="StopPodSandbox for \"deprecated,\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"deprecated,\": not found"
Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.940916581Z" level=info msg="StopPodSandbox for \"please\""
Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.940969746Z" level=error msg="StopPodSandbox for \"please\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"please\": not found"
Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.956375043Z" level=info msg="StopPodSandbox for \"consider\""
Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.956425277Z" level=error msg="StopPodSandbox for \"consider\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"consider\": not found"
Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.971528771Z" level=info msg="StopPodSandbox for \"using\""
Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.971574795Z" level=error msg="StopPodSandbox for \"using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"using\": not found"
Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.987107574Z" level=info msg="StopPodSandbox for \"full\""
Nov 07 17:13:31 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:31.987161603Z" level=error msg="StopPodSandbox for \"full\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"full\": not found"
Nov 07 17:13:32 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:32.002503858Z" level=info msg="StopPodSandbox for \"URL\""
Nov 07 17:13:32 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:32.002563853Z" level=error msg="StopPodSandbox for \"URL\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL\": not found"
Nov 07 17:13:32 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:32.017614591Z" level=info msg="StopPodSandbox for \"format\\\"\""
Nov 07 17:13:32 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:32.017655062Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
Nov 07 17:13:32 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:32.033595722Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
Nov 07 17:13:32 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:32.033644064Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
Nov 07 17:13:32 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:32.049862204Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
Nov 07 17:13:32 test-preload-170735 containerd[3003]: time="2022-11-07T17:13:32.049903989Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
*
* ==> describe nodes <==
*
* ==> dmesg <==
* [ +0.007365] FS-Cache: O-key=[8] '1ca20f0200000000'
[ +0.004971] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
[ +0.007955] FS-Cache: N-cookie d=00000000e1ebe1e0{9p.inode} n=00000000b53001db
[ +0.008740] FS-Cache: N-key=[8] '1ca20f0200000000'
[ +0.435035] FS-Cache: Duplicate cookie detected
[ +0.004685] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
[ +0.006792] FS-Cache: O-cookie d=00000000e1ebe1e0{9p.inode} n=0000000049910c82
[ +0.007358] FS-Cache: O-key=[8] '21a20f0200000000'
[ +0.004958] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
[ +0.006600] FS-Cache: N-cookie d=00000000e1ebe1e0{9p.inode} n=00000000b4cbcea0
[ +0.008738] FS-Cache: N-key=[8] '21a20f0200000000'
[Nov 7 16:53] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[Nov 7 17:05] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-73d930ae71b0
[ +0.000012] ll header: 00000000: 02 42 e2 67 c1 53 02 42 c0 a8 3a 02 08 00
[ +1.024597] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-73d930ae71b0
[ +0.000006] ll header: 00000000: 02 42 e2 67 c1 53 02 42 c0 a8 3a 02 08 00
[ +2.011803] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-73d930ae71b0
[ +0.000030] ll header: 00000000: 02 42 e2 67 c1 53 02 42 c0 a8 3a 02 08 00
[ +4.223544] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-73d930ae71b0
[ +0.000031] ll header: 00000000: 02 42 e2 67 c1 53 02 42 c0 a8 3a 02 08 00
[Nov 7 17:06] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-73d930ae71b0
[ +0.000031] ll header: 00000000: 02 42 e2 67 c1 53 02 42 c0 a8 3a 02 08 00
[Nov 7 17:09] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000789] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.014707] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
*
* ==> kernel <==
* 17:13:33 up 2:56, 0 users, load average: 0.24, 0.59, 0.89
Linux test-preload-170735 5.15.0-1021-gcp #28~20.04.1-Ubuntu SMP Mon Oct 17 11:37:54 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.5 LTS"
*
* ==> kubelet <==
* -- Logs begin at Mon 2022-11-07 17:07:38 UTC, end at Mon 2022-11-07 17:13:33 UTC. --
Nov 07 17:11:55 test-preload-170735 kubelet[4309]: I1107 17:11:55.154694 4309 scope.go:110] "RemoveContainer" containerID="219f5216a4e8bc821bf33efb21542714b74cdc65a8ad7bc02582f4633cbd6da9"
Nov 07 17:11:55 test-preload-170735 kubelet[4309]: I1107 17:11:55.155028 4309 scope.go:110] "RemoveContainer" containerID="203dc2ae5376c33011729d048e58daa40a5f4e4b1a5f63c98256939ec60a760a"
Nov 07 17:11:55 test-preload-170735 kubelet[4309]: E1107 17:11:55.155418 4309 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-170735_kube-system(62ea0ae7f0dd287c41e3fc4d83f43bcc)\"" pod="kube-system/etcd-test-preload-170735" podUID=62ea0ae7f0dd287c41e3fc4d83f43bcc
Nov 07 17:11:56 test-preload-170735 kubelet[4309]: I1107 17:11:56.538873 4309 scope.go:110] "RemoveContainer" containerID="203dc2ae5376c33011729d048e58daa40a5f4e4b1a5f63c98256939ec60a760a"
Nov 07 17:11:56 test-preload-170735 kubelet[4309]: E1107 17:11:56.539298 4309 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-170735_kube-system(62ea0ae7f0dd287c41e3fc4d83f43bcc)\"" pod="kube-system/etcd-test-preload-170735" podUID=62ea0ae7f0dd287c41e3fc4d83f43bcc
Nov 07 17:11:57 test-preload-170735 kubelet[4309]: I1107 17:11:57.160726 4309 scope.go:110] "RemoveContainer" containerID="203dc2ae5376c33011729d048e58daa40a5f4e4b1a5f63c98256939ec60a760a"
Nov 07 17:11:57 test-preload-170735 kubelet[4309]: E1107 17:11:57.161056 4309 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-170735_kube-system(62ea0ae7f0dd287c41e3fc4d83f43bcc)\"" pod="kube-system/etcd-test-preload-170735" podUID=62ea0ae7f0dd287c41e3fc4d83f43bcc
Nov 07 17:11:58 test-preload-170735 kubelet[4309]: I1107 17:11:58.162285 4309 scope.go:110] "RemoveContainer" containerID="203dc2ae5376c33011729d048e58daa40a5f4e4b1a5f63c98256939ec60a760a"
Nov 07 17:11:58 test-preload-170735 kubelet[4309]: E1107 17:11:58.162639 4309 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-170735_kube-system(62ea0ae7f0dd287c41e3fc4d83f43bcc)\"" pod="kube-system/etcd-test-preload-170735" podUID=62ea0ae7f0dd287c41e3fc4d83f43bcc
Nov 07 17:12:11 test-preload-170735 kubelet[4309]: I1107 17:12:11.703510 4309 scope.go:110] "RemoveContainer" containerID="203dc2ae5376c33011729d048e58daa40a5f4e4b1a5f63c98256939ec60a760a"
Nov 07 17:12:11 test-preload-170735 kubelet[4309]: E1107 17:12:11.703871 4309 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-170735_kube-system(62ea0ae7f0dd287c41e3fc4d83f43bcc)\"" pod="kube-system/etcd-test-preload-170735" podUID=62ea0ae7f0dd287c41e3fc4d83f43bcc
Nov 07 17:12:25 test-preload-170735 kubelet[4309]: I1107 17:12:25.704094 4309 scope.go:110] "RemoveContainer" containerID="203dc2ae5376c33011729d048e58daa40a5f4e4b1a5f63c98256939ec60a760a"
Nov 07 17:12:25 test-preload-170735 kubelet[4309]: E1107 17:12:25.704442 4309 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-170735_kube-system(62ea0ae7f0dd287c41e3fc4d83f43bcc)\"" pod="kube-system/etcd-test-preload-170735" podUID=62ea0ae7f0dd287c41e3fc4d83f43bcc
Nov 07 17:12:40 test-preload-170735 kubelet[4309]: I1107 17:12:40.703609 4309 scope.go:110] "RemoveContainer" containerID="203dc2ae5376c33011729d048e58daa40a5f4e4b1a5f63c98256939ec60a760a"
Nov 07 17:12:40 test-preload-170735 kubelet[4309]: E1107 17:12:40.703993 4309 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-170735_kube-system(62ea0ae7f0dd287c41e3fc4d83f43bcc)\"" pod="kube-system/etcd-test-preload-170735" podUID=62ea0ae7f0dd287c41e3fc4d83f43bcc
Nov 07 17:12:54 test-preload-170735 kubelet[4309]: I1107 17:12:54.703818 4309 scope.go:110] "RemoveContainer" containerID="203dc2ae5376c33011729d048e58daa40a5f4e4b1a5f63c98256939ec60a760a"
Nov 07 17:12:54 test-preload-170735 kubelet[4309]: E1107 17:12:54.704169 4309 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-170735_kube-system(62ea0ae7f0dd287c41e3fc4d83f43bcc)\"" pod="kube-system/etcd-test-preload-170735" podUID=62ea0ae7f0dd287c41e3fc4d83f43bcc
Nov 07 17:13:07 test-preload-170735 kubelet[4309]: I1107 17:13:07.703611 4309 scope.go:110] "RemoveContainer" containerID="203dc2ae5376c33011729d048e58daa40a5f4e4b1a5f63c98256939ec60a760a"
Nov 07 17:13:07 test-preload-170735 kubelet[4309]: E1107 17:13:07.703938 4309 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-170735_kube-system(62ea0ae7f0dd287c41e3fc4d83f43bcc)\"" pod="kube-system/etcd-test-preload-170735" podUID=62ea0ae7f0dd287c41e3fc4d83f43bcc
Nov 07 17:13:22 test-preload-170735 kubelet[4309]: I1107 17:13:22.703867 4309 scope.go:110] "RemoveContainer" containerID="203dc2ae5376c33011729d048e58daa40a5f4e4b1a5f63c98256939ec60a760a"
Nov 07 17:13:22 test-preload-170735 kubelet[4309]: E1107 17:13:22.704422 4309 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-170735_kube-system(62ea0ae7f0dd287c41e3fc4d83f43bcc)\"" pod="kube-system/etcd-test-preload-170735" podUID=62ea0ae7f0dd287c41e3fc4d83f43bcc
Nov 07 17:13:30 test-preload-170735 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
Nov 07 17:13:30 test-preload-170735 kubelet[4309]: I1107 17:13:30.025109 4309 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
Nov 07 17:13:30 test-preload-170735 systemd[1]: kubelet.service: Succeeded.
Nov 07 17:13:30 test-preload-170735 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
-- /stdout --
** stderr **
E1107 17:13:33.640482 170436 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
! unable to fetch logs for: describe nodes
** /stderr **
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-170735 -n test-preload-170735
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-170735 -n test-preload-170735: exit status 2 (339.575218ms)
-- stdout --
Stopped
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "test-preload-170735" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-170735" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-linux-amd64 delete -p test-preload-170735
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-170735: (1.996488777s)
--- FAIL: TestPreload (360.35s)