=== RUN TestPreload
preload_test.go:44: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-165950 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4
E1031 17:00:37.107419 10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-165950 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4: (1m1.343246526s)
preload_test.go:57: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-165950 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-165950 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (1.803309779s)
preload_test.go:67: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-165950 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.6
E1031 17:00:53.550006 10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/ingress-addon-legacy-164433/client.crt: no such file or directory
E1031 17:03:41.813120 10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/functional-164150/client.crt: no such file or directory
E1031 17:04:14.061215 10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/addons-163622/client.crt: no such file or directory
E1031 17:05:04.857712 10097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/functional-164150/client.crt: no such file or directory
preload_test.go:67: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p test-preload-165950 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.6: exit status 81 (4m52.69353006s)
-- stdout --
* [test-preload-165950] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=15232
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/15232-3650/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3650/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
* Using the docker driver based on existing profile
* Starting control plane node test-preload-165950 in cluster test-preload-165950
* Pulling base image ...
* Downloading Kubernetes v1.24.6 preload ...
* Updating the running docker "test-preload-165950" container ...
* Preparing Kubernetes v1.24.6 on containerd 1.6.9 ...
* Configuring CNI (Container Networking Interface) ...
X Problems detected in kubelet:
Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661461 4266 projected.go:192] Error preparing data for projected volume kube-api-access-8mn6l for pod kube-system/storage-provisioner: failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661580 4266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5031015c-081e-49e2-8d46-09fd879a755c-kube-api-access-8mn6l podName:5031015c-081e-49e2-8d46-09fd879a755c nodeName:}" failed. No retries permitted until 2022-10-31 17:01:36.661550988 +0000 UTC m=+9.136031253 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-8mn6l" (UniqueName: "kubernetes.io/projected/5031015c-081e-49e2-8d46-09fd879a755c-kube-api-access-8mn6l") pod "storage-provisioner" (UID: "5031015c-081e-49e2-8d46-09fd879a755c") : failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
Oct 31 17:01:35 test-preload-165950 kubelet[4266]: W1031 17:01:35.661699 4266 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
-- /stdout --
** stderr **
I1031 17:00:53.400798 123788 out.go:296] Setting OutFile to fd 1 ...
I1031 17:00:53.400923 123788 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 17:00:53.400937 123788 out.go:309] Setting ErrFile to fd 2...
I1031 17:00:53.400944 123788 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 17:00:53.401087 123788 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-3650/.minikube/bin
I1031 17:00:53.401650 123788 out.go:303] Setting JSON to false
I1031 17:00:53.402675 123788 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2603,"bootTime":1667233050,"procs":288,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1031 17:00:53.402746 123788 start.go:126] virtualization: kvm guest
I1031 17:00:53.405697 123788 out.go:177] * [test-preload-165950] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
I1031 17:00:53.407231 123788 out.go:177] - MINIKUBE_LOCATION=15232
I1031 17:00:53.407135 123788 notify.go:220] Checking for updates...
I1031 17:00:53.411021 123788 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1031 17:00:53.412510 123788 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15232-3650/kubeconfig
I1031 17:00:53.414023 123788 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3650/.minikube
I1031 17:00:53.415484 123788 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I1031 17:00:53.417194 123788 config.go:180] Loaded profile config "test-preload-165950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I1031 17:00:53.419061 123788 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
I1031 17:00:53.420384 123788 driver.go:365] Setting default libvirt URI to qemu:///system
I1031 17:00:53.448510 123788 docker.go:137] docker version: linux-20.10.21
I1031 17:00:53.448586 123788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1031 17:00:53.541306 123788 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:39 SystemTime:2022-10-31 17:00:53.467933423 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1031 17:00:53.541406 123788 docker.go:254] overlay module found
I1031 17:00:53.543484 123788 out.go:177] * Using the docker driver based on existing profile
I1031 17:00:53.544875 123788 start.go:282] selected driver: docker
I1031 17:00:53.544894 123788 start.go:808] validating driver "docker" against &{Name:test-preload-165950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-165950 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1031 17:00:53.544985 123788 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1031 17:00:53.545708 123788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1031 17:00:53.643264 123788 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:39 SystemTime:2022-10-31 17:00:53.565995365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1031 17:00:53.643528 123788 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1031 17:00:53.643548 123788 cni.go:95] Creating CNI manager for ""
I1031 17:00:53.643554 123788 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I1031 17:00:53.643565 123788 start_flags.go:317] config:
{Name:test-preload-165950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-165950 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1031 17:00:53.645909 123788 out.go:177] * Starting control plane node test-preload-165950 in cluster test-preload-165950
I1031 17:00:53.647496 123788 cache.go:120] Beginning downloading kic base image for docker with containerd
I1031 17:00:53.648990 123788 out.go:177] * Pulling base image ...
I1031 17:00:53.650498 123788 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I1031 17:00:53.650525 123788 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
I1031 17:00:53.672685 123788 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
I1031 17:00:53.672711 123788 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
I1031 17:00:53.749918 123788 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
I1031 17:00:53.750010 123788 cache.go:57] Caching tarball of preloaded images
I1031 17:00:53.750392 123788 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I1031 17:00:53.752786 123788 out.go:177] * Downloading Kubernetes v1.24.6 preload ...
I1031 17:00:53.754251 123788 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I1031 17:00:53.854172 123788 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4?checksum=md5:0de094b674a9198bc47721c3b23603d5 -> /home/jenkins/minikube-integration/15232-3650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
I1031 17:00:56.444223 123788 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I1031 17:00:56.444331 123788 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15232-3650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I1031 17:00:57.333820 123788 cache.go:60] Finished verifying existence of preloaded tar for v1.24.6 on containerd
I1031 17:00:57.333953 123788 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/config.json ...
I1031 17:00:57.334153 123788 cache.go:208] Successfully downloaded all kic artifacts
I1031 17:00:57.334182 123788 start.go:364] acquiring machines lock for test-preload-165950: {Name:mk5e2148763cdda5260ddcfe6c84de7081b8765d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1031 17:00:57.334270 123788 start.go:368] acquired machines lock for "test-preload-165950" in 68.35µs
I1031 17:00:57.334286 123788 start.go:96] Skipping create...Using existing machine configuration
I1031 17:00:57.334291 123788 fix.go:55] fixHost starting:
I1031 17:00:57.334493 123788 cli_runner.go:164] Run: docker container inspect test-preload-165950 --format={{.State.Status}}
I1031 17:00:57.357514 123788 fix.go:103] recreateIfNeeded on test-preload-165950: state=Running err=<nil>
W1031 17:00:57.357546 123788 fix.go:129] unexpected machine state, will restart: <nil>
I1031 17:00:57.360746 123788 out.go:177] * Updating the running docker "test-preload-165950" container ...
I1031 17:00:57.362040 123788 machine.go:88] provisioning docker machine ...
I1031 17:00:57.362068 123788 ubuntu.go:169] provisioning hostname "test-preload-165950"
I1031 17:00:57.362115 123788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-165950
I1031 17:00:57.384936 123788 main.go:134] libmachine: Using SSH client type: native
I1031 17:00:57.385100 123788 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49277 <nil> <nil>}
I1031 17:00:57.385117 123788 main.go:134] libmachine: About to run SSH command:
sudo hostname test-preload-165950 && echo "test-preload-165950" | sudo tee /etc/hostname
I1031 17:00:57.508480 123788 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-165950
I1031 17:00:57.508560 123788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-165950
I1031 17:00:57.532320 123788 main.go:134] libmachine: Using SSH client type: native
I1031 17:00:57.532481 123788 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49277 <nil> <nil>}
I1031 17:00:57.532510 123788 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\stest-preload-165950' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-165950/g' /etc/hosts;
else
echo '127.0.1.1 test-preload-165950' | sudo tee -a /etc/hosts;
fi
fi
I1031 17:00:57.648181 123788 main.go:134] libmachine: SSH cmd err, output: <nil>:
I1031 17:00:57.648212 123788 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15232-3650/.minikube CaCertPath:/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15232-3650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15232-3650/.minikube}
I1031 17:00:57.648234 123788 ubuntu.go:177] setting up certificates
I1031 17:00:57.648244 123788 provision.go:83] configureAuth start
I1031 17:00:57.648321 123788 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-165950
I1031 17:00:57.672013 123788 provision.go:138] copyHostCerts
I1031 17:00:57.672105 123788 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3650/.minikube/ca.pem, removing ...
I1031 17:00:57.672125 123788 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3650/.minikube/ca.pem
I1031 17:00:57.672195 123788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15232-3650/.minikube/ca.pem (1078 bytes)
I1031 17:00:57.672283 123788 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3650/.minikube/cert.pem, removing ...
I1031 17:00:57.672295 123788 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3650/.minikube/cert.pem
I1031 17:00:57.672323 123788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15232-3650/.minikube/cert.pem (1123 bytes)
I1031 17:00:57.672372 123788 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3650/.minikube/key.pem, removing ...
I1031 17:00:57.672381 123788 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3650/.minikube/key.pem
I1031 17:00:57.672407 123788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15232-3650/.minikube/key.pem (1679 bytes)
I1031 17:00:57.672455 123788 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15232-3650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca-key.pem org=jenkins.test-preload-165950 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-165950]
I1031 17:00:57.797650 123788 provision.go:172] copyRemoteCerts
I1031 17:00:57.797711 123788 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1031 17:00:57.797742 123788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-165950
I1031 17:00:57.822580 123788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/test-preload-165950/id_rsa Username:docker}
I1031 17:00:57.907487 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1031 17:00:57.925574 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I1031 17:00:57.945093 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1031 17:00:57.962901 123788 provision.go:86] duration metric: configureAuth took 314.615745ms
I1031 17:00:57.962927 123788 ubuntu.go:193] setting minikube options for container-runtime
I1031 17:00:57.963104 123788 config.go:180] Loaded profile config "test-preload-165950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.6
I1031 17:00:57.963117 123788 machine.go:91] provisioned docker machine in 601.061986ms
I1031 17:00:57.963123 123788 start.go:300] post-start starting for "test-preload-165950" (driver="docker")
I1031 17:00:57.963131 123788 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1031 17:00:57.963167 123788 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1031 17:00:57.963199 123788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-165950
I1031 17:00:57.987686 123788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/test-preload-165950/id_rsa Username:docker}
I1031 17:00:58.071508 123788 ssh_runner.go:195] Run: cat /etc/os-release
I1031 17:00:58.074511 123788 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1031 17:00:58.074535 123788 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1031 17:00:58.074543 123788 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1031 17:00:58.074549 123788 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I1031 17:00:58.074562 123788 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-3650/.minikube/addons for local assets ...
I1031 17:00:58.074617 123788 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-3650/.minikube/files for local assets ...
I1031 17:00:58.074698 123788 filesync.go:149] local asset: /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/100972.pem -> 100972.pem in /etc/ssl/certs
I1031 17:00:58.074797 123788 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1031 17:00:58.082460 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/100972.pem --> /etc/ssl/certs/100972.pem (1708 bytes)
I1031 17:00:58.099618 123788 start.go:303] post-start completed in 136.482468ms
I1031 17:00:58.099687 123788 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1031 17:00:58.099718 123788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-165950
I1031 17:00:58.122912 123788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/test-preload-165950/id_rsa Username:docker}
I1031 17:00:58.204709 123788 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1031 17:00:58.208921 123788 fix.go:57] fixHost completed within 874.623341ms
I1031 17:00:58.208952 123788 start.go:83] releasing machines lock for "test-preload-165950", held for 874.669884ms
I1031 17:00:58.209045 123788 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-165950
I1031 17:00:58.231368 123788 ssh_runner.go:195] Run: systemctl --version
I1031 17:00:58.231411 123788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-165950
I1031 17:00:58.231475 123788 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I1031 17:00:58.231537 123788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-165950
I1031 17:00:58.254909 123788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/test-preload-165950/id_rsa Username:docker}
I1031 17:00:58.256772 123788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/test-preload-165950/id_rsa Username:docker}
I1031 17:00:58.359932 123788 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1031 17:00:58.370867 123788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1031 17:00:58.380533 123788 docker.go:189] disabling docker service ...
I1031 17:00:58.380587 123788 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1031 17:00:58.390611 123788 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1031 17:00:58.400540 123788 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1031 17:00:58.503571 123788 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1031 17:00:58.601357 123788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1031 17:00:58.610768 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1031 17:00:58.623982 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
I1031 17:00:58.631971 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
I1031 17:00:58.639948 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
I1031 17:00:58.647731 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
I1031 17:00:58.655857 123788 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1031 17:00:58.662159 123788 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1031 17:00:58.668160 123788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1031 17:00:58.765634 123788 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1031 17:00:58.838270 123788 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
I1031 17:00:58.838340 123788 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1031 17:00:58.842645 123788 start.go:472] Will wait 60s for crictl version
I1031 17:00:58.842710 123788 ssh_runner.go:195] Run: sudo crictl version
I1031 17:00:58.873990 123788 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
stdout:
stderr:
time="2022-10-31T17:00:58Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
I1031 17:01:09.921926 123788 ssh_runner.go:195] Run: sudo crictl version
I1031 17:01:09.945289 123788 start.go:481] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.6.9
RuntimeApiVersion: v1alpha2
I1031 17:01:09.945349 123788 ssh_runner.go:195] Run: containerd --version
I1031 17:01:09.970198 123788 ssh_runner.go:195] Run: containerd --version
I1031 17:01:09.996976 123788 out.go:177] * Preparing Kubernetes v1.24.6 on containerd 1.6.9 ...
I1031 17:01:09.998646 123788 cli_runner.go:164] Run: docker network inspect test-preload-165950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1031 17:01:10.021855 123788 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
I1031 17:01:10.025738 123788 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I1031 17:01:10.025795 123788 ssh_runner.go:195] Run: sudo crictl images --output json
I1031 17:01:10.050811 123788 containerd.go:549] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.6". assuming images are not preloaded.
I1031 17:01:10.050875 123788 ssh_runner.go:195] Run: which lz4
I1031 17:01:10.053855 123788 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1031 17:01:10.056765 123788 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/preloaded.tar.lz4': No such file or directory
I1031 17:01:10.056789 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458739102 bytes)
I1031 17:01:11.012204 123788 containerd.go:496] Took 0.958385 seconds to copy over tarball
I1031 17:01:11.012279 123788 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I1031 17:01:13.898440 123788 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.886126931s)
I1031 17:01:13.898474 123788 containerd.go:503] Took 2.886238 seconds to extract the tarball
I1031 17:01:13.898485 123788 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1031 17:01:13.924871 123788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1031 17:01:14.027291 123788 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1031 17:01:14.105585 123788 ssh_runner.go:195] Run: sudo crictl images --output json
I1031 17:01:14.153742 123788 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.24.6 k8s.gcr.io/kube-controller-manager:v1.24.6 k8s.gcr.io/kube-scheduler:v1.24.6 k8s.gcr.io/kube-proxy:v1.24.6 k8s.gcr.io/pause:3.7 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
I1031 17:01:14.153832 123788 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I1031 17:01:14.153879 123788 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.24.6
I1031 17:01:14.153933 123788 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.24.6
I1031 17:01:14.153950 123788 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.3-0
I1031 17:01:14.153997 123788 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.24.6
I1031 17:01:14.154093 123788 image.go:134] retrieving image: k8s.gcr.io/pause:3.7
I1031 17:01:14.154143 123788 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
I1031 17:01:14.154158 123788 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.24.6
I1031 17:01:14.154858 123788 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.24.6: Error: No such image: k8s.gcr.io/kube-scheduler:v1.24.6
I1031 17:01:14.154930 123788 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.24.6: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.24.6
I1031 17:01:14.155027 123788 image.go:177] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.6: Error: No such image: k8s.gcr.io/coredns/coredns:v1.8.6
I1031 17:01:14.155037 123788 image.go:177] daemon lookup for k8s.gcr.io/pause:3.7: Error: No such image: k8s.gcr.io/pause:3.7
I1031 17:01:14.155035 123788 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I1031 17:01:14.155041 123788 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.5.3-0: Error: No such image: k8s.gcr.io/etcd:3.5.3-0
I1031 17:01:14.154859 123788 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.24.6: Error: No such image: k8s.gcr.io/kube-proxy:v1.24.6
I1031 17:01:14.155056 123788 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.24.6: Error: No such image: k8s.gcr.io/kube-apiserver:v1.24.6
I1031 17:01:14.639297 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.3-0"
I1031 17:01:14.649105 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.6"
I1031 17:01:14.661797 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
I1031 17:01:14.676815 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.7"
I1031 17:01:14.688769 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.6"
I1031 17:01:14.693655 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.6"
I1031 17:01:14.714906 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.6"
I1031 17:01:14.949489 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
I1031 17:01:15.471396 123788 cache_images.go:116] "k8s.gcr.io/etcd:3.5.3-0" needs transfer: "k8s.gcr.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
I1031 17:01:15.471444 123788 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.3-0
I1031 17:01:15.471487 123788 ssh_runner.go:195] Run: which crictl
I1031 17:01:15.667668 123788 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6": (1.005826513s)
I1031 17:01:15.667922 123788 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
I1031 17:01:15.667990 123788 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
I1031 17:01:15.668043 123788 ssh_runner.go:195] Run: which crictl
I1031 17:01:15.667834 123788 cache_images.go:116] "k8s.gcr.io/pause:3.7" needs transfer: "k8s.gcr.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
I1031 17:01:15.668185 123788 cri.go:216] Removing image: k8s.gcr.io/pause:3.7
I1031 17:01:15.668229 123788 ssh_runner.go:195] Run: which crictl
I1031 17:01:15.667889 123788 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.6": (1.018754573s)
I1031 17:01:15.668329 123788 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.24.6" needs transfer: "k8s.gcr.io/kube-scheduler:v1.24.6" does not exist at hash "c786c777a4e1c21907e77042428837645fa382d3bd14925cf78f0d163d6d332e" in container runtime
I1031 17:01:15.668357 123788 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.24.6
I1031 17:01:15.668378 123788 ssh_runner.go:195] Run: which crictl
I1031 17:01:15.675016 123788 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.24.6" needs transfer: "k8s.gcr.io/kube-apiserver:v1.24.6" does not exist at hash "860f263331c9513ddab44d4d8a9a4a7304313b3aa0776decc1d7fc6acdd69ab0" in container runtime
I1031 17:01:15.675057 123788 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.24.6
I1031 17:01:15.675083 123788 ssh_runner.go:195] Run: which crictl
I1031 17:01:15.748343 123788 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.6": (1.05465106s)
I1031 17:01:15.748403 123788 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.24.6" needs transfer: "k8s.gcr.io/kube-proxy:v1.24.6" does not exist at hash "0bb39497ab33bb5f8aaff88ced53a5fcd360fcf5da647609619d4f5c8f1483d2" in container runtime
I1031 17:01:15.748433 123788 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.24.6
I1031 17:01:15.748479 123788 ssh_runner.go:195] Run: which crictl
I1031 17:01:15.773417 123788 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.6": (1.058470688s)
I1031 17:01:15.773475 123788 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.24.6" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.24.6" does not exist at hash "c6c20157a42337ecb7675be59e1dc34bc5a91288c7eeac1e30ec97767a9055eb" in container runtime
I1031 17:01:15.773543 123788 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.24.6
I1031 17:01:15.773610 123788 ssh_runner.go:195] Run: which crictl
I1031 17:01:15.796393 123788 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
I1031 17:01:15.796447 123788 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I1031 17:01:15.796450 123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0
I1031 17:01:15.796474 123788 ssh_runner.go:195] Run: which crictl
I1031 17:01:15.796543 123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
I1031 17:01:15.796574 123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.6
I1031 17:01:15.796615 123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7
I1031 17:01:15.796661 123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.6
I1031 17:01:15.796762 123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6
I1031 17:01:15.796793 123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.6
I1031 17:01:15.849303 123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1031 17:01:16.518326 123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0
I1031 17:01:16.518410 123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.6
I1031 17:01:16.518448 123788 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
I1031 17:01:16.518466 123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
I1031 17:01:16.518546 123788 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
I1031 17:01:16.518609 123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7
I1031 17:01:16.518661 123788 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
I1031 17:01:16.518667 123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.6
I1031 17:01:16.519958 123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.6
I1031 17:01:16.520022 123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.6
I1031 17:01:16.520164 123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
I1031 17:01:16.520245 123788 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
I1031 17:01:16.522338 123788 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
I1031 17:01:16.522367 123788 containerd.go:233] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
I1031 17:01:16.522400 123788 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
I1031 17:01:16.522738 123788 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
I1031 17:01:16.522918 123788 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
I1031 17:01:16.523532 123788 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
I1031 17:01:23.289265 123788 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0: (6.766830961s)
I1031 17:01:23.289325 123788 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 from cache
I1031 17:01:23.289354 123788 containerd.go:233] Loading image: /var/lib/minikube/images/coredns_v1.8.6
I1031 17:01:23.289408 123788 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
I1031 17:01:24.806710 123788 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6: (1.517273083s)
I1031 17:01:24.806742 123788 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache
I1031 17:01:24.806797 123788 containerd.go:233] Loading image: /var/lib/minikube/images/pause_3.7
I1031 17:01:24.806862 123788 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7
I1031 17:01:24.985051 123788 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 from cache
I1031 17:01:24.985104 123788 containerd.go:233] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I1031 17:01:24.985171 123788 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
I1031 17:01:25.471171 123788 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I1031 17:01:25.471237 123788 cache_images.go:92] LoadImages completed in 11.317456964s
W1031 17:01:25.471403 123788 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.6: no such file or directory
I1031 17:01:25.471469 123788 ssh_runner.go:195] Run: sudo crictl info
I1031 17:01:25.549548 123788 cni.go:95] Creating CNI manager for ""
I1031 17:01:25.549585 123788 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I1031 17:01:25.549601 123788 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1031 17:01:25.549618 123788 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-165950 NodeName:test-preload-165950 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
I1031 17:01:25.549786 123788 kubeadm.go:161] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.67.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "test-preload-165950"
kubeletExtraArgs:
node-ip: 192.168.67.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.6
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1031 17:01:25.549897 123788 kubeadm.go:962] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-165950 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.24.6 ClusterName:test-preload-165950 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I1031 17:01:25.549966 123788 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.6
I1031 17:01:25.559048 123788 binaries.go:44] Found k8s binaries, skipping transfer
I1031 17:01:25.559118 123788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1031 17:01:25.568146 123788 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (512 bytes)
I1031 17:01:25.583110 123788 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1031 17:01:25.598681 123788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
I1031 17:01:25.662413 123788 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I1031 17:01:25.666268 123788 certs.go:54] Setting up /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950 for IP: 192.168.67.2
I1031 17:01:25.666403 123788 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15232-3650/.minikube/ca.key
I1031 17:01:25.666458 123788 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15232-3650/.minikube/proxy-client-ca.key
I1031 17:01:25.666558 123788 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/client.key
I1031 17:01:25.666633 123788 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/apiserver.key.c7fa3a9e
I1031 17:01:25.666689 123788 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/proxy-client.key
I1031 17:01:25.666801 123788 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/10097.pem (1338 bytes)
W1031 17:01:25.666847 123788 certs.go:384] ignoring /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/10097_empty.pem, impossibly tiny 0 bytes
I1031 17:01:25.666873 123788 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca-key.pem (1679 bytes)
I1031 17:01:25.666908 123788 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem (1078 bytes)
I1031 17:01:25.666943 123788 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/cert.pem (1123 bytes)
I1031 17:01:25.666974 123788 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/key.pem (1679 bytes)
I1031 17:01:25.667033 123788 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/100972.pem (1708 bytes)
I1031 17:01:25.667673 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1031 17:01:25.690455 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1031 17:01:25.763539 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1031 17:01:25.790140 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1031 17:01:25.861083 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1031 17:01:25.879599 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1031 17:01:25.898515 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1031 17:01:25.922119 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1031 17:01:25.959078 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/100972.pem --> /usr/share/ca-certificates/100972.pem (1708 bytes)
I1031 17:01:25.980032 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1031 17:01:26.000424 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/certs/10097.pem --> /usr/share/ca-certificates/10097.pem (1338 bytes)
I1031 17:01:26.053381 123788 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1031 17:01:26.067535 123788 ssh_runner.go:195] Run: openssl version
I1031 17:01:26.072627 123788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1031 17:01:26.080989 123788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1031 17:01:26.085427 123788 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Oct 31 16:37 /usr/share/ca-certificates/minikubeCA.pem
I1031 17:01:26.085503 123788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1031 17:01:26.091369 123788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1031 17:01:26.099802 123788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10097.pem && ln -fs /usr/share/ca-certificates/10097.pem /etc/ssl/certs/10097.pem"
I1031 17:01:26.108642 123788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10097.pem
I1031 17:01:26.112303 123788 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Oct 31 16:41 /usr/share/ca-certificates/10097.pem
I1031 17:01:26.112374 123788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10097.pem
I1031 17:01:26.125705 123788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10097.pem /etc/ssl/certs/51391683.0"
I1031 17:01:26.133946 123788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100972.pem && ln -fs /usr/share/ca-certificates/100972.pem /etc/ssl/certs/100972.pem"
I1031 17:01:26.142159 123788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100972.pem
I1031 17:01:26.145685 123788 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Oct 31 16:41 /usr/share/ca-certificates/100972.pem
I1031 17:01:26.145748 123788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100972.pem
I1031 17:01:26.150967 123788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100972.pem /etc/ssl/certs/3ec20f2e.0"
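This block is a by-hand c_rehash: OpenSSL locates trusted CAs in /etc/ssl/certs through symlinks named <subject-hash>.0, so each cert is linked into /etc/ssl/certs, hashed with openssl x509 -hash, and linked again under the hash name (b5213941.0 for minikubeCA, as computed above). The same dance for a single cert looks like:

    # Link one CA cert under its OpenSSL subject-hash name
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"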
I1031 17:01:26.158917 123788 kubeadm.go:396] StartCluster: {Name:test-preload-165950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-165950 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1031 17:01:26.159010 123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1031 17:01:26.159074 123788 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1031 17:01:26.185271 123788 cri.go:87] found id: "9b493051380ea5f84db2bf6d6b500816b4bfc7d73549a3fc267337671408794c"
I1031 17:01:26.185298 123788 cri.go:87] found id: "9523dbf74df3ff703859059525cf2e837089463bffc76ca75ed4636d64233fbf"
I1031 17:01:26.185306 123788 cri.go:87] found id: "92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e"
I1031 17:01:26.185314 123788 cri.go:87] found id: ""
I1031 17:01:26.185368 123788 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I1031 17:01:26.219799 123788 cri.go:114] JSON = [{"ociVersion":"1.0.2-dev","id":"08af09ff6dc7343fadd5f527821607e0a139864f2cf045f41ddb8a637dd3684e","pid":2647,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/08af09ff6dc7343fadd5f527821607e0a139864f2cf045f41ddb8a637dd3684e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/08af09ff6dc7343fadd5f527821607e0a139864f2cf045f41ddb8a637dd3684e/rootfs","created":"2022-10-31T17:00:48.864140497Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/coredns/coredns:v1.8.6","io.kubernetes.cri.sandbox-id":"ddd9b9fed95f4ffb40c4492a0807846ab0d1f6762b0d1b8ddef6804023ccf4d2","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-8wsrc","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"08b4cd316b9b6815f404ec4b186454048a3e6989dcb7c0423dfdfc17e82f6489","pid":2192,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/08b4cd316b9b6815f404ec4b186454048a3e6989dcb7c0423dfdfc17e82f6489","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/08b4cd316b9b6815f404ec4b186454048a3e6989dcb7c0423dfdfc17e82f6489/rootfs","created":"2022-10-31T17:00:31.051060479Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"08b4cd316b9b6815f404ec4b186454048a3e6989dcb7c0423dfdfc17e82f6489","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-54b5q_0ff95637-a367-440b-918f-495391f2f1cf","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-54b5q","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0e1033e5cdbc18bbf5d5b9fb465ba7904c14d9cf096e385708e56a44984ea823","pid":1627,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e1033e5cdbc18bbf5d5b9fb465ba7904c14d9cf096e385708e56a44984ea823","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e1033e5cdbc18bbf5d5b9fb465ba7904c14d9cf096e385708e56a44984ea823/rootfs","created":"2022-10-31T17:00:11.805153802Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/etcd:3.5.3-0","io.kubernetes.cri.sandbox-id":"bd161590496d96dfd772253e8fc04aa2ace241cd015a3e030edb9980f0002865","io.kubernetes.cri.sandbox-name":"etcd-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0f97f1cbba7302aeb3085b591e2b35bc859465b29b1dbeeabec247e6d5bae53f","pid":1649,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f97f1cbba7302aeb3085b591e2b35bc859465b29b1dbeeabec247e6d5bae53f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f97f1cbba7302aeb3085b591e2b35bc859465b29b1dbeeabec247e6d5bae53f/rootfs","created":"2022-10-31T17:00:11.813683549Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-apiserver:v1.24.4","io.kubernetes.cri.sandbox-id":"ad15e41729d782e1165f600243de9c56f425cdb5db116cf881eb15588717a8a9","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"10c4cf73c3ac649d2cce0512474075cbd3f6123e0f57b48360ef852593f8b322","pid":3587,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10c4cf73c3ac649d2cce0512474075cbd3f6123e0f57b48360ef852593f8b322","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10c4cf73c3ac649d2cce0512474075cbd3f6123e0f57b48360ef852593f8b322/rootfs","created":"2022-10-31T17:01:17.561189062Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"10c4cf73c3ac649d2cce0512474075cbd3f6123e0f57b48360ef852593f8b322","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-165950_745aa6453df7e4d7a2bedb8ef855e2b8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1ba56f1620b20d5793015175f468306b30d9515d63ca39cac54cc35a02a55d95","pid":3592,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ba56f1620b20d5793015175f468306b30d9515d63ca39cac54cc35a02a55d95","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ba56f1620b20d5793015175f468306b30d9515d63ca39cac54cc35a02a55d95/rootfs","created":"2022-10-31T17:01:17.563577882Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"1ba56f1620b20d5793015175f468306b30d9515d63ca39cac54cc35a02a55d95","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-54b5q_0ff95637-a367-440b-918f-495391f2f1cf","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-54b5q","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"24e78df27901fc792eb7252319ce689dd890b0de8103efd94a809d5e34ef32b4","pid":3582,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/24e78df27901fc792eb7252319ce689dd890b0de8103efd94a809d5e34ef32b4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/24e78df27901fc792eb7252319ce689dd890b0de8103efd94a809d5e34ef32b4/rootfs","created":"2022-10-31T17:01:17.563663092Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"24e78df27901fc792eb7252319ce689dd890b0de8103efd94a809d5e34ef32b4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-8wsrc_8e76d465-ae9a-4121-b7ed-1ef94dd20b7e","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-8wsrc","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4c13d0592c58fca24c43401e993684c5570ddc76b74d030729c3dcc469b40b53","pid":2448,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4c13d0592c58fca24c43401e993684c5570ddc76b74d030729c3dcc469b40b53","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4c13d0592c58fca24c43401e993684c5570ddc76b74d030729c3dcc469b40b53/rootfs","created":"2022-10-31T17:00:34.399637747Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20221004-44d545d1","io.kubernetes.cri.sandbox-id":"e7a1e00234ba1ca933146b83693ba6b5ab619fdcb5e23efb9fa200bb8ba27ef4","io.kubernetes.cri.sandbox-name":"kindnet-jljff","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"534d5230b843ffe567d7578d4a3512ebb385a7ef30b8d6ee15fe4d4b23effeb2","pid":1514,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/534d5230b843ffe567d7578d4a3512ebb385a7ef30b8d6ee15fe4d4b23effeb2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/534d5230b843ffe567d7578d4a3512ebb385a7ef30b8d6ee15fe4d4b23effeb2/rootfs","created":"2022-10-31T17:00:11.599833796Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"534d5230b843ffe567d7578d4a3512ebb385a7ef30b8d6ee15fe4d4b23effeb2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-165950_f7f285bbceeae66435f07854fddd011c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"715100db6ef995efee732d28990297ac96df6f601c86ae90c8ca69f54dc02d49","pid":1640,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/715100db6ef995efee732d28990297ac96df6f601c86ae90c8ca69f54dc02d49","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/715100db6ef995efee732d28990297ac96df6f601c86ae90c8ca69f54dc02d49/rootfs","created":"2022-10-31T17:00:11.816128062Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-controller-manager:v1.24.4","io.kubernetes.cri.sandbox-id":"534d5230b843ffe567d7578d4a3512ebb385a7ef30b8d6ee15fe4d4b23effeb2","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"72c31c9f4fc4e7f506dc770bfda502f0386b9d6cacd1ad2e55fe22dc5b40071f","pid":3586,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/72c31c9f4fc4e7f506dc770bfda502f0386b9d6cacd1ad2e55fe22dc5b40071f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/72c31c9f4fc4e7f506dc770bfda502f0386b9d6cacd1ad2e55fe22dc5b40071f/rootfs","created":"2022-10-31T17:01:17.562640113Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"72c31c9f4fc4e7f506dc770bfda502f0386b9d6cacd1ad2e55fe22dc5b40071f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-jljff_e66c31a9-8e36-4914-a086-32ba2b3dc004","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-jljff","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8370f6b2249707160393b54a7c52a463ec41c53aa1abcf098ef1833e2d80e4f1","pid":2589,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8370f6b2249707160393b54a7c52a463ec41c53aa1abcf098ef1833e2d80e4f1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8370f6b2249707160393b54a7c52a463ec41c53aa1abcf098ef1833e2d80e4f1/rootfs","created":"2022-10-31T17:00:48.750343759Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"8370f6b2249707160393b54a7c52a463ec41c53aa1abcf098ef1833e2d80e4f1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_5031015c-081e-49e2-8d46-09fd879a755c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"91216b43bb46df97ee8f7081d129445a14177f38a7784ca329cb279b5be6d0bf","pid":2648,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/91216b43bb46df97ee8f7081d129445a14177f38a7784ca329cb279b5be6d0bf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/91216b43bb46df97ee8f7081d129445a14177f38a7784ca329cb279b5be6d0bf/rootfs","created":"2022-10-31T17:00:48.864144417Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"8370f6b2249707160393b54a7c52a463ec41c53aa1abcf098ef1833e2d80e4f1","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e","pid":3774,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e/rootfs","created":"2022-10-31T17:01:22.960016255Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"d14cbe31893c40cc251f526b6f68b2f65ff3d392117a6bdaf1ae8266373867d3","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a0e0dbf2607c88f0112cc09da469ea6d91afc209b0b43752a7589ed355a42383","pid":1512,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a0e0dbf2607c88f0112cc09da469ea6d91afc209b0b43752a7589ed355a42383","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a0e0dbf2607c88f0112cc09da469ea6d91afc209b0b43752a7589ed355a42383/rootfs","created":"2022-10-31T17:00:11.597466189Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a0e0dbf2607c88f0112cc09da469ea6d91afc209b0b43752a7589ed355a42383","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-165950_745aa6453df7e4d7a2bedb8ef855e2b8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ac7640ccdb2753e897ebe26202c3f06faba50fd0471aaf7268dea2c9ac6a5383","pid":3588,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac7640ccdb2753e897ebe26202c3f06faba50fd0471aaf7268dea2c9ac6a5383","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac7640ccdb2753e897ebe26202c3f06faba50fd0471aaf7268dea2c9ac6a5383/rootfs","created":"2022-10-31T17:01:17.560848225Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ac7640ccdb2753e897ebe26202c3f06faba50fd0471aaf7268dea2c9ac6a5383","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-165950_8a2a3eb7a75eb7f169392f7d77b36d78","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ad15e41729d782e1165f600243de9c56f425cdb5db116cf881eb15588717a8a9","pid":1513,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad15e41729d782e1165f600243de9c56f425cdb5db116cf881eb15588717a8a9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad15e41729d782e1165f600243de9c56f425cdb5db116cf881eb15588717a8a9/rootfs","created":"2022-10-31T17:00:11.599047477Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"ad15e41729d782e1165f600243de9c56f425cdb5db116cf881eb15588717a8a9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-165950_f04a99c5aa78b1fe8d30a6291f8f68f1","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bd161590496d96dfd772253e8fc04aa2ace241cd015a3e030edb9980f0002865","pid":1515,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd161590496d96dfd772253e8fc04aa2ace241cd015a3e030edb9980f0002865","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd161590496d96dfd772253e8fc04aa2ace241cd015a3e030edb9980f0002865/rootfs","created":"2022-10-31T17:00:11.599316909Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"bd161590496d96dfd772253e8fc04aa2ace241cd015a3e030edb9980f0002865","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-165950_8a2a3eb7a75eb7f169392f7d77b36d78","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c1d26ec1a24e08c41b4eed6cd4a281a528dd2a96323f389584c153ebdccd783f","pid":3487,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1d26ec1a24e08c41b4eed6cd4a281a528dd2a96323f389584c153ebdccd783f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1d26ec1a24e08c41b4eed6cd4a281a528dd2a96323f389584c153ebdccd783f/rootfs","created":"2022-10-31T17:01:17.250859Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"c1d26ec1a24e08c41b4eed6cd4a281a528dd2a96323f389584c153ebdccd783f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-165950_f04a99c5aa78b1fe8d30a6291f8f68f1","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c762f46164888748a2ca9e5a525e7de747208a14141e10d7830e8dab3c7f2173","pid":2229,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c762f46164888748a2ca9e5a525e7de747208a14141e10d7830e8dab3c7f2173","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c762f46164888748a2ca9e5a525e7de747208a14141e10d7830e8dab3c7f2173/rootfs","created":"2022-10-31T17:00:31.186453275Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-proxy:v1.24.4","io.kubernetes.cri.sandbox-id":"08b4cd316b9b6815f404ec4b186454048a3e6989dcb7c0423dfdfc17e82f6489","io.kubernetes.cri.sandbox-name":"kube-proxy-54b5q","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ca1bd3bcca0419f20c372cc6baa843c8373d756258258a9103334a22b452c16d","pid":1648,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca1bd3bcca0419f20c372cc6baa843c8373d756258258a9103334a22b452c16d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca1bd3bcca0419f20c372cc6baa843c8373d756258258a9103334a22b452c16d/rootfs","created":"2022-10-31T17:00:11.813286817Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-scheduler:v1.24.4","io.kubernetes.cri.sandbox-id":"a0e0dbf2607c88f0112cc09da469ea6d91afc209b0b43752a7589ed355a42383","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d14cbe31893c40cc251f526b6f68b2f65ff3d392117a6bdaf1ae8266373867d3","pid":3530,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d14cbe31893c40cc251f526b6f68b2f65ff3d392117a6bdaf1ae8266373867d3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d14cbe31893c40cc251f526b6f68b2f65ff3d392117a6bdaf1ae8266373867d3/rootfs","created":"2022-10-31T17:01:17.36677191Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"d14cbe31893c40cc251f526b6f68b2f65ff3d392117a6bdaf1ae8266373867d3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_5031015c-081e-49e2-8d46-09fd879a755c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ddd9b9fed95f4ffb40c4492a0807846ab0d1f6762b0d1b8ddef6804023ccf4d2","pid":2590,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ddd9b9fed95f4ffb40c4492a0807846ab0d1f6762b0d1b8ddef6804023ccf4d2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ddd9b9fed95f4ffb40c4492a0807846ab0d1f6762b0d1b8ddef6804023ccf4d2/rootfs","created":"2022-10-31T17:00:48.752117294Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ddd9b9fed95f4ffb40c4492a0807846ab0d1f6762b0d1b8ddef6804023ccf4d2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-8wsrc_8e76d465-ae9a-4121-b7ed-1ef94dd20b7e","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-8wsrc","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"debeed95bef4949487f42a531f238fb279e4d6743e734f2eade3a2424ececba7","pid":3398,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/debeed95bef4949487f42a531f238fb279e4d6743e734f2eade3a2424ececba7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/debeed95bef4949487f42a531f238fb279e4d6743e734f2eade3a2424ececba7/rootfs","created":"2022-10-31T17:01:17.15855458Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"debeed95bef4949487f42a531f238fb279e4d6743e734f2eade3a2424ececba7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-165950_f7f285bbceeae66435f07854fddd011c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e7a1e00234ba1ca933146b83693ba6b5ab619fdcb5e23efb9fa200bb8ba27ef4","pid":2191,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e7a1e00234ba1ca933146b83693ba6b5ab619fdcb5e23efb9fa200bb8ba27ef4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e7a1e00234ba1ca933146b83693ba6b5ab619fdcb5e23efb9fa200bb8ba27ef4/rootfs","created":"2022-10-31T17:00:31.051134048Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"e7a1e00234ba1ca933146b83693ba6b5ab619fdcb5e23efb9fa200bb8ba27ef4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-jljff_e66c31a9-8e36-4914-a086-32ba2b3dc004","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-jljff","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"}]
I1031 17:01:26.220196 123788 cri.go:124] list returned 25 containers
I1031 17:01:26.220215 123788 cri.go:127] container: {ID:08af09ff6dc7343fadd5f527821607e0a139864f2cf045f41ddb8a637dd3684e Status:running}
I1031 17:01:26.220253 123788 cri.go:129] skipping 08af09ff6dc7343fadd5f527821607e0a139864f2cf045f41ddb8a637dd3684e - not in ps
I1031 17:01:26.220265 123788 cri.go:127] container: {ID:08b4cd316b9b6815f404ec4b186454048a3e6989dcb7c0423dfdfc17e82f6489 Status:running}
I1031 17:01:26.220285 123788 cri.go:129] skipping 08b4cd316b9b6815f404ec4b186454048a3e6989dcb7c0423dfdfc17e82f6489 - not in ps
I1031 17:01:26.220298 123788 cri.go:127] container: {ID:0e1033e5cdbc18bbf5d5b9fb465ba7904c14d9cf096e385708e56a44984ea823 Status:running}
I1031 17:01:26.220316 123788 cri.go:129] skipping 0e1033e5cdbc18bbf5d5b9fb465ba7904c14d9cf096e385708e56a44984ea823 - not in ps
I1031 17:01:26.220327 123788 cri.go:127] container: {ID:0f97f1cbba7302aeb3085b591e2b35bc859465b29b1dbeeabec247e6d5bae53f Status:running}
I1031 17:01:26.220336 123788 cri.go:129] skipping 0f97f1cbba7302aeb3085b591e2b35bc859465b29b1dbeeabec247e6d5bae53f - not in ps
I1031 17:01:26.220347 123788 cri.go:127] container: {ID:10c4cf73c3ac649d2cce0512474075cbd3f6123e0f57b48360ef852593f8b322 Status:running}
I1031 17:01:26.220360 123788 cri.go:129] skipping 10c4cf73c3ac649d2cce0512474075cbd3f6123e0f57b48360ef852593f8b322 - not in ps
I1031 17:01:26.220369 123788 cri.go:127] container: {ID:1ba56f1620b20d5793015175f468306b30d9515d63ca39cac54cc35a02a55d95 Status:running}
I1031 17:01:26.220377 123788 cri.go:129] skipping 1ba56f1620b20d5793015175f468306b30d9515d63ca39cac54cc35a02a55d95 - not in ps
I1031 17:01:26.220385 123788 cri.go:127] container: {ID:24e78df27901fc792eb7252319ce689dd890b0de8103efd94a809d5e34ef32b4 Status:running}
I1031 17:01:26.220398 123788 cri.go:129] skipping 24e78df27901fc792eb7252319ce689dd890b0de8103efd94a809d5e34ef32b4 - not in ps
I1031 17:01:26.220409 123788 cri.go:127] container: {ID:4c13d0592c58fca24c43401e993684c5570ddc76b74d030729c3dcc469b40b53 Status:running}
I1031 17:01:26.220422 123788 cri.go:129] skipping 4c13d0592c58fca24c43401e993684c5570ddc76b74d030729c3dcc469b40b53 - not in ps
I1031 17:01:26.220433 123788 cri.go:127] container: {ID:534d5230b843ffe567d7578d4a3512ebb385a7ef30b8d6ee15fe4d4b23effeb2 Status:running}
I1031 17:01:26.220445 123788 cri.go:129] skipping 534d5230b843ffe567d7578d4a3512ebb385a7ef30b8d6ee15fe4d4b23effeb2 - not in ps
I1031 17:01:26.220456 123788 cri.go:127] container: {ID:715100db6ef995efee732d28990297ac96df6f601c86ae90c8ca69f54dc02d49 Status:running}
I1031 17:01:26.220468 123788 cri.go:129] skipping 715100db6ef995efee732d28990297ac96df6f601c86ae90c8ca69f54dc02d49 - not in ps
I1031 17:01:26.220479 123788 cri.go:127] container: {ID:72c31c9f4fc4e7f506dc770bfda502f0386b9d6cacd1ad2e55fe22dc5b40071f Status:running}
I1031 17:01:26.220491 123788 cri.go:129] skipping 72c31c9f4fc4e7f506dc770bfda502f0386b9d6cacd1ad2e55fe22dc5b40071f - not in ps
I1031 17:01:26.220498 123788 cri.go:127] container: {ID:8370f6b2249707160393b54a7c52a463ec41c53aa1abcf098ef1833e2d80e4f1 Status:running}
I1031 17:01:26.220510 123788 cri.go:129] skipping 8370f6b2249707160393b54a7c52a463ec41c53aa1abcf098ef1833e2d80e4f1 - not in ps
I1031 17:01:26.220522 123788 cri.go:127] container: {ID:91216b43bb46df97ee8f7081d129445a14177f38a7784ca329cb279b5be6d0bf Status:running}
I1031 17:01:26.220540 123788 cri.go:129] skipping 91216b43bb46df97ee8f7081d129445a14177f38a7784ca329cb279b5be6d0bf - not in ps
I1031 17:01:26.220551 123788 cri.go:127] container: {ID:92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e Status:running}
I1031 17:01:26.220564 123788 cri.go:133] skipping {92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e running}: state = "running", want "paused"
I1031 17:01:26.220578 123788 cri.go:127] container: {ID:a0e0dbf2607c88f0112cc09da469ea6d91afc209b0b43752a7589ed355a42383 Status:running}
I1031 17:01:26.220590 123788 cri.go:129] skipping a0e0dbf2607c88f0112cc09da469ea6d91afc209b0b43752a7589ed355a42383 - not in ps
I1031 17:01:26.220601 123788 cri.go:127] container: {ID:ac7640ccdb2753e897ebe26202c3f06faba50fd0471aaf7268dea2c9ac6a5383 Status:running}
I1031 17:01:26.220614 123788 cri.go:129] skipping ac7640ccdb2753e897ebe26202c3f06faba50fd0471aaf7268dea2c9ac6a5383 - not in ps
I1031 17:01:26.220625 123788 cri.go:127] container: {ID:ad15e41729d782e1165f600243de9c56f425cdb5db116cf881eb15588717a8a9 Status:running}
I1031 17:01:26.220637 123788 cri.go:129] skipping ad15e41729d782e1165f600243de9c56f425cdb5db116cf881eb15588717a8a9 - not in ps
I1031 17:01:26.220648 123788 cri.go:127] container: {ID:bd161590496d96dfd772253e8fc04aa2ace241cd015a3e030edb9980f0002865 Status:running}
I1031 17:01:26.220660 123788 cri.go:129] skipping bd161590496d96dfd772253e8fc04aa2ace241cd015a3e030edb9980f0002865 - not in ps
I1031 17:01:26.220670 123788 cri.go:127] container: {ID:c1d26ec1a24e08c41b4eed6cd4a281a528dd2a96323f389584c153ebdccd783f Status:running}
I1031 17:01:26.220679 123788 cri.go:129] skipping c1d26ec1a24e08c41b4eed6cd4a281a528dd2a96323f389584c153ebdccd783f - not in ps
I1031 17:01:26.220689 123788 cri.go:127] container: {ID:c762f46164888748a2ca9e5a525e7de747208a14141e10d7830e8dab3c7f2173 Status:running}
I1031 17:01:26.220702 123788 cri.go:129] skipping c762f46164888748a2ca9e5a525e7de747208a14141e10d7830e8dab3c7f2173 - not in ps
I1031 17:01:26.220712 123788 cri.go:127] container: {ID:ca1bd3bcca0419f20c372cc6baa843c8373d756258258a9103334a22b452c16d Status:running}
I1031 17:01:26.220724 123788 cri.go:129] skipping ca1bd3bcca0419f20c372cc6baa843c8373d756258258a9103334a22b452c16d - not in ps
I1031 17:01:26.220735 123788 cri.go:127] container: {ID:d14cbe31893c40cc251f526b6f68b2f65ff3d392117a6bdaf1ae8266373867d3 Status:running}
I1031 17:01:26.220749 123788 cri.go:129] skipping d14cbe31893c40cc251f526b6f68b2f65ff3d392117a6bdaf1ae8266373867d3 - not in ps
I1031 17:01:26.220764 123788 cri.go:127] container: {ID:ddd9b9fed95f4ffb40c4492a0807846ab0d1f6762b0d1b8ddef6804023ccf4d2 Status:running}
I1031 17:01:26.220776 123788 cri.go:129] skipping ddd9b9fed95f4ffb40c4492a0807846ab0d1f6762b0d1b8ddef6804023ccf4d2 - not in ps
I1031 17:01:26.220787 123788 cri.go:127] container: {ID:debeed95bef4949487f42a531f238fb279e4d6743e734f2eade3a2424ececba7 Status:running}
I1031 17:01:26.220800 123788 cri.go:129] skipping debeed95bef4949487f42a531f238fb279e4d6743e734f2eade3a2424ececba7 - not in ps
I1031 17:01:26.220811 123788 cri.go:127] container: {ID:e7a1e00234ba1ca933146b83693ba6b5ab619fdcb5e23efb9fa200bb8ba27ef4 Status:running}
I1031 17:01:26.220823 123788 cri.go:129] skipping e7a1e00234ba1ca933146b83693ba6b5ab619fdcb5e23efb9fa200bb8ba27ef4 - not in ps
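The filter driving all of these "skipping" lines is the {State:paused} listing requested above: the runc inventory is intersected with the crictl results, and a container is kept only if it appears in both and its runtime status matches the wanted state. Everything here reports running, so nothing is selected. The same inventory can be reproduced by hand (jq assumed to be available on the node):

    # List id/status for every container in containerd's k8s.io runc root
    sudo runc --root /run/containerd/runc/k8s.io list -f json \
      | jq -r '.[] | "\(.id[0:12]) \(.status)"'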
I1031 17:01:26.220874 123788 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1031 17:01:26.228503 123788 kubeadm.go:411] found existing configuration files, will attempt cluster restart
I1031 17:01:26.228526 123788 kubeadm.go:627] restartCluster start
I1031 17:01:26.228569 123788 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1031 17:01:26.242514 123788 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1031 17:01:26.243313 123788 kubeconfig.go:92] found "test-preload-165950" server: "https://192.168.67.2:8443"
I1031 17:01:26.244383 123788 kapi.go:59] client config for test-preload-165950: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/client.crt", KeyFile:"/home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/client.key", CAFile:"/home/jenkins/minikube-integration/15232-3650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1782ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1031 17:01:26.245028 123788 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1031 17:01:26.254439 123788 kubeadm.go:594] needs reconfigure: configs differ:
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml 2022-10-31 17:00:07.362490176 +0000
+++ /var/tmp/minikube/kubeadm.yaml.new 2022-10-31 17:01:25.658180104 +0000
@@ -38,7 +38,7 @@
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
-kubernetesVersion: v1.24.4
+kubernetesVersion: v1.24.6
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
-- /stdout --
I1031 17:01:26.254466 123788 kubeadm.go:1114] stopping kube-system containers ...
I1031 17:01:26.254477 123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I1031 17:01:26.254530 123788 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1031 17:01:26.279826 123788 cri.go:87] found id: "9b493051380ea5f84db2bf6d6b500816b4bfc7d73549a3fc267337671408794c"
I1031 17:01:26.279858 123788 cri.go:87] found id: "9523dbf74df3ff703859059525cf2e837089463bffc76ca75ed4636d64233fbf"
I1031 17:01:26.279865 123788 cri.go:87] found id: "92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e"
I1031 17:01:26.279880 123788 cri.go:87] found id: ""
I1031 17:01:26.279886 123788 cri.go:232] Stopping containers: [9b493051380ea5f84db2bf6d6b500816b4bfc7d73549a3fc267337671408794c 9523dbf74df3ff703859059525cf2e837089463bffc76ca75ed4636d64233fbf 92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e]
I1031 17:01:26.279928 123788 ssh_runner.go:195] Run: which crictl
I1031 17:01:26.283140 123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 9b493051380ea5f84db2bf6d6b500816b4bfc7d73549a3fc267337671408794c 9523dbf74df3ff703859059525cf2e837089463bffc76ca75ed4636d64233fbf 92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e
I1031 17:01:26.348311 123788 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I1031 17:01:26.415283 123788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1031 17:01:26.422710 123788 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5643 Oct 31 17:00 /etc/kubernetes/admin.conf
-rw------- 1 root root 5656 Oct 31 17:00 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2015 Oct 31 17:00 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5600 Oct 31 17:00 /etc/kubernetes/scheduler.conf
I1031 17:01:26.422771 123788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1031 17:01:26.429820 123788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1031 17:01:26.436664 123788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1031 17:01:26.443399 123788 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I1031 17:01:26.443466 123788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1031 17:01:26.450583 123788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1031 17:01:26.457143 123788 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I1031 17:01:26.457191 123788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1031 17:01:26.463634 123788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1031 17:01:26.471032 123788 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I1031 17:01:26.471057 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I1031 17:01:26.714848 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I1031 17:01:27.216857 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I1031 17:01:27.525201 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I1031 17:01:27.574451 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
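Rather than a full kubeadm init, the restart path replays individual init phases against the updated /var/tmp/minikube/kubeadm.yaml: certs and kubeconfigs are regenerated, the kubelet is restarted, and the static-pod manifests are rewritten so the control plane comes back at v1.24.6. Condensed into a sketch:

    # Replay the same kubeadm init phases by hand; the quoted pairs are
    # intentionally word-split into subcommand + argument
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done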
I1031 17:01:27.654849 123788 api_server.go:51] waiting for apiserver process to appear ...
I1031 17:01:27.654955 123788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1031 17:01:27.673572 123788 api_server.go:71] duration metric: took 18.72073ms to wait for apiserver process to appear ...
I1031 17:01:27.673610 123788 api_server.go:87] waiting for apiserver healthz status ...
I1031 17:01:27.673630 123788 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I1031 17:01:27.678700 123788 api_server.go:278] https://192.168.67.2:8443/healthz returned 200:
ok
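The healthz probe is a plain unauthenticated GET against the apiserver; anything other than a 200 with body "ok" keeps the wait loop going. The same check by hand:

    # -k skips verification; the serving cert is signed by the cluster's minikubeCA
    curl -k https://192.168.67.2:8443/healthz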
I1031 17:01:27.685812 123788 api_server.go:140] control plane version: v1.24.4
W1031 17:01:27.685841 123788 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
I1031 17:01:28.187382 123788 api_server.go:140] control plane version: v1.24.4
W1031 17:01:28.187416 123788 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
I1031 17:01:28.687372 123788 api_server.go:140] control plane version: v1.24.4
W1031 17:01:28.687411 123788 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
I1031 17:01:29.187825 123788 api_server.go:140] control plane version: v1.24.4
W1031 17:01:29.187861 123788 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
I1031 17:01:29.687064 123788 api_server.go:140] control plane version: v1.24.4
W1031 17:01:29.687093 123788 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
W1031 17:01:30.187422 123788 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1031 17:01:30.686425 123788 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1031 17:01:31.186366 123788 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
I1031 17:01:35.664099 123788 api_server.go:140] control plane version: v1.24.6
I1031 17:01:35.664206 123788 api_server.go:130] duration metric: took 7.990587678s to wait for apiserver health ...
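The polling above also shows where the ~8s went: the old v1.24.4 apiserver keeps answering /version until the kubelet tears down its static pod, the connection-refused window marks the swap, and the first successful response afterwards reports v1.24.6. A hand-rolled watch of the same signal (jq assumed available) would be roughly:

    # Poll the reported apiserver version across the static-pod swap
    while sleep 1; do
      v=$(curl -ks --max-time 2 https://192.168.67.2:8443/version | jq -r .gitVersion)
      echo "${v:-unreachable}"
    done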
I1031 17:01:35.664232 123788 cni.go:95] Creating CNI manager for ""
I1031 17:01:35.664274 123788 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I1031 17:01:35.666396 123788 out.go:177] * Configuring CNI (Container Networking Interface) ...
I1031 17:01:35.668255 123788 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I1031 17:01:35.857942 123788 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.6/kubectl ...
I1031 17:01:35.857986 123788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I1031 17:01:35.965517 123788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I1031 17:01:37.314933 123788 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.349353719s)
I1031 17:01:37.314969 123788 system_pods.go:43] waiting for kube-system pods to appear ...
I1031 17:01:37.323258 123788 system_pods.go:59] 8 kube-system pods found
I1031 17:01:37.323308 123788 system_pods.go:61] "coredns-6d4b75cb6d-8wsrc" [8e76d465-ae9a-4121-b7ed-1ef94dd20b7e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 17:01:37.323319 123788 system_pods.go:61] "etcd-test-preload-165950" [1738672d-0339-423c-9013-d39e8cbb16c7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1031 17:01:37.323333 123788 system_pods.go:61] "kindnet-jljff" [e66c31a9-8e36-4914-a086-32ba2b3dc004] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I1031 17:01:37.323348 123788 system_pods.go:61] "kube-apiserver-test-preload-165950" [a505e0cf-4d56-47bf-865b-6052277ce195] Pending
I1031 17:01:37.323358 123788 system_pods.go:61] "kube-controller-manager-test-preload-165950" [ebf46104-24d9-427e-b5af-643a80e0aceb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1031 17:01:37.323374 123788 system_pods.go:61] "kube-proxy-54b5q" [0ff95637-a367-440b-918f-495391f2f1cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1031 17:01:37.323384 123788 system_pods.go:61] "kube-scheduler-test-preload-165950" [5a7cd673-4c3a-4123-9be5-5f44a196a478] Pending
I1031 17:01:37.323397 123788 system_pods.go:61] "storage-provisioner" [5031015c-081e-49e2-8d46-09fd879a755c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 17:01:37.323409 123788 system_pods.go:74] duration metric: took 8.433081ms to wait for pod list to return data ...
I1031 17:01:37.323422 123788 node_conditions.go:102] verifying NodePressure condition ...
I1031 17:01:37.326311 123788 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I1031 17:01:37.326342 123788 node_conditions.go:123] node cpu capacity is 8
I1031 17:01:37.326356 123788 node_conditions.go:105] duration metric: took 2.929267ms to run NodePressure ...
I1031 17:01:37.326375 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I1031 17:01:37.573644 123788 kubeadm.go:763] waiting for restarted kubelet to initialise ...
I1031 17:01:37.578158 123788 kubeadm.go:778] kubelet initialised
I1031 17:01:37.578189 123788 kubeadm.go:779] duration metric: took 4.510409ms waiting for restarted kubelet to initialise ...
I1031 17:01:37.578198 123788 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
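Each of the waits that follow polls one system-critical pod's Ready condition (coredns first, matched via its k8s-app=kube-dns label) with a 4-minute per-pod budget. The kubectl equivalent of one such wait would be roughly:

    # Block until the coredns pod reports Ready, or give up after 4 minutes
    kubectl -n kube-system wait pod -l k8s-app=kube-dns \
      --for=condition=Ready --timeout=4m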
I1031 17:01:37.583642 123788 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-8wsrc" in "kube-system" namespace to be "Ready" ...
I1031 17:01:39.594948 123788 pod_ready.go:102] pod "coredns-6d4b75cb6d-8wsrc" in "kube-system" namespace has status "Ready":"False"
I1031 17:01:42.094075 123788 pod_ready.go:102] pod "coredns-6d4b75cb6d-8wsrc" in "kube-system" namespace has status "Ready":"False"
I1031 17:01:43.095366 123788 pod_ready.go:92] pod "coredns-6d4b75cb6d-8wsrc" in "kube-system" namespace has status "Ready":"True"
I1031 17:01:43.095404 123788 pod_ready.go:81] duration metric: took 5.511730023s waiting for pod "coredns-6d4b75cb6d-8wsrc" in "kube-system" namespace to be "Ready" ...
I1031 17:01:43.095417 123788 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-165950" in "kube-system" namespace to be "Ready" ...
I1031 17:01:45.107196 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:01:47.606767 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:01:50.106591 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:01:52.606128 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:01:55.106948 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:01:57.606675 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:01:59.606942 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:01.607143 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:03.607189 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:06.106997 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:08.606022 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:10.607066 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:12.607191 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:15.106122 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:17.106164 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:19.106356 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:21.106711 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:23.606999 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:26.106549 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:28.107170 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:30.606839 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:33.106308 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:35.606836 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:38.106617 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:40.107031 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:42.606997 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:45.105907 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:47.106139 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:49.606661 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:51.607461 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:54.107427 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:56.607579 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:59.106638 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:01.106850 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:03.606788 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:05.606874 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:08.106321 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:10.106538 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:12.106959 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:14.607205 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:16.607305 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:19.105988 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:21.106170 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:23.107105 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:25.607263 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:28.106356 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:30.107148 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:32.606490 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:35.105741 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:37.106647 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:39.106715 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:41.606595 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:44.106322 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:46.106599 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:48.106645 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:50.607046 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:53.106597 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:55.607036 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:58.106177 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:00.106478 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:02.106672 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:04.106777 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:06.606029 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:08.606391 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:10.606890 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:13.105929 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:15.106871 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:17.605837 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:19.606273 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:21.606690 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:23.608947 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:26.106036 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:28.106069 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:30.106922 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:32.606315 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:34.606779 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:36.607034 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:39.106139 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:41.106298 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:43.106379 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:45.606574 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:47.606629 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:50.106351 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:52.606744 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:55.106115 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:57.606837 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:00.107089 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:02.606977 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:05.106235 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:07.106494 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:09.606180 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:11.607064 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:14.106300 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:16.106339 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:18.605987 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:20.606927 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:23.106287 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:25.606564 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:28.106222 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:30.106425 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:32.607544 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:35.105790 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:37.106524 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:39.106668 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:41.606128 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:43.100897 123788 pod_ready.go:81] duration metric: took 4m0.005465717s waiting for pod "etcd-test-preload-165950" in "kube-system" namespace to be "Ready" ...
E1031 17:05:43.100926 123788 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "etcd-test-preload-165950" in "kube-system" namespace to be "Ready" (will not retry!)
I1031 17:05:43.100947 123788 pod_ready.go:38] duration metric: took 4m5.522739337s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1031 17:05:43.100986 123788 kubeadm.go:631] restartCluster took 4m16.872448037s
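For reference, the 4m0s poll above checks the standard pod Ready condition; the same wait can be reproduced by hand with kubectl (an illustrative sketch, assuming minikube created a kubeconfig context named after the profile):
    kubectl --context test-preload-165950 -n kube-system wait pod/etcd-test-preload-165950 --for=condition=Ready --timeout=4m0s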
W1031 17:05:43.101155 123788 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
I1031 17:05:43.101190 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1031 17:05:44.844963 123788 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.743753735s)
I1031 17:05:44.845025 123788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1031 17:05:44.855523 123788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1031 17:05:44.862648 123788 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I1031 17:05:44.862707 123788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1031 17:05:44.870144 123788 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1031 17:05:44.870199 123788 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1031 17:05:44.907996 123788 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
I1031 17:05:44.908047 123788 kubeadm.go:317] [preflight] Running pre-flight checks
I1031 17:05:44.935802 123788 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
I1031 17:05:44.935928 123788 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
I1031 17:05:44.935973 123788 kubeadm.go:317] OS: Linux
I1031 17:05:44.936020 123788 kubeadm.go:317] CGROUPS_CPU: enabled
I1031 17:05:44.936060 123788 kubeadm.go:317] CGROUPS_CPUACCT: enabled
I1031 17:05:44.936139 123788 kubeadm.go:317] CGROUPS_CPUSET: enabled
I1031 17:05:44.936189 123788 kubeadm.go:317] CGROUPS_DEVICES: enabled
I1031 17:05:44.936256 123788 kubeadm.go:317] CGROUPS_FREEZER: enabled
I1031 17:05:44.936353 123788 kubeadm.go:317] CGROUPS_MEMORY: enabled
I1031 17:05:44.936421 123788 kubeadm.go:317] CGROUPS_PIDS: enabled
I1031 17:05:44.936478 123788 kubeadm.go:317] CGROUPS_HUGETLB: enabled
I1031 17:05:44.936542 123788 kubeadm.go:317] CGROUPS_BLKIO: enabled
I1031 17:05:45.016629 123788 kubeadm.go:317] W1031 17:05:44.903005 6621 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
I1031 17:05:45.016840 123788 kubeadm.go:317] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
I1031 17:05:45.016930 123788 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1031 17:05:45.016992 123788 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
I1031 17:05:45.017027 123788 kubeadm.go:317] [ERROR Port-2379]: Port 2379 is in use
I1031 17:05:45.017070 123788 kubeadm.go:317] [ERROR Port-2380]: Port 2380 is in use
I1031 17:05:45.017152 123788 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
I1031 17:05:45.017213 123788 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
W1031 17:05:45.017401 123788 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1021-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W1031 17:05:44.903005 6621 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
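The two fatal errors above indicate the previous etcd is still bound to ports 2379/2380 even after the kubeadm reset; one illustrative way to confirm from the host, reusing the same CLI the test drives, would be:
    out/minikube-linux-amd64 ssh -p test-preload-165950 -- sudo crictl ps -a --name=etcd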
I1031 17:05:45.017440 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1031 17:05:45.355913 123788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1031 17:05:45.365437 123788 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I1031 17:05:45.365484 123788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1031 17:05:45.372598 123788 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1031 17:05:45.372638 123788 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1031 17:05:45.410978 123788 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
I1031 17:05:45.411059 123788 kubeadm.go:317] [preflight] Running pre-flight checks
I1031 17:05:45.437866 123788 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
I1031 17:05:45.437950 123788 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
I1031 17:05:45.438007 123788 kubeadm.go:317] OS: Linux
I1031 17:05:45.438080 123788 kubeadm.go:317] CGROUPS_CPU: enabled
I1031 17:05:45.438188 123788 kubeadm.go:317] CGROUPS_CPUACCT: enabled
I1031 17:05:45.438265 123788 kubeadm.go:317] CGROUPS_CPUSET: enabled
I1031 17:05:45.438327 123788 kubeadm.go:317] CGROUPS_DEVICES: enabled
I1031 17:05:45.438408 123788 kubeadm.go:317] CGROUPS_FREEZER: enabled
I1031 17:05:45.438474 123788 kubeadm.go:317] CGROUPS_MEMORY: enabled
I1031 17:05:45.438542 123788 kubeadm.go:317] CGROUPS_PIDS: enabled
I1031 17:05:45.438609 123788 kubeadm.go:317] CGROUPS_HUGETLB: enabled
I1031 17:05:45.438681 123788 kubeadm.go:317] CGROUPS_BLKIO: enabled
I1031 17:05:45.506713 123788 kubeadm.go:317] W1031 17:05:45.405956 6886 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
I1031 17:05:45.506996 123788 kubeadm.go:317] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
I1031 17:05:45.507114 123788 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1031 17:05:45.507178 123788 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
I1031 17:05:45.507221 123788 kubeadm.go:317] [ERROR Port-2379]: Port 2379 is in use
I1031 17:05:45.507264 123788 kubeadm.go:317] [ERROR Port-2380]: Port 2380 is in use
I1031 17:05:45.507371 123788 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
I1031 17:05:45.507485 123788 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
I1031 17:05:45.507500 123788 kubeadm.go:398] StartCluster complete in 4m19.348589229s
I1031 17:05:45.507531 123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1031 17:05:45.507575 123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1031 17:05:45.530536 123788 cri.go:87] found id: ""
I1031 17:05:45.530565 123788 logs.go:274] 0 containers: []
W1031 17:05:45.530573 123788 logs.go:276] No container was found matching "kube-apiserver"
I1031 17:05:45.530579 123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1031 17:05:45.530626 123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1031 17:05:45.554752 123788 cri.go:87] found id: ""
I1031 17:05:45.554777 123788 logs.go:274] 0 containers: []
W1031 17:05:45.554783 123788 logs.go:276] No container was found matching "etcd"
I1031 17:05:45.554789 123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1031 17:05:45.554831 123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1031 17:05:45.578518 123788 cri.go:87] found id: ""
I1031 17:05:45.578542 123788 logs.go:274] 0 containers: []
W1031 17:05:45.578548 123788 logs.go:276] No container was found matching "coredns"
I1031 17:05:45.578554 123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1031 17:05:45.578603 123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1031 17:05:45.602333 123788 cri.go:87] found id: ""
I1031 17:05:45.602356 123788 logs.go:274] 0 containers: []
W1031 17:05:45.602363 123788 logs.go:276] No container was found matching "kube-scheduler"
I1031 17:05:45.602368 123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1031 17:05:45.602408 123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1031 17:05:45.625824 123788 cri.go:87] found id: ""
I1031 17:05:45.625847 123788 logs.go:274] 0 containers: []
W1031 17:05:45.625853 123788 logs.go:276] No container was found matching "kube-proxy"
I1031 17:05:45.625859 123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1031 17:05:45.625920 123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1031 17:05:45.649488 123788 cri.go:87] found id: ""
I1031 17:05:45.649513 123788 logs.go:274] 0 containers: []
W1031 17:05:45.649519 123788 logs.go:276] No container was found matching "kubernetes-dashboard"
I1031 17:05:45.649526 123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1031 17:05:45.649574 123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1031 17:05:45.672881 123788 cri.go:87] found id: ""
I1031 17:05:45.672907 123788 logs.go:274] 0 containers: []
W1031 17:05:45.672914 123788 logs.go:276] No container was found matching "storage-provisioner"
I1031 17:05:45.672920 123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1031 17:05:45.672965 123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1031 17:05:45.695705 123788 cri.go:87] found id: ""
I1031 17:05:45.695729 123788 logs.go:274] 0 containers: []
W1031 17:05:45.695736 123788 logs.go:276] No container was found matching "kube-controller-manager"
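The per-component listings above can be reproduced in a single illustrative call (same crictl the runner invokes):
    out/minikube-linux-amd64 ssh -p test-preload-165950 -- sudo crictl ps -a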
I1031 17:05:45.695744 123788 logs.go:123] Gathering logs for describe nodes ...
I1031 17:05:45.695756 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1031 17:05:45.827779 123788 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I1031 17:05:45.827803 123788 logs.go:123] Gathering logs for containerd ...
I1031 17:05:45.827814 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1031 17:05:45.882431 123788 logs.go:123] Gathering logs for container status ...
I1031 17:05:45.882482 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1031 17:05:45.908973 123788 logs.go:123] Gathering logs for kubelet ...
I1031 17:05:45.909003 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1031 17:05:45.967611 123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661461 4266 projected.go:192] Error preparing data for projected volume kube-api-access-8mn6l for pod kube-system/storage-provisioner: failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
W1031 17:05:45.968060 123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661580 4266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5031015c-081e-49e2-8d46-09fd879a755c-kube-api-access-8mn6l podName:5031015c-081e-49e2-8d46-09fd879a755c nodeName:}" failed. No retries permitted until 2022-10-31 17:01:36.661550988 +0000 UTC m=+9.136031253 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-8mn6l" (UniqueName: "kubernetes.io/projected/5031015c-081e-49e2-8d46-09fd879a755c-kube-api-access-8mn6l") pod "storage-provisioner" (UID: "5031015c-081e-49e2-8d46-09fd879a755c") : failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
W1031 17:05:45.968229 123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: W1031 17:01:35.661699 4266 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
W1031 17:05:45.968390 123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661728 4266 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
W1031 17:05:45.968572 123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661819 4266 projected.go:192] Error preparing data for projected volume kube-api-access-d8dpf for pod kube-system/coredns-6d4b75cb6d-8wsrc: failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
W1031 17:05:45.968978 123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661876 4266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e76d465-ae9a-4121-b7ed-1ef94dd20b7e-kube-api-access-d8dpf podName:8e76d465-ae9a-4121-b7ed-1ef94dd20b7e nodeName:}" failed. No retries permitted until 2022-10-31 17:01:36.661860993 +0000 UTC m=+9.136341257 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-d8dpf" (UniqueName: "kubernetes.io/projected/8e76d465-ae9a-4121-b7ed-1ef94dd20b7e-kube-api-access-d8dpf") pod "coredns-6d4b75cb6d-8wsrc" (UID: "8e76d465-ae9a-4121-b7ed-1ef94dd20b7e") : failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
W1031 17:05:45.969129 123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: W1031 17:01:35.662000 4266 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
W1031 17:05:45.969296 123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.662020 4266 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
W1031 17:05:45.969441 123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: W1031 17:01:35.662225 4266 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
W1031 17:05:45.969602 123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.662242 4266 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
W1031 17:05:45.969778 123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.662330 4266 projected.go:192] Error preparing data for projected volume kube-api-access-5m45q for pod kube-system/kindnet-jljff: failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
W1031 17:05:45.970177 123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.662376 4266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e66c31a9-8e36-4914-a086-32ba2b3dc004-kube-api-access-5m45q podName:e66c31a9-8e36-4914-a086-32ba2b3dc004 nodeName:}" failed. No retries permitted until 2022-10-31 17:01:36.662359447 +0000 UTC m=+9.136839704 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-5m45q" (UniqueName: "kubernetes.io/projected/e66c31a9-8e36-4914-a086-32ba2b3dc004-kube-api-access-5m45q") pod "kindnet-jljff" (UID: "e66c31a9-8e36-4914-a086-32ba2b3dc004") : failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
W1031 17:05:45.970359 123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.662434 4266 projected.go:192] Error preparing data for projected volume kube-api-access-r84wv for pod kube-system/kube-proxy-54b5q: failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
W1031 17:05:45.970760 123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.662472 4266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ff95637-a367-440b-918f-495391f2f1cf-kube-api-access-r84wv podName:0ff95637-a367-440b-918f-495391f2f1cf nodeName:}" failed. No retries permitted until 2022-10-31 17:01:36.662457708 +0000 UTC m=+9.136937970 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-r84wv" (UniqueName: "kubernetes.io/projected/0ff95637-a367-440b-918f-495391f2f1cf-kube-api-access-r84wv") pod "kube-proxy-54b5q" (UID: "0ff95637-a367-440b-918f-495391f2f1cf") : failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
I1031 17:05:45.991682 123788 logs.go:123] Gathering logs for dmesg ...
I1031 17:05:45.991709 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
W1031 17:05:46.006370 123788 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1021-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W1031 17:05:45.405956 6886 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W1031 17:05:46.006406 123788 out.go:239] *
W1031 17:05:46.006520 123788 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1021-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W1031 17:05:45.405956 6886 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W1031 17:05:46.006538 123788 out.go:239] *
W1031 17:05:46.007299 123788 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1031 17:05:46.010794 123788 out.go:177] X Problems detected in kubelet:
I1031 17:05:46.012324 123788 out.go:177] Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661461 4266 projected.go:192] Error preparing data for projected volume kube-api-access-8mn6l for pod kube-system/storage-provisioner: failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
I1031 17:05:46.013853 123788 out.go:177] Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661580 4266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5031015c-081e-49e2-8d46-09fd879a755c-kube-api-access-8mn6l podName:5031015c-081e-49e2-8d46-09fd879a755c nodeName:}" failed. No retries permitted until 2022-10-31 17:01:36.661550988 +0000 UTC m=+9.136031253 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-8mn6l" (UniqueName: "kubernetes.io/projected/5031015c-081e-49e2-8d46-09fd879a755c-kube-api-access-8mn6l") pod "storage-provisioner" (UID: "5031015c-081e-49e2-8d46-09fd879a755c") : failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
I1031 17:05:46.015648 123788 out.go:177] Oct 31 17:01:35 test-preload-165950 kubelet[4266]: W1031 17:01:35.661699 4266 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
I1031 17:05:46.017937 123788 out.go:177]
W1031 17:05:46.019427 123788 out.go:239] X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1021-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W1031 17:05:45.405956 6886 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W1031 17:05:46.019527 123788 out.go:239] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
W1031 17:05:46.019585 123788 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/5484
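Note that lsof -p takes a PID, not a port; a port-oriented check (illustrative, assuming lsof and ss are available inside the node) would be:
    out/minikube-linux-amd64 ssh -p test-preload-165950 -- sudo lsof -i :2379
    out/minikube-linux-amd64 ssh -p test-preload-165950 -- sudo ss -ltnp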
I1031 17:05:46.021064 123788 out.go:177]
** /stderr **
preload_test.go:69: out/minikube-linux-amd64 start -p test-preload-165950 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.6 failed: exit status 81
panic.go:522: *** TestPreload FAILED at 2022-10-31 17:05:46.06141389 +0000 UTC m=+1788.836992767
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect test-preload-165950
helpers_test.go:235: (dbg) docker inspect test-preload-165950:
-- stdout --
[
{
"Id": "31321530ec32d3664bca2dd5534ecdafccad37b8d3386abfb104804c4c545f5b",
"Created": "2022-10-31T16:59:51.480968101Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 120574,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-10-31T16:59:51.931253166Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
"ResolvConfPath": "/var/lib/docker/containers/31321530ec32d3664bca2dd5534ecdafccad37b8d3386abfb104804c4c545f5b/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/31321530ec32d3664bca2dd5534ecdafccad37b8d3386abfb104804c4c545f5b/hostname",
"HostsPath": "/var/lib/docker/containers/31321530ec32d3664bca2dd5534ecdafccad37b8d3386abfb104804c4c545f5b/hosts",
"LogPath": "/var/lib/docker/containers/31321530ec32d3664bca2dd5534ecdafccad37b8d3386abfb104804c4c545f5b/31321530ec32d3664bca2dd5534ecdafccad37b8d3386abfb104804c4c545f5b-json.log",
"Name": "/test-preload-165950",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"test-preload-165950:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "test-preload-165950",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/9162f9aa27622193176e2a53f1639007e77e951edd086c49393a77b26bf96d70-init/diff:/var/lib/docker/overlay2/850407c9352fc6d39f5a61f0f7868bc687359dfa2a9e604aacedd9e4180b6b24/diff:/var/lib/docker/overlay2/21aaafded5bd8cd556e28d44c5789deca54d553c1b7434f81407bd7fcd1957e2/diff:/var/lib/docker/overlay2/6092cf791661e4cab1851c6157178d18fd0167b1f47a6bebec580856fb033b44/diff:/var/lib/docker/overlay2/de1b6fab5ea890ce9ec3ab284acb657037d204cfa01fe082b7ab7fb1c0539f4a/diff:/var/lib/docker/overlay2/4ce8b04194bb323d53c06b240875a6203e31c8f7f41d68021a3a9c268299cbed/diff:/var/lib/docker/overlay2/efdd112bff28ec4eeb4274df5357bc6a943d954bf3bb5969c95a3f396318e5f2/diff:/var/lib/docker/overlay2/bf27ecc71ffb48aba0eb712986cbc98c99838dc8b04631580d9a9495f718f594/diff:/var/lib/docker/overlay2/448bbda6d5530c89aca7714db71b5eb84689a6dba7ac558086a7568817db54ae/diff:/var/lib/docker/overlay2/b43560491d25a8924ac5cae2ec4dc68deb89b0f8f1e1b7a720313dc4eeb82428/diff:/var/lib/docker/overlay2/2027e3
3b3f092c531efa1f98cabb990a64b3ff51978a38e4261ef8e82655e56d/diff:/var/lib/docker/overlay2/40d06c11aaa05bdf4d5349d7d00fdf7d8f962768ce49b8f03d4d2d5a23706a83/diff:/var/lib/docker/overlay2/3a1bdaf48ececa097bf7b4c7e715cdc5045b596a2cb2bf0d2d335363c91b7763/diff:/var/lib/docker/overlay2/a37c63314afa70bd7e634537d33bcefbffbbe9f43c8aa45d9d42bd58cc3b0cf8/diff:/var/lib/docker/overlay2/ff91a87ac6071b8ab64a547410e1499ce95011395ea036dd714d0dd5129adb37/diff:/var/lib/docker/overlay2/aefdb5f8ac62063ccf24e1bc21262559900c234b9c151acd755a4b834d51fea9/diff:/var/lib/docker/overlay2/915c92a89aba7500f1323ec1a9c9a53d856e818f9776d9f9ed08bf36936d3e4a/diff:/var/lib/docker/overlay2/52c13726cbf2ed741bd08a4fd814eca88e84b1d329661e62d858be944b3756fa/diff:/var/lib/docker/overlay2/459b8ced782783b6c14513346d3291aeaa7bf95628d52d5734ceb8e3dc2bb34a/diff:/var/lib/docker/overlay2/15b295bfa3bda6886453bc187c23d72b25ee63f5085ee0f7f33e1c16159f3458/diff:/var/lib/docker/overlay2/23b0f6d1317fd997d142b8b463d727f2337496dada67bd1d2d3b0e9e864b6c6b/diff:/var/lib/d
ocker/overlay2/5865c95ad7cd03f9b4844f71209de766041b054c00595d5aec780c06ae768435/diff:/var/lib/docker/overlay2/efa08e39c835181ac59410e6fa91805bdf6038812cf9de2fe6166b28ddbd0551/diff:/var/lib/docker/overlay2/e0b9a735c6e765ddbdea44d18a2b26b9b2c3db322dca7fbab94d6e76ab322d51/diff:/var/lib/docker/overlay2/5643dd6e2ea4886915404d641ac2a2f0327156d44c5cd2960ec0ce17a61bedb2/diff:/var/lib/docker/overlay2/4f789b09379fe08af21ac5ede6a916c169e328eac752d559ecde59f6f36263ea/diff:/var/lib/docker/overlay2/4fdd55958a1cbe05aa4c0d860e201090b87575a39b37ea9555600f8cb3c2256c/diff:/var/lib/docker/overlay2/db64f95c578859a9eb3b7bb1debcf894e5466441c4c6c27c9a3eae7247029669/diff:/var/lib/docker/overlay2/6ea16e3482414ff15bfc6317e5fb3463df41afc3fa76d7b22ef86e1a735fbf2d/diff:/var/lib/docker/overlay2/2141b9e79d9eca44b4934f0ab5e90e3a7a6326ad619ce3e981da60d3b9397952/diff:/var/lib/docker/overlay2/ed7d69a3a4de28360197cbde205a3c218b2c785ad29581c25ae9d74275fbc3af/diff:/var/lib/docker/overlay2/7a003859a39e8ad3bd9681a6e25c7687c68b45396a9bd9309f5f2fc5a6d
b937f/diff:/var/lib/docker/overlay2/9f343157cfc9dd91c334ef0927fcbdff9b1c543bc670a05b547ad650c42a9e4e/diff:/var/lib/docker/overlay2/1895e41d6462ac28032e1938f1c755f37d5063dbfcfce66c80a1bb5542592f87/diff:/var/lib/docker/overlay2/139059382b6f47a4d917321fc96bb88b4e4496bc6d72d5c140f22414932cd23a/diff:/var/lib/docker/overlay2/877f4b5fd322b19211f62544018b39a1fc4b920707d11dc957cac06f2232d4b5/diff:/var/lib/docker/overlay2/7f935ec11ddf890b56355eff56a25f995efb95fe3f8718078d517e5126fc40af/diff:/var/lib/docker/overlay2/f746de1e06eaa48a0ff284cbeec7e6f78c3eb97d1a90e020d82d10c2654236e7/diff:/var/lib/docker/overlay2/f58fee49407523fa2a2a815cfb285f088abd1fc7b3196c3c1a6b27a8cc1d4a3f/diff:/var/lib/docker/overlay2/2f9e685ccc40a5063568a58dc39e286eab6aa4fd66ad71614b75fb8082c6c201/diff:/var/lib/docker/overlay2/5d49dd0a636da4d0a250625e83cf665e98dba840590d94ac41b6f345e76aa187/diff:/var/lib/docker/overlay2/818cc610ded8dc62555773ef1e35bea879ef657b00a70e6c878f5424f518134a/diff:/var/lib/docker/overlay2/c98da52ad37a10af980b89a4e4ddd50b85ffa2
12a2847b428571f2544cb3eeb7/diff",
"MergedDir": "/var/lib/docker/overlay2/9162f9aa27622193176e2a53f1639007e77e951edd086c49393a77b26bf96d70/merged",
"UpperDir": "/var/lib/docker/overlay2/9162f9aa27622193176e2a53f1639007e77e951edd086c49393a77b26bf96d70/diff",
"WorkDir": "/var/lib/docker/overlay2/9162f9aa27622193176e2a53f1639007e77e951edd086c49393a77b26bf96d70/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "test-preload-165950",
"Source": "/var/lib/docker/volumes/test-preload-165950/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "test-preload-165950",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "test-preload-165950",
"name.minikube.sigs.k8s.io": "test-preload-165950",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "cfccc3b44d496a91df157bced05afac5b142fddf1d4354ac1695001e0e240870",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49277"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49276"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49273"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49275"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49274"
}
]
},
"SandboxKey": "/var/run/docker/netns/cfccc3b44d49",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"test-preload-165950": {
"IPAMConfig": {
"IPv4Address": "192.168.67.2"
},
"Links": null,
"Aliases": [
"31321530ec32",
"test-preload-165950"
],
"NetworkID": "e0378850a42df319b63eed4a878977d5ca7d60ed961bbdc3e2d810f624175c13",
"EndpointID": "680c74d6c0d63034b9a03de0b279adb28e402d3efdcb722bb8ca748f3bbb5d9a",
"Gateway": "192.168.67.1",
"IPAddress": "192.168.67.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:43:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
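The inspect dump above is the harness's full post-mortem capture; when only a field or two is needed, the same data is addressable with Go templates, exactly as the harness does later for the SSH port. A sketch against the same profile:
    # Container state and the host port mapped to 22/tcp, from the same inspect data.
    docker inspect test-preload-165950 --format '{{.State.Status}}'
    docker inspect test-preload-165950 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'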
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-165950 -n test-preload-165950
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-165950 -n test-preload-165950: exit status 2 (356.870202ms)
-- stdout --
Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p test-preload-165950 logs -n 25
helpers_test.go:252: TestPreload logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| ssh | multinode-165059 ssh -n | multinode-165059 | jenkins | v1.27.1 | 31 Oct 22 16:53 UTC | 31 Oct 22 16:53 UTC |
| | multinode-165059-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-165059 cp multinode-165059-m03:/home/docker/cp-test.txt | multinode-165059 | jenkins | v1.27.1 | 31 Oct 22 16:53 UTC | 31 Oct 22 16:53 UTC |
| | multinode-165059:/home/docker/cp-test_multinode-165059-m03_multinode-165059.txt | | | | | |
| ssh | multinode-165059 ssh -n | multinode-165059 | jenkins | v1.27.1 | 31 Oct 22 16:53 UTC | 31 Oct 22 16:53 UTC |
| | multinode-165059-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-165059 ssh -n multinode-165059 sudo cat | multinode-165059 | jenkins | v1.27.1 | 31 Oct 22 16:53 UTC | 31 Oct 22 16:53 UTC |
| | /home/docker/cp-test_multinode-165059-m03_multinode-165059.txt | | | | | |
| cp | multinode-165059 cp multinode-165059-m03:/home/docker/cp-test.txt | multinode-165059 | jenkins | v1.27.1 | 31 Oct 22 16:53 UTC | 31 Oct 22 16:53 UTC |
| | multinode-165059-m02:/home/docker/cp-test_multinode-165059-m03_multinode-165059-m02.txt | | | | | |
| ssh | multinode-165059 ssh -n | multinode-165059 | jenkins | v1.27.1 | 31 Oct 22 16:53 UTC | 31 Oct 22 16:53 UTC |
| | multinode-165059-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-165059 ssh -n multinode-165059-m02 sudo cat | multinode-165059 | jenkins | v1.27.1 | 31 Oct 22 16:53 UTC | 31 Oct 22 16:53 UTC |
| | /home/docker/cp-test_multinode-165059-m03_multinode-165059-m02.txt | | | | | |
| node | multinode-165059 node stop m03 | multinode-165059 | jenkins | v1.27.1 | 31 Oct 22 16:53 UTC | 31 Oct 22 16:53 UTC |
| node | multinode-165059 node start | multinode-165059 | jenkins | v1.27.1 | 31 Oct 22 16:53 UTC | 31 Oct 22 16:54 UTC |
| | m03 --alsologtostderr | | | | | |
| node | list -p multinode-165059 | multinode-165059 | jenkins | v1.27.1 | 31 Oct 22 16:54 UTC | |
| stop | -p multinode-165059 | multinode-165059 | jenkins | v1.27.1 | 31 Oct 22 16:54 UTC | 31 Oct 22 16:54 UTC |
| start | -p multinode-165059 | multinode-165059 | jenkins | v1.27.1 | 31 Oct 22 16:54 UTC | 31 Oct 22 16:56 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| node | list -p multinode-165059 | multinode-165059 | jenkins | v1.27.1 | 31 Oct 22 16:56 UTC | |
| node | multinode-165059 node delete | multinode-165059 | jenkins | v1.27.1 | 31 Oct 22 16:56 UTC | 31 Oct 22 16:56 UTC |
| | m03 | | | | | |
| stop | multinode-165059 stop | multinode-165059 | jenkins | v1.27.1 | 31 Oct 22 16:56 UTC | 31 Oct 22 16:57 UTC |
| start | -p multinode-165059 | multinode-165059 | jenkins | v1.27.1 | 31 Oct 22 16:57 UTC | 31 Oct 22 16:59 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| node | list -p multinode-165059 | multinode-165059 | jenkins | v1.27.1 | 31 Oct 22 16:59 UTC | |
| start | -p multinode-165059-m02 | multinode-165059-m02 | jenkins | v1.27.1 | 31 Oct 22 16:59 UTC | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p multinode-165059-m03 | multinode-165059-m03 | jenkins | v1.27.1 | 31 Oct 22 16:59 UTC | 31 Oct 22 16:59 UTC |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| node | add -p multinode-165059 | multinode-165059 | jenkins | v1.27.1 | 31 Oct 22 16:59 UTC | |
| delete | -p multinode-165059-m03 | multinode-165059-m03 | jenkins | v1.27.1 | 31 Oct 22 16:59 UTC | 31 Oct 22 16:59 UTC |
| delete | -p multinode-165059 | multinode-165059 | jenkins | v1.27.1 | 31 Oct 22 16:59 UTC | 31 Oct 22 16:59 UTC |
| start | -p test-preload-165950 | test-preload-165950 | jenkins | v1.27.1 | 31 Oct 22 16:59 UTC | 31 Oct 22 17:00 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.4 | | | | | |
| ssh | -p test-preload-165950 | test-preload-165950 | jenkins | v1.27.1 | 31 Oct 22 17:00 UTC | 31 Oct 22 17:00 UTC |
| | -- sudo crictl pull | | | | | |
| | gcr.io/k8s-minikube/busybox | | | | | |
| start | -p test-preload-165950 | test-preload-165950 | jenkins | v1.27.1 | 31 Oct 22 17:00 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --wait=true --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.6 | | | | | |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
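The final two Audit entries are the minimal reproduction of the failure: create the profile on v1.24.4 with --preload=false, then restart the same profile on v1.24.6. As a standalone sketch (abridged from the Audit rows above):
    # Step 1: profile created without a preload (completed at 17:00).
    out/minikube-linux-amd64 start -p test-preload-165950 --memory=2200 --preload=false \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4
    # Step 2: restart onto v1.24.6 with the preload enabled; this is the invocation that exited 81.
    out/minikube-linux-amd64 start -p test-preload-165950 --memory=2200 \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.6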
*
* ==> Last Start <==
* Log file created at: 2022/10/31 17:00:53
Running on machine: ubuntu-20-agent-6
Binary: Built with gc go1.19.2 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1031 17:00:53.400798 123788 out.go:296] Setting OutFile to fd 1 ...
I1031 17:00:53.400923 123788 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 17:00:53.400937 123788 out.go:309] Setting ErrFile to fd 2...
I1031 17:00:53.400944 123788 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 17:00:53.401087 123788 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-3650/.minikube/bin
I1031 17:00:53.401650 123788 out.go:303] Setting JSON to false
I1031 17:00:53.402675 123788 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2603,"bootTime":1667233050,"procs":288,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1031 17:00:53.402746 123788 start.go:126] virtualization: kvm guest
I1031 17:00:53.405697 123788 out.go:177] * [test-preload-165950] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
I1031 17:00:53.407231 123788 out.go:177] - MINIKUBE_LOCATION=15232
I1031 17:00:53.407135 123788 notify.go:220] Checking for updates...
I1031 17:00:53.411021 123788 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1031 17:00:53.412510 123788 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15232-3650/kubeconfig
I1031 17:00:53.414023 123788 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-3650/.minikube
I1031 17:00:53.415484 123788 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I1031 17:00:53.417194 123788 config.go:180] Loaded profile config "test-preload-165950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I1031 17:00:53.419061 123788 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
I1031 17:00:53.420384 123788 driver.go:365] Setting default libvirt URI to qemu:///system
I1031 17:00:53.448510 123788 docker.go:137] docker version: linux-20.10.21
I1031 17:00:53.448586 123788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1031 17:00:53.541306 123788 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:39 SystemTime:2022-10-31 17:00:53.467933423 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
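The harness parses the whole `docker system info --format "{{json .}}"` blob above; individual fields can be read with the same template mechanism. A sketch:
    # Read just the cgroup driver and server version that the blob above reports.
    docker system info --format '{{.CgroupDriver}} {{.ServerVersion}}'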
I1031 17:00:53.541406 123788 docker.go:254] overlay module found
I1031 17:00:53.543484 123788 out.go:177] * Using the docker driver based on existing profile
I1031 17:00:53.544875 123788 start.go:282] selected driver: docker
I1031 17:00:53.544894 123788 start.go:808] validating driver "docker" against &{Name:test-preload-165950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-165950 Namespace:default APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1031 17:00:53.544985 123788 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1031 17:00:53.545708 123788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1031 17:00:53.643264 123788 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:39 SystemTime:2022-10-31 17:00:53.565995365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660665856 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1031 17:00:53.643528 123788 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1031 17:00:53.643548 123788 cni.go:95] Creating CNI manager for ""
I1031 17:00:53.643554 123788 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I1031 17:00:53.643565 123788 start_flags.go:317] config:
{Name:test-preload-165950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-165950 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1031 17:00:53.645909 123788 out.go:177] * Starting control plane node test-preload-165950 in cluster test-preload-165950
I1031 17:00:53.647496 123788 cache.go:120] Beginning downloading kic base image for docker with containerd
I1031 17:00:53.648990 123788 out.go:177] * Pulling base image ...
I1031 17:00:53.650498 123788 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I1031 17:00:53.650525 123788 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
I1031 17:00:53.672685 123788 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
I1031 17:00:53.672711 123788 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
I1031 17:00:53.749918 123788 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
I1031 17:00:53.750010 123788 cache.go:57] Caching tarball of preloaded images
I1031 17:00:53.750392 123788 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I1031 17:00:53.752786 123788 out.go:177] * Downloading Kubernetes v1.24.6 preload ...
I1031 17:00:53.754251 123788 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I1031 17:00:53.854172 123788 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4?checksum=md5:0de094b674a9198bc47721c3b23603d5 -> /home/jenkins/minikube-integration/15232-3650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
I1031 17:00:56.444223 123788 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I1031 17:00:56.444331 123788 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15232-3650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I1031 17:00:57.333820 123788 cache.go:60] Finished verifying existence of preloaded tar for v1.24.6 on containerd
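The preload download at 17:00:53 pins an md5 checksum in the URL, and the step at 17:00:56 re-verifies it before the tarball is trusted. A manual spot check of the cached file would be (digest copied from the download line above):
    # Compare the cached preload tarball against the checksum embedded in the download URL.
    md5sum /home/jenkins/minikube-integration/15232-3650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
    # expected: 0de094b674a9198bc47721c3b23603d5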
I1031 17:00:57.333953 123788 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/config.json ...
I1031 17:00:57.334153 123788 cache.go:208] Successfully downloaded all kic artifacts
I1031 17:00:57.334182 123788 start.go:364] acquiring machines lock for test-preload-165950: {Name:mk5e2148763cdda5260ddcfe6c84de7081b8765d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1031 17:00:57.334270 123788 start.go:368] acquired machines lock for "test-preload-165950" in 68.35µs
I1031 17:00:57.334286 123788 start.go:96] Skipping create...Using existing machine configuration
I1031 17:00:57.334291 123788 fix.go:55] fixHost starting:
I1031 17:00:57.334493 123788 cli_runner.go:164] Run: docker container inspect test-preload-165950 --format={{.State.Status}}
I1031 17:00:57.357514 123788 fix.go:103] recreateIfNeeded on test-preload-165950: state=Running err=<nil>
W1031 17:00:57.357546 123788 fix.go:129] unexpected machine state, will restart: <nil>
I1031 17:00:57.360746 123788 out.go:177] * Updating the running docker "test-preload-165950" container ...
I1031 17:00:57.362040 123788 machine.go:88] provisioning docker machine ...
I1031 17:00:57.362068 123788 ubuntu.go:169] provisioning hostname "test-preload-165950"
I1031 17:00:57.362115 123788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-165950
I1031 17:00:57.384936 123788 main.go:134] libmachine: Using SSH client type: native
I1031 17:00:57.385100 123788 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49277 <nil> <nil>}
I1031 17:00:57.385117 123788 main.go:134] libmachine: About to run SSH command:
sudo hostname test-preload-165950 && echo "test-preload-165950" | sudo tee /etc/hostname
I1031 17:00:57.508480 123788 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-165950
I1031 17:00:57.508560 123788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-165950
I1031 17:00:57.532320 123788 main.go:134] libmachine: Using SSH client type: native
I1031 17:00:57.532481 123788 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49277 <nil> <nil>}
I1031 17:00:57.532510 123788 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\stest-preload-165950' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-165950/g' /etc/hosts;
else
echo '127.0.1.1 test-preload-165950' | sudo tee -a /etc/hosts;
fi
fi
I1031 17:00:57.648181 123788 main.go:134] libmachine: SSH cmd err, output: <nil>:
I1031 17:00:57.648212 123788 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15232-3650/.minikube CaCertPath:/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15232-3650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15232-3650/.minikube}
I1031 17:00:57.648234 123788 ubuntu.go:177] setting up certificates
I1031 17:00:57.648244 123788 provision.go:83] configureAuth start
I1031 17:00:57.648321 123788 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-165950
I1031 17:00:57.672013 123788 provision.go:138] copyHostCerts
I1031 17:00:57.672105 123788 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3650/.minikube/ca.pem, removing ...
I1031 17:00:57.672125 123788 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3650/.minikube/ca.pem
I1031 17:00:57.672195 123788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15232-3650/.minikube/ca.pem (1078 bytes)
I1031 17:00:57.672283 123788 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3650/.minikube/cert.pem, removing ...
I1031 17:00:57.672295 123788 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3650/.minikube/cert.pem
I1031 17:00:57.672323 123788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15232-3650/.minikube/cert.pem (1123 bytes)
I1031 17:00:57.672372 123788 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-3650/.minikube/key.pem, removing ...
I1031 17:00:57.672381 123788 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-3650/.minikube/key.pem
I1031 17:00:57.672407 123788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15232-3650/.minikube/key.pem (1679 bytes)
I1031 17:00:57.672455 123788 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15232-3650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca-key.pem org=jenkins.test-preload-165950 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-165950]
I1031 17:00:57.797650 123788 provision.go:172] copyRemoteCerts
I1031 17:00:57.797711 123788 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1031 17:00:57.797742 123788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-165950
I1031 17:00:57.822580 123788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/test-preload-165950/id_rsa Username:docker}
I1031 17:00:57.907487 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1031 17:00:57.925574 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I1031 17:00:57.945093 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1031 17:00:57.962901 123788 provision.go:86] duration metric: configureAuth took 314.615745ms
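configureAuth regenerated the server certificate with the SAN set shown at 17:00:57 and pushed it to /etc/docker inside the node. A sketch to confirm the SANs landed (assuming openssl is available in the node image):
    # List the Subject Alternative Names on the provisioned server cert.
    out/minikube-linux-amd64 ssh -p test-preload-165950 -- "sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"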
I1031 17:00:57.962927 123788 ubuntu.go:193] setting minikube options for container-runtime
I1031 17:00:57.963104 123788 config.go:180] Loaded profile config "test-preload-165950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.6
I1031 17:00:57.963117 123788 machine.go:91] provisioned docker machine in 601.061986ms
I1031 17:00:57.963123 123788 start.go:300] post-start starting for "test-preload-165950" (driver="docker")
I1031 17:00:57.963131 123788 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1031 17:00:57.963167 123788 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1031 17:00:57.963199 123788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-165950
I1031 17:00:57.987686 123788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/test-preload-165950/id_rsa Username:docker}
I1031 17:00:58.071508 123788 ssh_runner.go:195] Run: cat /etc/os-release
I1031 17:00:58.074511 123788 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1031 17:00:58.074535 123788 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1031 17:00:58.074543 123788 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1031 17:00:58.074549 123788 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I1031 17:00:58.074562 123788 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-3650/.minikube/addons for local assets ...
I1031 17:00:58.074617 123788 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-3650/.minikube/files for local assets ...
I1031 17:00:58.074698 123788 filesync.go:149] local asset: /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/100972.pem -> 100972.pem in /etc/ssl/certs
I1031 17:00:58.074797 123788 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1031 17:00:58.082460 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/100972.pem --> /etc/ssl/certs/100972.pem (1708 bytes)
I1031 17:00:58.099618 123788 start.go:303] post-start completed in 136.482468ms
I1031 17:00:58.099687 123788 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1031 17:00:58.099718 123788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-165950
I1031 17:00:58.122912 123788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/test-preload-165950/id_rsa Username:docker}
I1031 17:00:58.204709 123788 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1031 17:00:58.208921 123788 fix.go:57] fixHost completed within 874.623341ms
I1031 17:00:58.208952 123788 start.go:83] releasing machines lock for "test-preload-165950", held for 874.669884ms
I1031 17:00:58.209045 123788 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-165950
I1031 17:00:58.231368 123788 ssh_runner.go:195] Run: systemctl --version
I1031 17:00:58.231411 123788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-165950
I1031 17:00:58.231475 123788 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I1031 17:00:58.231537 123788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-165950
I1031 17:00:58.254909 123788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/test-preload-165950/id_rsa Username:docker}
I1031 17:00:58.256772 123788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-3650/.minikube/machines/test-preload-165950/id_rsa Username:docker}
I1031 17:00:58.359932 123788 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1031 17:00:58.370867 123788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1031 17:00:58.380533 123788 docker.go:189] disabling docker service ...
I1031 17:00:58.380587 123788 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1031 17:00:58.390611 123788 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1031 17:00:58.400540 123788 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1031 17:00:58.503571 123788 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1031 17:00:58.601357 123788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1031 17:00:58.610768 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1031 17:00:58.623982 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
I1031 17:00:58.631971 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
I1031 17:00:58.639948 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
I1031 17:00:58.647731 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
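The four sed edits above rewrite /etc/containerd/config.toml in place before containerd is restarted at 17:00:58. Collected in one place, the keys they set are sketched below:
    # Keys rewritten in /etc/containerd/config.toml by the sed commands above:
    #   sandbox_image          = "k8s.gcr.io/pause:3.7"
    #   restrict_oom_score_adj = false
    #   SystemdCgroup          = false            # cgroupfs, matching the docker info above
    #   conf_dir               = "/etc/cni/net.mk"
    # Spot-check after the restart:
    sudo grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml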
I1031 17:00:58.655857 123788 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1031 17:00:58.662159 123788 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1031 17:00:58.668160 123788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1031 17:00:58.765634 123788 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1031 17:00:58.838270 123788 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
I1031 17:00:58.838340 123788 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1031 17:00:58.842645 123788 start.go:472] Will wait 60s for crictl version
I1031 17:00:58.842710 123788 ssh_runner.go:195] Run: sudo crictl version
I1031 17:00:58.873990 123788 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
stdout:
stderr:
time="2022-10-31T17:00:58Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
I1031 17:01:09.921926 123788 ssh_runner.go:195] Run: sudo crictl version
I1031 17:01:09.945289 123788 start.go:481] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.6.9
RuntimeApiVersion: v1alpha2
I1031 17:01:09.945349 123788 ssh_runner.go:195] Run: containerd --version
I1031 17:01:09.970198 123788 ssh_runner.go:195] Run: containerd --version
I1031 17:01:09.996976 123788 out.go:177] * Preparing Kubernetes v1.24.6 on containerd 1.6.9 ...
I1031 17:01:09.998646 123788 cli_runner.go:164] Run: docker network inspect test-preload-165950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1031 17:01:10.021855 123788 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
I1031 17:01:10.025738 123788 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I1031 17:01:10.025795 123788 ssh_runner.go:195] Run: sudo crictl images --output json
I1031 17:01:10.050811 123788 containerd.go:549] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.6". assuming images are not preloaded.
I1031 17:01:10.050875 123788 ssh_runner.go:195] Run: which lz4
I1031 17:01:10.053855 123788 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I1031 17:01:10.056765 123788 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/preloaded.tar.lz4': No such file or directory
I1031 17:01:10.056789 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458739102 bytes)
I1031 17:01:11.012204 123788 containerd.go:496] Took 0.958385 seconds to copy over tarball
I1031 17:01:11.012279 123788 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I1031 17:01:13.898440 123788 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.886126931s)
I1031 17:01:13.898474 123788 containerd.go:503] Took 2.886238 seconds to extract the tarball
I1031 17:01:13.898485 123788 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1031 17:01:13.924871 123788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1031 17:01:14.027291 123788 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1031 17:01:14.105585 123788 ssh_runner.go:195] Run: sudo crictl images --output json
I1031 17:01:14.153742 123788 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.24.6 k8s.gcr.io/kube-controller-manager:v1.24.6 k8s.gcr.io/kube-scheduler:v1.24.6 k8s.gcr.io/kube-proxy:v1.24.6 k8s.gcr.io/pause:3.7 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
I1031 17:01:14.153832 123788 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I1031 17:01:14.153879 123788 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.24.6
I1031 17:01:14.153933 123788 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.24.6
I1031 17:01:14.153950 123788 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.3-0
I1031 17:01:14.153997 123788 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.24.6
I1031 17:01:14.154093 123788 image.go:134] retrieving image: k8s.gcr.io/pause:3.7
I1031 17:01:14.154143 123788 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
I1031 17:01:14.154158 123788 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.24.6
I1031 17:01:14.154858 123788 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.24.6: Error: No such image: k8s.gcr.io/kube-scheduler:v1.24.6
I1031 17:01:14.154930 123788 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.24.6: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.24.6
I1031 17:01:14.155027 123788 image.go:177] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.6: Error: No such image: k8s.gcr.io/coredns/coredns:v1.8.6
I1031 17:01:14.155037 123788 image.go:177] daemon lookup for k8s.gcr.io/pause:3.7: Error: No such image: k8s.gcr.io/pause:3.7
I1031 17:01:14.155035 123788 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I1031 17:01:14.155041 123788 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.5.3-0: Error: No such image: k8s.gcr.io/etcd:3.5.3-0
I1031 17:01:14.154859 123788 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.24.6: Error: No such image: k8s.gcr.io/kube-proxy:v1.24.6
I1031 17:01:14.155056 123788 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.24.6: Error: No such image: k8s.gcr.io/kube-apiserver:v1.24.6
I1031 17:01:14.639297 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.3-0"
I1031 17:01:14.649105 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.6"
I1031 17:01:14.661797 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
I1031 17:01:14.676815 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.7"
I1031 17:01:14.688769 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.6"
I1031 17:01:14.693655 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.6"
I1031 17:01:14.714906 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.6"
I1031 17:01:14.949489 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
I1031 17:01:15.471396 123788 cache_images.go:116] "k8s.gcr.io/etcd:3.5.3-0" needs transfer: "k8s.gcr.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
I1031 17:01:15.471444 123788 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.3-0
I1031 17:01:15.471487 123788 ssh_runner.go:195] Run: which crictl
I1031 17:01:15.667668 123788 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6": (1.005826513s)
I1031 17:01:15.667922 123788 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
I1031 17:01:15.667990 123788 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
I1031 17:01:15.668043 123788 ssh_runner.go:195] Run: which crictl
I1031 17:01:15.667834 123788 cache_images.go:116] "k8s.gcr.io/pause:3.7" needs transfer: "k8s.gcr.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
I1031 17:01:15.668185 123788 cri.go:216] Removing image: k8s.gcr.io/pause:3.7
I1031 17:01:15.668229 123788 ssh_runner.go:195] Run: which crictl
I1031 17:01:15.667889 123788 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.6": (1.018754573s)
I1031 17:01:15.668329 123788 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.24.6" needs transfer: "k8s.gcr.io/kube-scheduler:v1.24.6" does not exist at hash "c786c777a4e1c21907e77042428837645fa382d3bd14925cf78f0d163d6d332e" in container runtime
I1031 17:01:15.668357 123788 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.24.6
I1031 17:01:15.668378 123788 ssh_runner.go:195] Run: which crictl
I1031 17:01:15.675016 123788 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.24.6" needs transfer: "k8s.gcr.io/kube-apiserver:v1.24.6" does not exist at hash "860f263331c9513ddab44d4d8a9a4a7304313b3aa0776decc1d7fc6acdd69ab0" in container runtime
I1031 17:01:15.675057 123788 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.24.6
I1031 17:01:15.675083 123788 ssh_runner.go:195] Run: which crictl
I1031 17:01:15.748343 123788 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.6": (1.05465106s)
I1031 17:01:15.748403 123788 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.24.6" needs transfer: "k8s.gcr.io/kube-proxy:v1.24.6" does not exist at hash "0bb39497ab33bb5f8aaff88ced53a5fcd360fcf5da647609619d4f5c8f1483d2" in container runtime
I1031 17:01:15.748433 123788 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.24.6
I1031 17:01:15.748479 123788 ssh_runner.go:195] Run: which crictl
I1031 17:01:15.773417 123788 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.6": (1.058470688s)
I1031 17:01:15.773475 123788 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.24.6" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.24.6" does not exist at hash "c6c20157a42337ecb7675be59e1dc34bc5a91288c7eeac1e30ec97767a9055eb" in container runtime
I1031 17:01:15.773543 123788 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.24.6
I1031 17:01:15.773610 123788 ssh_runner.go:195] Run: which crictl
I1031 17:01:15.796393 123788 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
I1031 17:01:15.796447 123788 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I1031 17:01:15.796450 123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0
I1031 17:01:15.796474 123788 ssh_runner.go:195] Run: which crictl
I1031 17:01:15.796543 123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
I1031 17:01:15.796574 123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.6
I1031 17:01:15.796615 123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7
I1031 17:01:15.796661 123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.6
I1031 17:01:15.796762 123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6
I1031 17:01:15.796793 123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.6
I1031 17:01:15.849303 123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1031 17:01:16.518326 123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0
I1031 17:01:16.518410 123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.6
I1031 17:01:16.518448 123788 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
I1031 17:01:16.518466 123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
I1031 17:01:16.518546 123788 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
I1031 17:01:16.518609 123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7
I1031 17:01:16.518661 123788 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
I1031 17:01:16.518667 123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.6
I1031 17:01:16.519958 123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.6
I1031 17:01:16.520022 123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.6
I1031 17:01:16.520164 123788 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
I1031 17:01:16.520245 123788 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
I1031 17:01:16.522338 123788 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
I1031 17:01:16.522367 123788 containerd.go:233] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
I1031 17:01:16.522400 123788 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
I1031 17:01:16.522738 123788 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
I1031 17:01:16.522918 123788 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
I1031 17:01:16.523532 123788 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
I1031 17:01:23.289265 123788 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0: (6.766830961s)
I1031 17:01:23.289325 123788 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 from cache
I1031 17:01:23.289354 123788 containerd.go:233] Loading image: /var/lib/minikube/images/coredns_v1.8.6
I1031 17:01:23.289408 123788 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
I1031 17:01:24.806710 123788 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6: (1.517273083s)
I1031 17:01:24.806742 123788 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache
I1031 17:01:24.806797 123788 containerd.go:233] Loading image: /var/lib/minikube/images/pause_3.7
I1031 17:01:24.806862 123788 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7
I1031 17:01:24.985051 123788 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 from cache
I1031 17:01:24.985104 123788 containerd.go:233] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I1031 17:01:24.985171 123788 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
I1031 17:01:25.471171 123788 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I1031 17:01:25.471237 123788 cache_images.go:92] LoadImages completed in 11.317456964s
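Each load above is a two-step copy-then-import: the file's size and mtime are probed first (the stat -c "%s %y" runs) so an unchanged tarball skips the copy, then the import goes into containerd's k8s.io namespace, which is the one the kubelet's runtime reads. A sketch of one load, using the etcd tarball path from the log:
# Sketch only: both commands appear verbatim in the log above.
TARBALL=/var/lib/minikube/images/etcd_3.5.3-0
# Size + mtime probe; a match lets the copy be skipped entirely.
stat -c "%s %y" "$TARBALL"
# Import into the k8s.io namespace so crictl/kubelet can see the image.
sudo ctr -n=k8s.io images import "$TARBALL"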
W1031 17:01:25.471403 123788 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/15232-3650/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.6: no such file or directory
I1031 17:01:25.471469 123788 ssh_runner.go:195] Run: sudo crictl info
I1031 17:01:25.549548 123788 cni.go:95] Creating CNI manager for ""
I1031 17:01:25.549585 123788 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I1031 17:01:25.549601 123788 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1031 17:01:25.549618 123788 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-165950 NodeName:test-preload-165950 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
I1031 17:01:25.549786 123788 kubeadm.go:161] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.67.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "test-preload-165950"
kubeletExtraArgs:
node-ip: 192.168.67.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.6
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1031 17:01:25.549897 123788 kubeadm.go:962] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-165950 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.24.6 ClusterName:test-preload-165950 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I1031 17:01:25.549966 123788 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.6
I1031 17:01:25.559048 123788 binaries.go:44] Found k8s binaries, skipping transfer
I1031 17:01:25.559118 123788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1031 17:01:25.568146 123788 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (512 bytes)
I1031 17:01:25.583110 123788 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1031 17:01:25.598681 123788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
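Three artifacts are staged here: the kubelet drop-in (10-kubeadm.conf), the kubelet.service unit, and the rendered kubeadm config, written as kubeadm.yaml.new so it can be diffed against the live copy further down. One systemd detail the log leaves implicit, sketched below; the daemon-reload is a generic systemd requirement, not a step quoted from this log:
# Sketch only: mkdir mirrors the run above; a rewritten unit or drop-in
# only takes effect after systemd re-reads its configuration.
sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
sudo systemctl daemon-reload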
I1031 17:01:25.662413 123788 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I1031 17:01:25.666268 123788 certs.go:54] Setting up /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950 for IP: 192.168.67.2
I1031 17:01:25.666403 123788 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15232-3650/.minikube/ca.key
I1031 17:01:25.666458 123788 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15232-3650/.minikube/proxy-client-ca.key
I1031 17:01:25.666558 123788 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/client.key
I1031 17:01:25.666633 123788 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/apiserver.key.c7fa3a9e
I1031 17:01:25.666689 123788 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/proxy-client.key
I1031 17:01:25.666801 123788 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/10097.pem (1338 bytes)
W1031 17:01:25.666847 123788 certs.go:384] ignoring /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/10097_empty.pem, impossibly tiny 0 bytes
I1031 17:01:25.666873 123788 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca-key.pem (1679 bytes)
I1031 17:01:25.666908 123788 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/ca.pem (1078 bytes)
I1031 17:01:25.666943 123788 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/cert.pem (1123 bytes)
I1031 17:01:25.666974 123788 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/certs/home/jenkins/minikube-integration/15232-3650/.minikube/certs/key.pem (1679 bytes)
I1031 17:01:25.667033 123788 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/100972.pem (1708 bytes)
I1031 17:01:25.667673 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1031 17:01:25.690455 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1031 17:01:25.763539 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1031 17:01:25.790140 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1031 17:01:25.861083 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1031 17:01:25.879599 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1031 17:01:25.898515 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1031 17:01:25.922119 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1031 17:01:25.959078 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/files/etc/ssl/certs/100972.pem --> /usr/share/ca-certificates/100972.pem (1708 bytes)
I1031 17:01:25.980032 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1031 17:01:26.000424 123788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-3650/.minikube/certs/10097.pem --> /usr/share/ca-certificates/10097.pem (1338 bytes)
I1031 17:01:26.053381 123788 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1031 17:01:26.067535 123788 ssh_runner.go:195] Run: openssl version
I1031 17:01:26.072627 123788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1031 17:01:26.080989 123788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1031 17:01:26.085427 123788 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Oct 31 16:37 /usr/share/ca-certificates/minikubeCA.pem
I1031 17:01:26.085503 123788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1031 17:01:26.091369 123788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1031 17:01:26.099802 123788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10097.pem && ln -fs /usr/share/ca-certificates/10097.pem /etc/ssl/certs/10097.pem"
I1031 17:01:26.108642 123788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10097.pem
I1031 17:01:26.112303 123788 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Oct 31 16:41 /usr/share/ca-certificates/10097.pem
I1031 17:01:26.112374 123788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10097.pem
I1031 17:01:26.125705 123788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10097.pem /etc/ssl/certs/51391683.0"
I1031 17:01:26.133946 123788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100972.pem && ln -fs /usr/share/ca-certificates/100972.pem /etc/ssl/certs/100972.pem"
I1031 17:01:26.142159 123788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100972.pem
I1031 17:01:26.145685 123788 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Oct 31 16:41 /usr/share/ca-certificates/100972.pem
I1031 17:01:26.145748 123788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100972.pem
I1031 17:01:26.150967 123788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100972.pem /etc/ssl/certs/3ec20f2e.0"
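The three link commands above implement OpenSSL's hashed-directory lookup: a CA under /etc/ssl/certs is found by its subject hash, so each PEM gets a <hash>.0 symlink alongside a name-based one. A sketch using the minikubeCA values from this run:
# Sketch only: the hash printed here, b5213941, matches the link above.
CERT=/usr/share/ca-certificates/minikubeCA.pem
sudo ln -fs "$CERT" /etc/ssl/certs/minikubeCA.pem
HASH=$(openssl x509 -hash -noout -in "$CERT")
sudo test -L "/etc/ssl/certs/${HASH}.0" \
  || sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"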
I1031 17:01:26.158917 123788 kubeadm.go:396] StartCluster: {Name:test-preload-165950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-165950 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1031 17:01:26.159010 123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1031 17:01:26.159074 123788 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1031 17:01:26.185271 123788 cri.go:87] found id: "9b493051380ea5f84db2bf6d6b500816b4bfc7d73549a3fc267337671408794c"
I1031 17:01:26.185298 123788 cri.go:87] found id: "9523dbf74df3ff703859059525cf2e837089463bffc76ca75ed4636d64233fbf"
I1031 17:01:26.185306 123788 cri.go:87] found id: "92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e"
I1031 17:01:26.185314 123788 cri.go:87] found id: ""
I1031 17:01:26.185368 123788 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I1031 17:01:26.219799 123788 cri.go:114] JSON = [{"ociVersion":"1.0.2-dev","id":"08af09ff6dc7343fadd5f527821607e0a139864f2cf045f41ddb8a637dd3684e","pid":2647,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/08af09ff6dc7343fadd5f527821607e0a139864f2cf045f41ddb8a637dd3684e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/08af09ff6dc7343fadd5f527821607e0a139864f2cf045f41ddb8a637dd3684e/rootfs","created":"2022-10-31T17:00:48.864140497Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/coredns/coredns:v1.8.6","io.kubernetes.cri.sandbox-id":"ddd9b9fed95f4ffb40c4492a0807846ab0d1f6762b0d1b8ddef6804023ccf4d2","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-8wsrc","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"08b4cd316b9b6815f404ec4b186454048a3e6989dcb7c0423dfdfc17e82f6489","pid":2192,"status":"running",
"bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/08b4cd316b9b6815f404ec4b186454048a3e6989dcb7c0423dfdfc17e82f6489","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/08b4cd316b9b6815f404ec4b186454048a3e6989dcb7c0423dfdfc17e82f6489/rootfs","created":"2022-10-31T17:00:31.051060479Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"08b4cd316b9b6815f404ec4b186454048a3e6989dcb7c0423dfdfc17e82f6489","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-54b5q_0ff95637-a367-440b-918f-495391f2f1cf","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-54b5q","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0e1033e5cdbc18bbf5d5b9fb465ba7904c14d9cf096e385708e56a44984ea823","pid":1627,"status":"running","bundle":"/ru
n/containerd/io.containerd.runtime.v2.task/k8s.io/0e1033e5cdbc18bbf5d5b9fb465ba7904c14d9cf096e385708e56a44984ea823","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e1033e5cdbc18bbf5d5b9fb465ba7904c14d9cf096e385708e56a44984ea823/rootfs","created":"2022-10-31T17:00:11.805153802Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/etcd:3.5.3-0","io.kubernetes.cri.sandbox-id":"bd161590496d96dfd772253e8fc04aa2ace241cd015a3e030edb9980f0002865","io.kubernetes.cri.sandbox-name":"etcd-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0f97f1cbba7302aeb3085b591e2b35bc859465b29b1dbeeabec247e6d5bae53f","pid":1649,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f97f1cbba7302aeb3085b591e2b35bc859465b29b1dbeeabec247e6d5bae53f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f97f1cbba7302aeb3085
b591e2b35bc859465b29b1dbeeabec247e6d5bae53f/rootfs","created":"2022-10-31T17:00:11.813683549Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-apiserver:v1.24.4","io.kubernetes.cri.sandbox-id":"ad15e41729d782e1165f600243de9c56f425cdb5db116cf881eb15588717a8a9","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"10c4cf73c3ac649d2cce0512474075cbd3f6123e0f57b48360ef852593f8b322","pid":3587,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10c4cf73c3ac649d2cce0512474075cbd3f6123e0f57b48360ef852593f8b322","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10c4cf73c3ac649d2cce0512474075cbd3f6123e0f57b48360ef852593f8b322/rootfs","created":"2022-10-31T17:01:17.561189062Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.
cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"10c4cf73c3ac649d2cce0512474075cbd3f6123e0f57b48360ef852593f8b322","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-165950_745aa6453df7e4d7a2bedb8ef855e2b8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1ba56f1620b20d5793015175f468306b30d9515d63ca39cac54cc35a02a55d95","pid":3592,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ba56f1620b20d5793015175f468306b30d9515d63ca39cac54cc35a02a55d95","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ba56f1620b20d5793015175f468306b30d9515d63ca39cac54cc35a02a55d95/rootfs","created":"2022-10-31T17:01:17.563577882Z","annotations":{"io.kubernetes.cri.container-type":"san
dbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"1ba56f1620b20d5793015175f468306b30d9515d63ca39cac54cc35a02a55d95","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-54b5q_0ff95637-a367-440b-918f-495391f2f1cf","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-54b5q","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"24e78df27901fc792eb7252319ce689dd890b0de8103efd94a809d5e34ef32b4","pid":3582,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/24e78df27901fc792eb7252319ce689dd890b0de8103efd94a809d5e34ef32b4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/24e78df27901fc792eb7252319ce689dd890b0de8103efd94a809d5e34ef32b4/rootfs","created":"2022-10-31T17:01:17.563663092Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kub
ernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"24e78df27901fc792eb7252319ce689dd890b0de8103efd94a809d5e34ef32b4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-8wsrc_8e76d465-ae9a-4121-b7ed-1ef94dd20b7e","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-8wsrc","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4c13d0592c58fca24c43401e993684c5570ddc76b74d030729c3dcc469b40b53","pid":2448,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4c13d0592c58fca24c43401e993684c5570ddc76b74d030729c3dcc469b40b53","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4c13d0592c58fca24c43401e993684c5570ddc76b74d030729c3dcc469b40b53/rootfs","created":"2022-10-31T17:00:34.399637747Z","annotations":{"io.kubernetes.cri.container-name":"kin
dnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20221004-44d545d1","io.kubernetes.cri.sandbox-id":"e7a1e00234ba1ca933146b83693ba6b5ab619fdcb5e23efb9fa200bb8ba27ef4","io.kubernetes.cri.sandbox-name":"kindnet-jljff","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"534d5230b843ffe567d7578d4a3512ebb385a7ef30b8d6ee15fe4d4b23effeb2","pid":1514,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/534d5230b843ffe567d7578d4a3512ebb385a7ef30b8d6ee15fe4d4b23effeb2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/534d5230b843ffe567d7578d4a3512ebb385a7ef30b8d6ee15fe4d4b23effeb2/rootfs","created":"2022-10-31T17:00:11.599833796Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"534d5230b
843ffe567d7578d4a3512ebb385a7ef30b8d6ee15fe4d4b23effeb2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-165950_f7f285bbceeae66435f07854fddd011c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"715100db6ef995efee732d28990297ac96df6f601c86ae90c8ca69f54dc02d49","pid":1640,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/715100db6ef995efee732d28990297ac96df6f601c86ae90c8ca69f54dc02d49","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/715100db6ef995efee732d28990297ac96df6f601c86ae90c8ca69f54dc02d49/rootfs","created":"2022-10-31T17:00:11.816128062Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-controller-manager:v1.24.4
","io.kubernetes.cri.sandbox-id":"534d5230b843ffe567d7578d4a3512ebb385a7ef30b8d6ee15fe4d4b23effeb2","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"72c31c9f4fc4e7f506dc770bfda502f0386b9d6cacd1ad2e55fe22dc5b40071f","pid":3586,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/72c31c9f4fc4e7f506dc770bfda502f0386b9d6cacd1ad2e55fe22dc5b40071f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/72c31c9f4fc4e7f506dc770bfda502f0386b9d6cacd1ad2e55fe22dc5b40071f/rootfs","created":"2022-10-31T17:01:17.562640113Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"72c31c9f4fc4e7f506dc770bfda502f0386b9d6cacd1ad2e55fe22dc5b40071f","io.kubernetes.cri.sandbox-log-directory":"
/var/log/pods/kube-system_kindnet-jljff_e66c31a9-8e36-4914-a086-32ba2b3dc004","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-jljff","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8370f6b2249707160393b54a7c52a463ec41c53aa1abcf098ef1833e2d80e4f1","pid":2589,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8370f6b2249707160393b54a7c52a463ec41c53aa1abcf098ef1833e2d80e4f1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8370f6b2249707160393b54a7c52a463ec41c53aa1abcf098ef1833e2d80e4f1/rootfs","created":"2022-10-31T17:00:48.750343759Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"8370f6b2249707160393b54a7c52a463ec41c53aa1abcf098ef1833e2d80e4f1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pod
s/kube-system_storage-provisioner_5031015c-081e-49e2-8d46-09fd879a755c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"91216b43bb46df97ee8f7081d129445a14177f38a7784ca329cb279b5be6d0bf","pid":2648,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/91216b43bb46df97ee8f7081d129445a14177f38a7784ca329cb279b5be6d0bf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/91216b43bb46df97ee8f7081d129445a14177f38a7784ca329cb279b5be6d0bf/rootfs","created":"2022-10-31T17:00:48.864144417Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"8370f6b2249707160393b54a7c52a463ec41c53aa1abcf098ef1833e2d80e4f1","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.k
ubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e","pid":3774,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e/rootfs","created":"2022-10-31T17:01:22.960016255Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"d14cbe31893c40cc251f526b6f68b2f65ff3d392117a6bdaf1ae8266373867d3","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a0e0dbf2607c88f0112cc09da469ea6d91afc209b0b43752a7589ed355a42383","pid":151
2,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a0e0dbf2607c88f0112cc09da469ea6d91afc209b0b43752a7589ed355a42383","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a0e0dbf2607c88f0112cc09da469ea6d91afc209b0b43752a7589ed355a42383/rootfs","created":"2022-10-31T17:00:11.597466189Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a0e0dbf2607c88f0112cc09da469ea6d91afc209b0b43752a7589ed355a42383","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-165950_745aa6453df7e4d7a2bedb8ef855e2b8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ac7640ccdb2753e897ebe26202c3f06faba50fd0471aaf7268dea2
c9ac6a5383","pid":3588,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac7640ccdb2753e897ebe26202c3f06faba50fd0471aaf7268dea2c9ac6a5383","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac7640ccdb2753e897ebe26202c3f06faba50fd0471aaf7268dea2c9ac6a5383/rootfs","created":"2022-10-31T17:01:17.560848225Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ac7640ccdb2753e897ebe26202c3f06faba50fd0471aaf7268dea2c9ac6a5383","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-165950_8a2a3eb7a75eb7f169392f7d77b36d78","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ad15e41729d782e1165f600243de9c56f425cdb5db116cf881eb1
5588717a8a9","pid":1513,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad15e41729d782e1165f600243de9c56f425cdb5db116cf881eb15588717a8a9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad15e41729d782e1165f600243de9c56f425cdb5db116cf881eb15588717a8a9/rootfs","created":"2022-10-31T17:00:11.599047477Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"ad15e41729d782e1165f600243de9c56f425cdb5db116cf881eb15588717a8a9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-165950_f04a99c5aa78b1fe8d30a6291f8f68f1","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bd161590496d96dfd772253e8fc04aa2
ace241cd015a3e030edb9980f0002865","pid":1515,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd161590496d96dfd772253e8fc04aa2ace241cd015a3e030edb9980f0002865","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd161590496d96dfd772253e8fc04aa2ace241cd015a3e030edb9980f0002865/rootfs","created":"2022-10-31T17:00:11.599316909Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"bd161590496d96dfd772253e8fc04aa2ace241cd015a3e030edb9980f0002865","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-165950_8a2a3eb7a75eb7f169392f7d77b36d78","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c1d26ec1a24e08c41b4eed6cd4a281a
528dd2a96323f389584c153ebdccd783f","pid":3487,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1d26ec1a24e08c41b4eed6cd4a281a528dd2a96323f389584c153ebdccd783f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1d26ec1a24e08c41b4eed6cd4a281a528dd2a96323f389584c153ebdccd783f/rootfs","created":"2022-10-31T17:01:17.250859Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"c1d26ec1a24e08c41b4eed6cd4a281a528dd2a96323f389584c153ebdccd783f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-165950_f04a99c5aa78b1fe8d30a6291f8f68f1","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c762f46164888
748a2ca9e5a525e7de747208a14141e10d7830e8dab3c7f2173","pid":2229,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c762f46164888748a2ca9e5a525e7de747208a14141e10d7830e8dab3c7f2173","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c762f46164888748a2ca9e5a525e7de747208a14141e10d7830e8dab3c7f2173/rootfs","created":"2022-10-31T17:00:31.186453275Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-proxy:v1.24.4","io.kubernetes.cri.sandbox-id":"08b4cd316b9b6815f404ec4b186454048a3e6989dcb7c0423dfdfc17e82f6489","io.kubernetes.cri.sandbox-name":"kube-proxy-54b5q","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ca1bd3bcca0419f20c372cc6baa843c8373d756258258a9103334a22b452c16d","pid":1648,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca1bd3bcca0419f20c372cc6baa843c8373d756258258a9103
334a22b452c16d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca1bd3bcca0419f20c372cc6baa843c8373d756258258a9103334a22b452c16d/rootfs","created":"2022-10-31T17:00:11.813286817Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-scheduler:v1.24.4","io.kubernetes.cri.sandbox-id":"a0e0dbf2607c88f0112cc09da469ea6d91afc209b0b43752a7589ed355a42383","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d14cbe31893c40cc251f526b6f68b2f65ff3d392117a6bdaf1ae8266373867d3","pid":3530,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d14cbe31893c40cc251f526b6f68b2f65ff3d392117a6bdaf1ae8266373867d3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d14cbe31893c40cc251f526b6f68b2f65ff3d392117a6bdaf1ae8266373867d3/rootfs","created":"2022-10
-31T17:01:17.36677191Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"d14cbe31893c40cc251f526b6f68b2f65ff3d392117a6bdaf1ae8266373867d3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_5031015c-081e-49e2-8d46-09fd879a755c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ddd9b9fed95f4ffb40c4492a0807846ab0d1f6762b0d1b8ddef6804023ccf4d2","pid":2590,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ddd9b9fed95f4ffb40c4492a0807846ab0d1f6762b0d1b8ddef6804023ccf4d2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ddd9b9fed95f4ffb40c4492a0807846ab0d1f6762b0d1b8ddef6804023ccf4d2/rootfs","created":"2022-10-31T17:0
0:48.752117294Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ddd9b9fed95f4ffb40c4492a0807846ab0d1f6762b0d1b8ddef6804023ccf4d2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-8wsrc_8e76d465-ae9a-4121-b7ed-1ef94dd20b7e","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-8wsrc","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"debeed95bef4949487f42a531f238fb279e4d6743e734f2eade3a2424ececba7","pid":3398,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/debeed95bef4949487f42a531f238fb279e4d6743e734f2eade3a2424ececba7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/debeed95bef4949487f42a531f238fb279e4d6743e734f2eade3a2424ececba7/rootfs","created":"20
22-10-31T17:01:17.15855458Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"debeed95bef4949487f42a531f238fb279e4d6743e734f2eade3a2424ececba7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-165950_f7f285bbceeae66435f07854fddd011c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-165950","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e7a1e00234ba1ca933146b83693ba6b5ab619fdcb5e23efb9fa200bb8ba27ef4","pid":2191,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e7a1e00234ba1ca933146b83693ba6b5ab619fdcb5e23efb9fa200bb8ba27ef4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e7a1e00234ba1ca933146b83693ba6b5ab619fdcb5e23efb
9fa200bb8ba27ef4/rootfs","created":"2022-10-31T17:00:31.051134048Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"e7a1e00234ba1ca933146b83693ba6b5ab619fdcb5e23efb9fa200bb8ba27ef4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-jljff_e66c31a9-8e36-4914-a086-32ba2b3dc004","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-jljff","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"}]
I1031 17:01:26.220196 123788 cri.go:124] list returned 25 containers
I1031 17:01:26.220215 123788 cri.go:127] container: {ID:08af09ff6dc7343fadd5f527821607e0a139864f2cf045f41ddb8a637dd3684e Status:running}
I1031 17:01:26.220253 123788 cri.go:129] skipping 08af09ff6dc7343fadd5f527821607e0a139864f2cf045f41ddb8a637dd3684e - not in ps
I1031 17:01:26.220265 123788 cri.go:127] container: {ID:08b4cd316b9b6815f404ec4b186454048a3e6989dcb7c0423dfdfc17e82f6489 Status:running}
I1031 17:01:26.220285 123788 cri.go:129] skipping 08b4cd316b9b6815f404ec4b186454048a3e6989dcb7c0423dfdfc17e82f6489 - not in ps
I1031 17:01:26.220298 123788 cri.go:127] container: {ID:0e1033e5cdbc18bbf5d5b9fb465ba7904c14d9cf096e385708e56a44984ea823 Status:running}
I1031 17:01:26.220316 123788 cri.go:129] skipping 0e1033e5cdbc18bbf5d5b9fb465ba7904c14d9cf096e385708e56a44984ea823 - not in ps
I1031 17:01:26.220327 123788 cri.go:127] container: {ID:0f97f1cbba7302aeb3085b591e2b35bc859465b29b1dbeeabec247e6d5bae53f Status:running}
I1031 17:01:26.220336 123788 cri.go:129] skipping 0f97f1cbba7302aeb3085b591e2b35bc859465b29b1dbeeabec247e6d5bae53f - not in ps
I1031 17:01:26.220347 123788 cri.go:127] container: {ID:10c4cf73c3ac649d2cce0512474075cbd3f6123e0f57b48360ef852593f8b322 Status:running}
I1031 17:01:26.220360 123788 cri.go:129] skipping 10c4cf73c3ac649d2cce0512474075cbd3f6123e0f57b48360ef852593f8b322 - not in ps
I1031 17:01:26.220369 123788 cri.go:127] container: {ID:1ba56f1620b20d5793015175f468306b30d9515d63ca39cac54cc35a02a55d95 Status:running}
I1031 17:01:26.220377 123788 cri.go:129] skipping 1ba56f1620b20d5793015175f468306b30d9515d63ca39cac54cc35a02a55d95 - not in ps
I1031 17:01:26.220385 123788 cri.go:127] container: {ID:24e78df27901fc792eb7252319ce689dd890b0de8103efd94a809d5e34ef32b4 Status:running}
I1031 17:01:26.220398 123788 cri.go:129] skipping 24e78df27901fc792eb7252319ce689dd890b0de8103efd94a809d5e34ef32b4 - not in ps
I1031 17:01:26.220409 123788 cri.go:127] container: {ID:4c13d0592c58fca24c43401e993684c5570ddc76b74d030729c3dcc469b40b53 Status:running}
I1031 17:01:26.220422 123788 cri.go:129] skipping 4c13d0592c58fca24c43401e993684c5570ddc76b74d030729c3dcc469b40b53 - not in ps
I1031 17:01:26.220433 123788 cri.go:127] container: {ID:534d5230b843ffe567d7578d4a3512ebb385a7ef30b8d6ee15fe4d4b23effeb2 Status:running}
I1031 17:01:26.220445 123788 cri.go:129] skipping 534d5230b843ffe567d7578d4a3512ebb385a7ef30b8d6ee15fe4d4b23effeb2 - not in ps
I1031 17:01:26.220456 123788 cri.go:127] container: {ID:715100db6ef995efee732d28990297ac96df6f601c86ae90c8ca69f54dc02d49 Status:running}
I1031 17:01:26.220468 123788 cri.go:129] skipping 715100db6ef995efee732d28990297ac96df6f601c86ae90c8ca69f54dc02d49 - not in ps
I1031 17:01:26.220479 123788 cri.go:127] container: {ID:72c31c9f4fc4e7f506dc770bfda502f0386b9d6cacd1ad2e55fe22dc5b40071f Status:running}
I1031 17:01:26.220491 123788 cri.go:129] skipping 72c31c9f4fc4e7f506dc770bfda502f0386b9d6cacd1ad2e55fe22dc5b40071f - not in ps
I1031 17:01:26.220498 123788 cri.go:127] container: {ID:8370f6b2249707160393b54a7c52a463ec41c53aa1abcf098ef1833e2d80e4f1 Status:running}
I1031 17:01:26.220510 123788 cri.go:129] skipping 8370f6b2249707160393b54a7c52a463ec41c53aa1abcf098ef1833e2d80e4f1 - not in ps
I1031 17:01:26.220522 123788 cri.go:127] container: {ID:91216b43bb46df97ee8f7081d129445a14177f38a7784ca329cb279b5be6d0bf Status:running}
I1031 17:01:26.220540 123788 cri.go:129] skipping 91216b43bb46df97ee8f7081d129445a14177f38a7784ca329cb279b5be6d0bf - not in ps
I1031 17:01:26.220551 123788 cri.go:127] container: {ID:92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e Status:running}
I1031 17:01:26.220564 123788 cri.go:133] skipping {92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e running}: state = "running", want "paused"
I1031 17:01:26.220578 123788 cri.go:127] container: {ID:a0e0dbf2607c88f0112cc09da469ea6d91afc209b0b43752a7589ed355a42383 Status:running}
I1031 17:01:26.220590 123788 cri.go:129] skipping a0e0dbf2607c88f0112cc09da469ea6d91afc209b0b43752a7589ed355a42383 - not in ps
I1031 17:01:26.220601 123788 cri.go:127] container: {ID:ac7640ccdb2753e897ebe26202c3f06faba50fd0471aaf7268dea2c9ac6a5383 Status:running}
I1031 17:01:26.220614 123788 cri.go:129] skipping ac7640ccdb2753e897ebe26202c3f06faba50fd0471aaf7268dea2c9ac6a5383 - not in ps
I1031 17:01:26.220625 123788 cri.go:127] container: {ID:ad15e41729d782e1165f600243de9c56f425cdb5db116cf881eb15588717a8a9 Status:running}
I1031 17:01:26.220637 123788 cri.go:129] skipping ad15e41729d782e1165f600243de9c56f425cdb5db116cf881eb15588717a8a9 - not in ps
I1031 17:01:26.220648 123788 cri.go:127] container: {ID:bd161590496d96dfd772253e8fc04aa2ace241cd015a3e030edb9980f0002865 Status:running}
I1031 17:01:26.220660 123788 cri.go:129] skipping bd161590496d96dfd772253e8fc04aa2ace241cd015a3e030edb9980f0002865 - not in ps
I1031 17:01:26.220670 123788 cri.go:127] container: {ID:c1d26ec1a24e08c41b4eed6cd4a281a528dd2a96323f389584c153ebdccd783f Status:running}
I1031 17:01:26.220679 123788 cri.go:129] skipping c1d26ec1a24e08c41b4eed6cd4a281a528dd2a96323f389584c153ebdccd783f - not in ps
I1031 17:01:26.220689 123788 cri.go:127] container: {ID:c762f46164888748a2ca9e5a525e7de747208a14141e10d7830e8dab3c7f2173 Status:running}
I1031 17:01:26.220702 123788 cri.go:129] skipping c762f46164888748a2ca9e5a525e7de747208a14141e10d7830e8dab3c7f2173 - not in ps
I1031 17:01:26.220712 123788 cri.go:127] container: {ID:ca1bd3bcca0419f20c372cc6baa843c8373d756258258a9103334a22b452c16d Status:running}
I1031 17:01:26.220724 123788 cri.go:129] skipping ca1bd3bcca0419f20c372cc6baa843c8373d756258258a9103334a22b452c16d - not in ps
I1031 17:01:26.220735 123788 cri.go:127] container: {ID:d14cbe31893c40cc251f526b6f68b2f65ff3d392117a6bdaf1ae8266373867d3 Status:running}
I1031 17:01:26.220749 123788 cri.go:129] skipping d14cbe31893c40cc251f526b6f68b2f65ff3d392117a6bdaf1ae8266373867d3 - not in ps
I1031 17:01:26.220764 123788 cri.go:127] container: {ID:ddd9b9fed95f4ffb40c4492a0807846ab0d1f6762b0d1b8ddef6804023ccf4d2 Status:running}
I1031 17:01:26.220776 123788 cri.go:129] skipping ddd9b9fed95f4ffb40c4492a0807846ab0d1f6762b0d1b8ddef6804023ccf4d2 - not in ps
I1031 17:01:26.220787 123788 cri.go:127] container: {ID:debeed95bef4949487f42a531f238fb279e4d6743e734f2eade3a2424ececba7 Status:running}
I1031 17:01:26.220800 123788 cri.go:129] skipping debeed95bef4949487f42a531f238fb279e4d6743e734f2eade3a2424ececba7 - not in ps
I1031 17:01:26.220811 123788 cri.go:127] container: {ID:e7a1e00234ba1ca933146b83693ba6b5ab619fdcb5e23efb9fa200bb8ba27ef4 Status:running}
I1031 17:01:26.220823 123788 cri.go:129] skipping e7a1e00234ba1ca933146b83693ba6b5ab619fdcb5e23efb9fa200bb8ba27ef4 - not in ps
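So the pause filter is an intersection: crictl supplies the kube-system container IDs, runc supplies each task's state, and only IDs present in both sets with state "paused" would be kept; here every task is running or absent from the crictl set, so nothing qualifies. A sketch of that query, where jq is an assumed helper rather than a tool this log shows:
# Sketch only: the two commands are the same ones run above.
sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
sudo runc --root /run/containerd/runc/k8s.io list -f json \
  | jq -r '.[] | select(.status == "paused") | .id'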
I1031 17:01:26.220874 123788 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1031 17:01:26.228503 123788 kubeadm.go:411] found existing configuration files, will attempt cluster restart
I1031 17:01:26.228526 123788 kubeadm.go:627] restartCluster start
I1031 17:01:26.228569 123788 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1031 17:01:26.242514 123788 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1031 17:01:26.243313 123788 kubeconfig.go:92] found "test-preload-165950" server: "https://192.168.67.2:8443"
I1031 17:01:26.244383 123788 kapi.go:59] client config for test-preload-165950: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/client.crt", KeyFile:"/home/jenkins/minikube-integration/15232-3650/.minikube/profiles/test-preload-165950/client.key", CAFile:"/home/jenkins/minikube-integration/15232-3650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1782ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1031 17:01:26.245028 123788 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1031 17:01:26.254439 123788 kubeadm.go:594] needs reconfigure: configs differ:
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml 2022-10-31 17:00:07.362490176 +0000
+++ /var/tmp/minikube/kubeadm.yaml.new 2022-10-31 17:01:25.658180104 +0000
@@ -38,7 +38,7 @@
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
-kubernetesVersion: v1.24.4
+kubernetesVersion: v1.24.6
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
-- /stdout --
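The restart decision reduces to that unified diff: only kubernetesVersion changed (v1.24.4 to v1.24.6), which is enough to trigger a reconfigure rather than a clean start. The check, sketched with the paths from the log:
# Sketch only: diff -u exits non-zero when the staged config differs,
# and the staged file then replaces the live one (the cp below also
# appears verbatim further down in the log).
if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
  sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
fi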
I1031 17:01:26.254466 123788 kubeadm.go:1114] stopping kube-system containers ...
I1031 17:01:26.254477 123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I1031 17:01:26.254530 123788 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1031 17:01:26.279826 123788 cri.go:87] found id: "9b493051380ea5f84db2bf6d6b500816b4bfc7d73549a3fc267337671408794c"
I1031 17:01:26.279858 123788 cri.go:87] found id: "9523dbf74df3ff703859059525cf2e837089463bffc76ca75ed4636d64233fbf"
I1031 17:01:26.279865 123788 cri.go:87] found id: "92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e"
I1031 17:01:26.279880 123788 cri.go:87] found id: ""
I1031 17:01:26.279886 123788 cri.go:232] Stopping containers: [9b493051380ea5f84db2bf6d6b500816b4bfc7d73549a3fc267337671408794c 9523dbf74df3ff703859059525cf2e837089463bffc76ca75ed4636d64233fbf 92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e]
I1031 17:01:26.279928 123788 ssh_runner.go:195] Run: which crictl
I1031 17:01:26.283140 123788 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 9b493051380ea5f84db2bf6d6b500816b4bfc7d73549a3fc267337671408794c 9523dbf74df3ff703859059525cf2e837089463bffc76ca75ed4636d64233fbf 92b6c20028aecde8056070fdc9eb1bb6b58669b7a5c0f9fd0e2c615a73d1898e
I1031 17:01:26.348311 123788 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I1031 17:01:26.415283 123788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1031 17:01:26.422710 123788 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5643 Oct 31 17:00 /etc/kubernetes/admin.conf
-rw------- 1 root root 5656 Oct 31 17:00 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2015 Oct 31 17:00 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5600 Oct 31 17:00 /etc/kubernetes/scheduler.conf
I1031 17:01:26.422771 123788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1031 17:01:26.429820 123788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1031 17:01:26.436664 123788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1031 17:01:26.443399 123788 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I1031 17:01:26.443466 123788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1031 17:01:26.450583 123788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1031 17:01:26.457143 123788 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I1031 17:01:26.457191 123788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1031 17:01:26.463634 123788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1031 17:01:26.471032 123788 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
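Any component kubeconfig that no longer points at the shared control-plane endpoint is deleted here so the next phase can regenerate it; admin.conf and kubelet.conf passed the grep, while controller-manager.conf and scheduler.conf did not and were removed. The pattern, sketched:
# Sketch only: endpoint and paths from the grep/rm runs above.
EP="https://control-plane.minikube.internal:8443"
for f in admin kubelet controller-manager scheduler; do
  sudo grep -q "$EP" "/etc/kubernetes/${f}.conf" \
    || sudo rm -f "/etc/kubernetes/${f}.conf"
done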
I1031 17:01:26.471057 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I1031 17:01:26.714848 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I1031 17:01:27.216857 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I1031 17:01:27.525201 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I1031 17:01:27.574451 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
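Rather than a full "kubeadm init", the restart replays five individual phases against the staged config, with PATH pinned to the freshly installed v1.24.6 binaries. Sketched as a loop, phases exactly as run above:
# Sketch only: $phase is left unquoted on purpose so "certs all" splits
# into subcommand plus argument.
for phase in "certs all" "kubeconfig all" "kubelet-start" \
             "control-plane all" "etcd local"; do
  sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" \
    kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
done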
I1031 17:01:27.654849 123788 api_server.go:51] waiting for apiserver process to appear ...
I1031 17:01:27.654955 123788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1031 17:01:27.673572 123788 api_server.go:71] duration metric: took 18.72073ms to wait for apiserver process to appear ...
I1031 17:01:27.673610 123788 api_server.go:87] waiting for apiserver healthz status ...
I1031 17:01:27.673630 123788 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I1031 17:01:27.678700 123788 api_server.go:278] https://192.168.67.2:8443/healthz returned 200:
ok
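The healthz and version probes that follow can be reproduced with curl; a 200 with body "ok" satisfies the health wait, and /version supplies the version compared against the expected v1.24.6. Sketch (-k because the apiserver presents minikube's self-signed certificate):

  curl -sk https://192.168.67.2:8443/healthz   # expect: ok
  curl -sk https://192.168.67.2:8443/version   # JSON; gitVersion is the control plane version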
I1031 17:01:27.685812 123788 api_server.go:140] control plane version: v1.24.4
W1031 17:01:27.685841 123788 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
I1031 17:01:28.187382 123788 api_server.go:140] control plane version: v1.24.4
W1031 17:01:28.187416 123788 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
I1031 17:01:28.687372 123788 api_server.go:140] control plane version: v1.24.4
W1031 17:01:28.687411 123788 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
I1031 17:01:29.187825 123788 api_server.go:140] control plane version: v1.24.4
W1031 17:01:29.187861 123788 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
I1031 17:01:29.687064 123788 api_server.go:140] control plane version: v1.24.4
W1031 17:01:29.687093 123788 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
W1031 17:01:30.187422 123788 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1031 17:01:30.686425 123788 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1031 17:01:31.186366 123788 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
I1031 17:01:35.664099 123788 api_server.go:140] control plane version: v1.24.6
I1031 17:01:35.664206 123788 api_server.go:130] duration metric: took 7.990587678s to wait for apiserver health ...
I1031 17:01:35.664232 123788 cni.go:95] Creating CNI manager for ""
I1031 17:01:35.664274 123788 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I1031 17:01:35.666396 123788 out.go:177] * Configuring CNI (Container Networking Interface) ...
I1031 17:01:35.668255 123788 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I1031 17:01:35.857942 123788 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.6/kubectl ...
I1031 17:01:35.857986 123788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I1031 17:01:35.965517 123788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I1031 17:01:37.314933 123788 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.349353719s)
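Applying the CNI manifest is an ordinary kubectl apply against the in-cluster kubeconfig; by hand (sketch, same paths as the log):

  sudo /var/lib/minikube/binaries/v1.24.6/kubectl \
    --kubeconfig=/var/lib/minikube/kubeconfig \
    apply -f /var/tmp/minikube/cni.yaml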
I1031 17:01:37.314969 123788 system_pods.go:43] waiting for kube-system pods to appear ...
I1031 17:01:37.323258 123788 system_pods.go:59] 8 kube-system pods found
I1031 17:01:37.323308 123788 system_pods.go:61] "coredns-6d4b75cb6d-8wsrc" [8e76d465-ae9a-4121-b7ed-1ef94dd20b7e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1031 17:01:37.323319 123788 system_pods.go:61] "etcd-test-preload-165950" [1738672d-0339-423c-9013-d39e8cbb16c7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1031 17:01:37.323333 123788 system_pods.go:61] "kindnet-jljff" [e66c31a9-8e36-4914-a086-32ba2b3dc004] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I1031 17:01:37.323348 123788 system_pods.go:61] "kube-apiserver-test-preload-165950" [a505e0cf-4d56-47bf-865b-6052277ce195] Pending
I1031 17:01:37.323358 123788 system_pods.go:61] "kube-controller-manager-test-preload-165950" [ebf46104-24d9-427e-b5af-643a80e0aceb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1031 17:01:37.323374 123788 system_pods.go:61] "kube-proxy-54b5q" [0ff95637-a367-440b-918f-495391f2f1cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1031 17:01:37.323384 123788 system_pods.go:61] "kube-scheduler-test-preload-165950" [5a7cd673-4c3a-4123-9be5-5f44a196a478] Pending
I1031 17:01:37.323397 123788 system_pods.go:61] "storage-provisioner" [5031015c-081e-49e2-8d46-09fd879a755c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1031 17:01:37.323409 123788 system_pods.go:74] duration metric: took 8.433081ms to wait for pod list to return data ...
I1031 17:01:37.323422 123788 node_conditions.go:102] verifying NodePressure condition ...
I1031 17:01:37.326311 123788 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I1031 17:01:37.326342 123788 node_conditions.go:123] node cpu capacity is 8
I1031 17:01:37.326356 123788 node_conditions.go:105] duration metric: took 2.929267ms to run NodePressure ...
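The pod listing and NodePressure verification above map onto plain kubectl queries; to inspect the same state interactively (sketch, node name as in this profile):

  kubectl -n kube-system get pods -o wide
  kubectl describe node test-preload-165950    # Conditions: shows Memory/Disk/PIDPressure
  kubectl get node test-preload-165950 -o jsonpath='{.status.capacity}'   # cpu, ephemeral-storage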
I1031 17:01:37.326375 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I1031 17:01:37.573644 123788 kubeadm.go:763] waiting for restarted kubelet to initialise ...
I1031 17:01:37.578158 123788 kubeadm.go:778] kubelet initialised
I1031 17:01:37.578189 123788 kubeadm.go:779] duration metric: took 4.510409ms waiting for restarted kubelet to initialise ...
I1031 17:01:37.578198 123788 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1031 17:01:37.583642 123788 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-8wsrc" in "kube-system" namespace to be "Ready" ...
I1031 17:01:39.594948 123788 pod_ready.go:102] pod "coredns-6d4b75cb6d-8wsrc" in "kube-system" namespace has status "Ready":"False"
I1031 17:01:42.094075 123788 pod_ready.go:102] pod "coredns-6d4b75cb6d-8wsrc" in "kube-system" namespace has status "Ready":"False"
I1031 17:01:43.095366 123788 pod_ready.go:92] pod "coredns-6d4b75cb6d-8wsrc" in "kube-system" namespace has status "Ready":"True"
I1031 17:01:43.095404 123788 pod_ready.go:81] duration metric: took 5.511730023s waiting for pod "coredns-6d4b75cb6d-8wsrc" in "kube-system" namespace to be "Ready" ...
I1031 17:01:43.095417 123788 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-165950" in "kube-system" namespace to be "Ready" ...
I1031 17:01:45.107196 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:01:47.606767 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:01:50.106591 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:01:52.606128 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:01:55.106948 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:01:57.606675 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:01:59.606942 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:01.607143 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:03.607189 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:06.106997 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:08.606022 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:10.607066 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:12.607191 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:15.106122 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:17.106164 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:19.106356 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:21.106711 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:23.606999 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:26.106549 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:28.107170 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:30.606839 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:33.106308 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:35.606836 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:38.106617 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:40.107031 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:42.606997 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:45.105907 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:47.106139 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:49.606661 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:51.607461 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:54.107427 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:56.607579 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:02:59.106638 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:01.106850 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:03.606788 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:05.606874 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:08.106321 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:10.106538 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:12.106959 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:14.607205 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:16.607305 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:19.105988 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:21.106170 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:23.107105 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:25.607263 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:28.106356 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:30.107148 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:32.606490 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:35.105741 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:37.106647 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:39.106715 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:41.606595 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:44.106322 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:46.106599 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:48.106645 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:50.607046 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:53.106597 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:55.607036 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:03:58.106177 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:00.106478 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:02.106672 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:04.106777 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:06.606029 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:08.606391 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:10.606890 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:13.105929 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:15.106871 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:17.605837 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:19.606273 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:21.606690 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:23.608947 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:26.106036 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:28.106069 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:30.106922 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:32.606315 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:34.606779 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:36.607034 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:39.106139 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:41.106298 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:43.106379 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:45.606574 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:47.606629 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:50.106351 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:52.606744 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:55.106115 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:04:57.606837 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:00.107089 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:02.606977 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:05.106235 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:07.106494 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:09.606180 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:11.607064 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:14.106300 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:16.106339 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:18.605987 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:20.606927 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:23.106287 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:25.606564 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:28.106222 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:30.106425 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:32.607544 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:35.105790 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:37.106524 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:39.106668 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:41.606128 123788 pod_ready.go:102] pod "etcd-test-preload-165950" in "kube-system" namespace has status "Ready":"False"
I1031 17:05:43.100897 123788 pod_ready.go:81] duration metric: took 4m0.005465717s waiting for pod "etcd-test-preload-165950" in "kube-system" namespace to be "Ready" ...
E1031 17:05:43.100926 123788 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "etcd-test-preload-165950" in "kube-system" namespace to be "Ready" (will not retry!)
I1031 17:05:43.100947 123788 pod_ready.go:38] duration metric: took 4m5.522739337s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1031 17:05:43.100986 123788 kubeadm.go:631] restartCluster took 4m16.872448037s
W1031 17:05:43.101155 123788 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
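The four-minute block above is minikube re-polling the etcd pod's Ready condition roughly every 2.5s until the pod_ready timeout fires. The same wait, plus the natural follow-up when it times out, done directly (sketch):

  kubectl -n kube-system wait pod/etcd-test-preload-165950 --for=condition=Ready --timeout=4m
  # on timeout, ask why the container never became ready:
  kubectl -n kube-system describe pod etcd-test-preload-165950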
I1031 17:05:43.101190 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1031 17:05:44.844963 123788 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.743753735s)
I1031 17:05:44.845025 123788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1031 17:05:44.855523 123788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1031 17:05:44.862648 123788 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I1031 17:05:44.862707 123788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1031 17:05:44.870144 123788 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1031 17:05:44.870199 123788 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1031 17:05:44.907996 123788 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
I1031 17:05:44.908047 123788 kubeadm.go:317] [preflight] Running pre-flight checks
I1031 17:05:44.935802 123788 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
I1031 17:05:44.935928 123788 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
I1031 17:05:44.935973 123788 kubeadm.go:317] OS: Linux
I1031 17:05:44.936020 123788 kubeadm.go:317] CGROUPS_CPU: enabled
I1031 17:05:44.936060 123788 kubeadm.go:317] CGROUPS_CPUACCT: enabled
I1031 17:05:44.936139 123788 kubeadm.go:317] CGROUPS_CPUSET: enabled
I1031 17:05:44.936189 123788 kubeadm.go:317] CGROUPS_DEVICES: enabled
I1031 17:05:44.936256 123788 kubeadm.go:317] CGROUPS_FREEZER: enabled
I1031 17:05:44.936353 123788 kubeadm.go:317] CGROUPS_MEMORY: enabled
I1031 17:05:44.936421 123788 kubeadm.go:317] CGROUPS_PIDS: enabled
I1031 17:05:44.936478 123788 kubeadm.go:317] CGROUPS_HUGETLB: enabled
I1031 17:05:44.936542 123788 kubeadm.go:317] CGROUPS_BLKIO: enabled
I1031 17:05:45.016629 123788 kubeadm.go:317] W1031 17:05:44.903005 6621 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
I1031 17:05:45.016840 123788 kubeadm.go:317] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
I1031 17:05:45.016930 123788 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1031 17:05:45.016992 123788 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
I1031 17:05:45.017027 123788 kubeadm.go:317] [ERROR Port-2379]: Port 2379 is in use
I1031 17:05:45.017070 123788 kubeadm.go:317] [ERROR Port-2380]: Port 2380 is in use
I1031 17:05:45.017152 123788 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
I1031 17:05:45.017213 123788 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
W1031 17:05:45.017401 123788 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1021-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W1031 17:05:44.903005 6621 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
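Ports 2379 and 2380 are etcd's client and peer ports, so these preflight errors mean the old etcd survived the kubeadm reset above. To identify the listener before retrying (sketch; needs iproute2 or lsof on the node):

  sudo ss -ltnp '( sport = :2379 or sport = :2380 )'
  # or: sudo lsof -i :2379 -i :2380
  # kubeadm can also skip the checks, at the cost of colliding with the leftover etcd:
  #   kubeadm init ... --ignore-preflight-errors=Port-2379,Port-2380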
I1031 17:05:45.017440 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1031 17:05:45.355913 123788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1031 17:05:45.365437 123788 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I1031 17:05:45.365484 123788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1031 17:05:45.372598 123788 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1031 17:05:45.372638 123788 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1031 17:05:45.410978 123788 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
I1031 17:05:45.411059 123788 kubeadm.go:317] [preflight] Running pre-flight checks
I1031 17:05:45.437866 123788 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
I1031 17:05:45.437950 123788 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
I1031 17:05:45.438007 123788 kubeadm.go:317] OS: Linux
I1031 17:05:45.438080 123788 kubeadm.go:317] CGROUPS_CPU: enabled
I1031 17:05:45.438188 123788 kubeadm.go:317] CGROUPS_CPUACCT: enabled
I1031 17:05:45.438265 123788 kubeadm.go:317] CGROUPS_CPUSET: enabled
I1031 17:05:45.438327 123788 kubeadm.go:317] CGROUPS_DEVICES: enabled
I1031 17:05:45.438408 123788 kubeadm.go:317] CGROUPS_FREEZER: enabled
I1031 17:05:45.438474 123788 kubeadm.go:317] CGROUPS_MEMORY: enabled
I1031 17:05:45.438542 123788 kubeadm.go:317] CGROUPS_PIDS: enabled
I1031 17:05:45.438609 123788 kubeadm.go:317] CGROUPS_HUGETLB: enabled
I1031 17:05:45.438681 123788 kubeadm.go:317] CGROUPS_BLKIO: enabled
I1031 17:05:45.506713 123788 kubeadm.go:317] W1031 17:05:45.405956 6886 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
I1031 17:05:45.506996 123788 kubeadm.go:317] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
I1031 17:05:45.507114 123788 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1031 17:05:45.507178 123788 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
I1031 17:05:45.507221 123788 kubeadm.go:317] [ERROR Port-2379]: Port 2379 is in use
I1031 17:05:45.507264 123788 kubeadm.go:317] [ERROR Port-2380]: Port 2380 is in use
I1031 17:05:45.507371 123788 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
I1031 17:05:45.507485 123788 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
I1031 17:05:45.507500 123788 kubeadm.go:398] StartCluster complete in 4m19.348589229s
I1031 17:05:45.507531 123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1031 17:05:45.507575 123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1031 17:05:45.530536 123788 cri.go:87] found id: ""
I1031 17:05:45.530565 123788 logs.go:274] 0 containers: []
W1031 17:05:45.530573 123788 logs.go:276] No container was found matching "kube-apiserver"
I1031 17:05:45.530579 123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1031 17:05:45.530626 123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1031 17:05:45.554752 123788 cri.go:87] found id: ""
I1031 17:05:45.554777 123788 logs.go:274] 0 containers: []
W1031 17:05:45.554783 123788 logs.go:276] No container was found matching "etcd"
I1031 17:05:45.554789 123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1031 17:05:45.554831 123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1031 17:05:45.578518 123788 cri.go:87] found id: ""
I1031 17:05:45.578542 123788 logs.go:274] 0 containers: []
W1031 17:05:45.578548 123788 logs.go:276] No container was found matching "coredns"
I1031 17:05:45.578554 123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1031 17:05:45.578603 123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1031 17:05:45.602333 123788 cri.go:87] found id: ""
I1031 17:05:45.602356 123788 logs.go:274] 0 containers: []
W1031 17:05:45.602363 123788 logs.go:276] No container was found matching "kube-scheduler"
I1031 17:05:45.602368 123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1031 17:05:45.602408 123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1031 17:05:45.625824 123788 cri.go:87] found id: ""
I1031 17:05:45.625847 123788 logs.go:274] 0 containers: []
W1031 17:05:45.625853 123788 logs.go:276] No container was found matching "kube-proxy"
I1031 17:05:45.625859 123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1031 17:05:45.625920 123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1031 17:05:45.649488 123788 cri.go:87] found id: ""
I1031 17:05:45.649513 123788 logs.go:274] 0 containers: []
W1031 17:05:45.649519 123788 logs.go:276] No container was found matching "kubernetes-dashboard"
I1031 17:05:45.649526 123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1031 17:05:45.649574 123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1031 17:05:45.672881 123788 cri.go:87] found id: ""
I1031 17:05:45.672907 123788 logs.go:274] 0 containers: []
W1031 17:05:45.672914 123788 logs.go:276] No container was found matching "storage-provisioner"
I1031 17:05:45.672920 123788 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1031 17:05:45.672965 123788 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1031 17:05:45.695705 123788 cri.go:87] found id: ""
I1031 17:05:45.695729 123788 logs.go:274] 0 containers: []
W1031 17:05:45.695736 123788 logs.go:276] No container was found matching "kube-controller-manager"
I1031 17:05:45.695744 123788 logs.go:123] Gathering logs for describe nodes ...
I1031 17:05:45.695756 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1031 17:05:45.827779 123788 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I1031 17:05:45.827803 123788 logs.go:123] Gathering logs for containerd ...
I1031 17:05:45.827814 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1031 17:05:45.882431 123788 logs.go:123] Gathering logs for container status ...
I1031 17:05:45.882482 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1031 17:05:45.908973 123788 logs.go:123] Gathering logs for kubelet ...
I1031 17:05:45.909003 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1031 17:05:45.967611 123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661461 4266 projected.go:192] Error preparing data for projected volume kube-api-access-8mn6l for pod kube-system/storage-provisioner: failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
W1031 17:05:45.968060 123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661580 4266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5031015c-081e-49e2-8d46-09fd879a755c-kube-api-access-8mn6l podName:5031015c-081e-49e2-8d46-09fd879a755c nodeName:}" failed. No retries permitted until 2022-10-31 17:01:36.661550988 +0000 UTC m=+9.136031253 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-8mn6l" (UniqueName: "kubernetes.io/projected/5031015c-081e-49e2-8d46-09fd879a755c-kube-api-access-8mn6l") pod "storage-provisioner" (UID: "5031015c-081e-49e2-8d46-09fd879a755c") : failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
W1031 17:05:45.968229 123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: W1031 17:01:35.661699 4266 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
W1031 17:05:45.968390 123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661728 4266 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
W1031 17:05:45.968572 123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661819 4266 projected.go:192] Error preparing data for projected volume kube-api-access-d8dpf for pod kube-system/coredns-6d4b75cb6d-8wsrc: failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
W1031 17:05:45.968978 123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661876 4266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e76d465-ae9a-4121-b7ed-1ef94dd20b7e-kube-api-access-d8dpf podName:8e76d465-ae9a-4121-b7ed-1ef94dd20b7e nodeName:}" failed. No retries permitted until 2022-10-31 17:01:36.661860993 +0000 UTC m=+9.136341257 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-d8dpf" (UniqueName: "kubernetes.io/projected/8e76d465-ae9a-4121-b7ed-1ef94dd20b7e-kube-api-access-d8dpf") pod "coredns-6d4b75cb6d-8wsrc" (UID: "8e76d465-ae9a-4121-b7ed-1ef94dd20b7e") : failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
W1031 17:05:45.969129 123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: W1031 17:01:35.662000 4266 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
W1031 17:05:45.969296 123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.662020 4266 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
W1031 17:05:45.969441 123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: W1031 17:01:35.662225 4266 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
W1031 17:05:45.969602 123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.662242 4266 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
W1031 17:05:45.969778 123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.662330 4266 projected.go:192] Error preparing data for projected volume kube-api-access-5m45q for pod kube-system/kindnet-jljff: failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
W1031 17:05:45.970177 123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.662376 4266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e66c31a9-8e36-4914-a086-32ba2b3dc004-kube-api-access-5m45q podName:e66c31a9-8e36-4914-a086-32ba2b3dc004 nodeName:}" failed. No retries permitted until 2022-10-31 17:01:36.662359447 +0000 UTC m=+9.136839704 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-5m45q" (UniqueName: "kubernetes.io/projected/e66c31a9-8e36-4914-a086-32ba2b3dc004-kube-api-access-5m45q") pod "kindnet-jljff" (UID: "e66c31a9-8e36-4914-a086-32ba2b3dc004") : failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
W1031 17:05:45.970359 123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.662434 4266 projected.go:192] Error preparing data for projected volume kube-api-access-r84wv for pod kube-system/kube-proxy-54b5q: failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
W1031 17:05:45.970760 123788 logs.go:138] Found kubelet problem: Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.662472 4266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ff95637-a367-440b-918f-495391f2f1cf-kube-api-access-r84wv podName:0ff95637-a367-440b-918f-495391f2f1cf nodeName:}" failed. No retries permitted until 2022-10-31 17:01:36.662457708 +0000 UTC m=+9.136937970 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-r84wv" (UniqueName: "kubernetes.io/projected/0ff95637-a367-440b-918f-495391f2f1cf-kube-api-access-r84wv") pod "kube-proxy-54b5q" (UID: "0ff95637-a367-440b-918f-495391f2f1cf") : failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
I1031 17:05:45.991682 123788 logs.go:123] Gathering logs for dmesg ...
I1031 17:05:45.991709 123788 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
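The log-gathering pass above shells out once per source; the same bundle can be pulled manually on the node (sketch, flags approximated from the Run lines):

  sudo journalctl -u kubelet -n 400 --no-pager
  sudo journalctl -u containerd -n 400 --no-pager
  sudo crictl ps -a
  sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400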
W1031 17:05:46.006370 123788 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1021-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W1031 17:05:45.405956 6886 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W1031 17:05:46.006406 123788 out.go:239] *
W1031 17:05:46.006520 123788 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1021-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W1031 17:05:45.405956 6886 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W1031 17:05:46.006538 123788 out.go:239] *
W1031 17:05:46.007299 123788 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1031 17:05:46.010794 123788 out.go:177] X Problems detected in kubelet:
I1031 17:05:46.012324 123788 out.go:177] Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661461 4266 projected.go:192] Error preparing data for projected volume kube-api-access-8mn6l for pod kube-system/storage-provisioner: failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
I1031 17:05:46.013853 123788 out.go:177] Oct 31 17:01:35 test-preload-165950 kubelet[4266]: E1031 17:01:35.661580 4266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5031015c-081e-49e2-8d46-09fd879a755c-kube-api-access-8mn6l podName:5031015c-081e-49e2-8d46-09fd879a755c nodeName:}" failed. No retries permitted until 2022-10-31 17:01:36.661550988 +0000 UTC m=+9.136031253 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-8mn6l" (UniqueName: "kubernetes.io/projected/5031015c-081e-49e2-8d46-09fd879a755c-kube-api-access-8mn6l") pod "storage-provisioner" (UID: "5031015c-081e-49e2-8d46-09fd879a755c") : failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-165950" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
I1031 17:05:46.015648 123788 out.go:177] Oct 31 17:01:35 test-preload-165950 kubelet[4266]: W1031 17:01:35.661699 4266 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-165950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-165950' and this object
I1031 17:05:46.017937 123788 out.go:177]
W1031 17:05:46.019427 123788 out.go:239] X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1021-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W1031 17:05:45.405956 6886 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W1031 17:05:46.019527 123788 out.go:239] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
W1031 17:05:46.019585 123788 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/5484
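The GUEST_PORT_IN_USE failure above comes from kubeadm's Port-2379/Port-2380 preflight checks: etcd's client and peer ports are still bound, here by the etcd container left over from the earlier v1.24.4 start. A minimal Go sketch of the same bind-probe idea (the port list is illustrative; this is not kubeadm's actual code):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Try to bind each port; a bind error means some other process
	// (e.g. a still-running etcd) already holds it.
	for _, port := range []string{"2379", "2380", "10250"} {
		ln, err := net.Listen("tcp", ":"+port)
		if err != nil {
			fmt.Printf("port %s is in use: %v\n", port, err)
			continue
		}
		ln.Close()
		fmt.Printf("port %s is free\n", port)
	}
}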
I1031 17:05:46.021064 123788 out.go:177]
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
*
* ==> containerd <==
* -- Logs begin at Mon 2022-10-31 16:59:52 UTC, end at Mon 2022-10-31 17:05:47 UTC. --
Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.151462628Z" level=error msg="StopPodSandbox for \"\\\"Using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"\\\"Using\": not found"
Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.167398259Z" level=info msg="StopPodSandbox for \"this\""
Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.167445136Z" level=error msg="StopPodSandbox for \"this\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"this\": not found"
Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.184330546Z" level=info msg="StopPodSandbox for \"endpoint\""
Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.184383668Z" level=error msg="StopPodSandbox for \"endpoint\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint\": not found"
Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.200382110Z" level=info msg="StopPodSandbox for \"is\""
Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.200443041Z" level=error msg="StopPodSandbox for \"is\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"is\": not found"
Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.216361944Z" level=info msg="StopPodSandbox for \"deprecated,\""
Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.216425258Z" level=error msg="StopPodSandbox for \"deprecated,\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"deprecated,\": not found"
Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.234264674Z" level=info msg="StopPodSandbox for \"please\""
Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.234319247Z" level=error msg="StopPodSandbox for \"please\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"please\": not found"
Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.250917604Z" level=info msg="StopPodSandbox for \"consider\""
Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.250966395Z" level=error msg="StopPodSandbox for \"consider\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"consider\": not found"
Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.267354061Z" level=info msg="StopPodSandbox for \"using\""
Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.267406337Z" level=error msg="StopPodSandbox for \"using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"using\": not found"
Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.284043412Z" level=info msg="StopPodSandbox for \"full\""
Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.284110906Z" level=error msg="StopPodSandbox for \"full\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"full\": not found"
Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.300567351Z" level=info msg="StopPodSandbox for \"URL\""
Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.300622686Z" level=error msg="StopPodSandbox for \"URL\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL\": not found"
Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.316986446Z" level=info msg="StopPodSandbox for \"format\\\"\""
Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.317046155Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.333896652Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.333945909Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.351394870Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
Oct 31 17:05:45 test-preload-165950 containerd[3026]: time="2022-10-31T17:05:45.351451080Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
*
* ==> describe nodes <==
*
* ==> dmesg <==
* [ +0.008726] FS-Cache: N-key=[8] '81a00f0200000000'
[Oct31 16:44] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[Oct31 16:55] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-690308a47517
[ +0.000008] ll header: 00000000: 02 42 c2 f8 41 7a 02 42 c0 a8 3a 02 08 00
[ +1.003479] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-690308a47517
[ +0.000006] ll header: 00000000: 02 42 c2 f8 41 7a 02 42 c0 a8 3a 02 08 00
[ +2.015780] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-690308a47517
[ +0.000006] ll header: 00000000: 02 42 c2 f8 41 7a 02 42 c0 a8 3a 02 08 00
[ +4.127615] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-690308a47517
[ +0.000034] ll header: 00000000: 02 42 c2 f8 41 7a 02 42 c0 a8 3a 02 08 00
[ +8.191156] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-690308a47517
[ +0.000047] ll header: 00000000: 02 42 c2 f8 41 7a 02 42 c0 a8 3a 02 08 00
[Oct31 16:58] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-690308a47517
[ +0.000006] ll header: 00000000: 02 42 c2 f8 41 7a 02 42 c0 a8 3a 02 08 00
[ +1.026086] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-690308a47517
[ +0.000006] ll header: 00000000: 02 42 c2 f8 41 7a 02 42 c0 a8 3a 02 08 00
[ +2.015755] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-690308a47517
[ +0.000005] ll header: 00000000: 02 42 c2 f8 41 7a 02 42 c0 a8 3a 02 08 00
[ +4.163565] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-690308a47517
[ +0.000006] ll header: 00000000: 02 42 c2 f8 41 7a 02 42 c0 a8 3a 02 08 00
[ +8.187227] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-690308a47517
[ +0.000006] ll header: 00000000: 02 42 c2 f8 41 7a 02 42 c0 a8 3a 02 08 00
[Oct31 17:01] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000732] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.012252] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
*
* ==> kernel <==
* 17:05:47 up 48 min, 0 users, load average: 0.31, 0.50, 0.66
Linux test-preload-165950 5.15.0-1021-gcp #28~20.04.1-Ubuntu SMP Mon Oct 17 11:37:54 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.5 LTS"
*
* ==> kubelet <==
* -- Logs begin at Mon 2022-10-31 16:59:52 UTC, end at Mon 2022-10-31 17:05:47 UTC. --
Oct 31 17:04:11 test-preload-165950 kubelet[4266]: E1031 17:04:11.871643 4266 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=etcd pod=etcd-test-preload-165950_kube-system(8a2a3eb7a75eb7f169392f7d77b36d78)\"" pod="kube-system/etcd-test-preload-165950" podUID=8a2a3eb7a75eb7f169392f7d77b36d78
Oct 31 17:04:24 test-preload-165950 kubelet[4266]: I1031 17:04:24.871748 4266 scope.go:110] "RemoveContainer" containerID="b3690aac287e29d3bf725c8f480fcc9f2dc84bd79eb1fca05505086a658aa453"
Oct 31 17:04:24 test-preload-165950 kubelet[4266]: E1031 17:04:24.872128 4266 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=etcd pod=etcd-test-preload-165950_kube-system(8a2a3eb7a75eb7f169392f7d77b36d78)\"" pod="kube-system/etcd-test-preload-165950" podUID=8a2a3eb7a75eb7f169392f7d77b36d78
Oct 31 17:04:36 test-preload-165950 kubelet[4266]: I1031 17:04:36.870972 4266 scope.go:110] "RemoveContainer" containerID="b3690aac287e29d3bf725c8f480fcc9f2dc84bd79eb1fca05505086a658aa453"
Oct 31 17:04:37 test-preload-165950 kubelet[4266]: I1031 17:04:37.350432 4266 scope.go:110] "RemoveContainer" containerID="b3690aac287e29d3bf725c8f480fcc9f2dc84bd79eb1fca05505086a658aa453"
Oct 31 17:04:37 test-preload-165950 kubelet[4266]: I1031 17:04:37.350765 4266 scope.go:110] "RemoveContainer" containerID="3e435f7522dea5e17f27b011477946a132e4ea7b5e4c44a37b08728aeaf01cee"
Oct 31 17:04:37 test-preload-165950 kubelet[4266]: E1031 17:04:37.351229 4266 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-165950_kube-system(8a2a3eb7a75eb7f169392f7d77b36d78)\"" pod="kube-system/etcd-test-preload-165950" podUID=8a2a3eb7a75eb7f169392f7d77b36d78
Oct 31 17:04:42 test-preload-165950 kubelet[4266]: I1031 17:04:42.948654 4266 scope.go:110] "RemoveContainer" containerID="3e435f7522dea5e17f27b011477946a132e4ea7b5e4c44a37b08728aeaf01cee"
Oct 31 17:04:42 test-preload-165950 kubelet[4266]: E1031 17:04:42.949066 4266 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-165950_kube-system(8a2a3eb7a75eb7f169392f7d77b36d78)\"" pod="kube-system/etcd-test-preload-165950" podUID=8a2a3eb7a75eb7f169392f7d77b36d78
Oct 31 17:04:46 test-preload-165950 kubelet[4266]: I1031 17:04:46.648249 4266 scope.go:110] "RemoveContainer" containerID="3e435f7522dea5e17f27b011477946a132e4ea7b5e4c44a37b08728aeaf01cee"
Oct 31 17:04:46 test-preload-165950 kubelet[4266]: E1031 17:04:46.648648 4266 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-165950_kube-system(8a2a3eb7a75eb7f169392f7d77b36d78)\"" pod="kube-system/etcd-test-preload-165950" podUID=8a2a3eb7a75eb7f169392f7d77b36d78
Oct 31 17:04:47 test-preload-165950 kubelet[4266]: I1031 17:04:47.371699 4266 scope.go:110] "RemoveContainer" containerID="3e435f7522dea5e17f27b011477946a132e4ea7b5e4c44a37b08728aeaf01cee"
Oct 31 17:04:47 test-preload-165950 kubelet[4266]: E1031 17:04:47.372027 4266 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-165950_kube-system(8a2a3eb7a75eb7f169392f7d77b36d78)\"" pod="kube-system/etcd-test-preload-165950" podUID=8a2a3eb7a75eb7f169392f7d77b36d78
Oct 31 17:05:00 test-preload-165950 kubelet[4266]: I1031 17:05:00.871106 4266 scope.go:110] "RemoveContainer" containerID="3e435f7522dea5e17f27b011477946a132e4ea7b5e4c44a37b08728aeaf01cee"
Oct 31 17:05:00 test-preload-165950 kubelet[4266]: E1031 17:05:00.871519 4266 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-165950_kube-system(8a2a3eb7a75eb7f169392f7d77b36d78)\"" pod="kube-system/etcd-test-preload-165950" podUID=8a2a3eb7a75eb7f169392f7d77b36d78
Oct 31 17:05:11 test-preload-165950 kubelet[4266]: I1031 17:05:11.871811 4266 scope.go:110] "RemoveContainer" containerID="3e435f7522dea5e17f27b011477946a132e4ea7b5e4c44a37b08728aeaf01cee"
Oct 31 17:05:11 test-preload-165950 kubelet[4266]: E1031 17:05:11.872226 4266 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-165950_kube-system(8a2a3eb7a75eb7f169392f7d77b36d78)\"" pod="kube-system/etcd-test-preload-165950" podUID=8a2a3eb7a75eb7f169392f7d77b36d78
Oct 31 17:05:26 test-preload-165950 kubelet[4266]: I1031 17:05:26.871149 4266 scope.go:110] "RemoveContainer" containerID="3e435f7522dea5e17f27b011477946a132e4ea7b5e4c44a37b08728aeaf01cee"
Oct 31 17:05:26 test-preload-165950 kubelet[4266]: E1031 17:05:26.871524 4266 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-165950_kube-system(8a2a3eb7a75eb7f169392f7d77b36d78)\"" pod="kube-system/etcd-test-preload-165950" podUID=8a2a3eb7a75eb7f169392f7d77b36d78
Oct 31 17:05:37 test-preload-165950 kubelet[4266]: I1031 17:05:37.871828 4266 scope.go:110] "RemoveContainer" containerID="3e435f7522dea5e17f27b011477946a132e4ea7b5e4c44a37b08728aeaf01cee"
Oct 31 17:05:37 test-preload-165950 kubelet[4266]: E1031 17:05:37.872202 4266 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-165950_kube-system(8a2a3eb7a75eb7f169392f7d77b36d78)\"" pod="kube-system/etcd-test-preload-165950" podUID=8a2a3eb7a75eb7f169392f7d77b36d78
Oct 31 17:05:43 test-preload-165950 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
Oct 31 17:05:43 test-preload-165950 kubelet[4266]: I1031 17:05:43.206951 4266 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
Oct 31 17:05:43 test-preload-165950 systemd[1]: kubelet.service: Succeeded.
Oct 31 17:05:43 test-preload-165950 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
-- /stdout --
** stderr **
E1031 17:05:47.096749 128573 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
! unable to fetch logs for: describe nodes
** /stderr **
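The back-off durations in the kubelet log above (1m20s, then 2m40s) follow kubelet's crash-loop policy for failed containers: the delay starts at 10s and doubles on each restart, capped at 5m. A short Go sketch of that progression:

package main

import (
	"fmt"
	"time"
)

func main() {
	backoff := 10 * time.Second        // kubelet's initial container back-off
	const maxBackoff = 5 * time.Minute // and its cap
	for restart := 1; restart <= 7; restart++ {
		// Restarts 4 and 5 print 1m20s and 2m40s, matching the log.
		fmt.Printf("restart %d: back-off %s\n", restart, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}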
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-165950 -n test-preload-165950
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-165950 -n test-preload-165950: exit status 2 (351.96652ms)
-- stdout --
Stopped
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "test-preload-165950" apiserver is not running, skipping kubectl commands (state="Stopped")
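The --format={{.APIServer}} flag used above is evaluated as a Go text/template against minikube's status, which is why the run prints just "Stopped". A minimal sketch of that mechanism; the Status struct here is an illustrative stand-in, not minikube's actual type:

package main

import (
	"os"
	"text/template"
)

// Status stands in for the fields the status template can reference.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// Parse the flag value as a template and execute it against the status.
	t := template.Must(template.New("status").Parse("{{.APIServer}}"))
	// Prints "Stopped", matching the run above.
	t.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Configured"})
}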
helpers_test.go:175: Cleaning up "test-preload-165950" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-linux-amd64 delete -p test-preload-165950
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-165950: (2.115498231s)
--- FAIL: TestPreload (359.48s)