=== RUN TestPreload
preload_test.go:44: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-205820 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-205820 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4: (53.962390202s)
preload_test.go:57: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-205820 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-205820 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (1.769666125s)
preload_test.go:67: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-205820 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.6
E0108 21:00:15.379217 10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/ingress-addon-legacy-204344/client.crt: no such file or directory
E0108 21:00:56.112316 10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/addons-202819/client.crt: no such file or directory
E0108 21:02:57.125661 10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
E0108 21:04:20.168298 10372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/functional-204106/client.crt: no such file or directory
preload_test.go:67: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p test-preload-205820 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.6: exit status 81 (5m5.081225325s)
-- stdout --
* [test-preload-205820] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=15565
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
* Using the docker driver based on existing profile
* Starting control plane node test-preload-205820 in cluster test-preload-205820
* Pulling base image ...
* Downloading Kubernetes v1.24.6 preload ...
* Updating the running docker "test-preload-205820" container ...
* Preparing Kubernetes v1.24.6 on containerd 1.6.10 ...
* Configuring CNI (Container Networking Interface) ...
X Problems detected in kubelet:
Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.937893 4359 projected.go:192] Error preparing data for projected volume kube-api-access-svv2t for pod kube-system/kube-proxy-wmrz2: failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.937978 4359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35e9935b-759b-4c18-9d0b-2c0daaab9a1e-kube-api-access-svv2t podName:35e9935b-759b-4c18-9d0b-2c0daaab9a1e nodeName:}" failed. No retries permitted until 2023-01-08 21:00:11.937956077 +0000 UTC m=+9.792765068 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-svv2t" (UniqueName: "kubernetes.io/projected/35e9935b-759b-4c18-9d0b-2c0daaab9a1e-kube-api-access-svv2t") pod "kube-proxy-wmrz2" (UID: "35e9935b-759b-4c18-9d0b-2c0daaab9a1e") : failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
Jan 08 21:00:10 test-preload-205820 kubelet[4359]: W0108 21:00:09.938038 4359 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
-- /stdout --
** stderr **
I0108 20:59:15.922988 124694 out.go:296] Setting OutFile to fd 1 ...
I0108 20:59:15.923190 124694 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:59:15.923199 124694 out.go:309] Setting ErrFile to fd 2...
I0108 20:59:15.923206 124694 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:59:15.923344 124694 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
I0108 20:59:15.923946 124694 out.go:303] Setting JSON to false
I0108 20:59:15.925106 124694 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2505,"bootTime":1673209051,"procs":425,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0108 20:59:15.925171 124694 start.go:135] virtualization: kvm guest
I0108 20:59:15.927955 124694 out.go:177] * [test-preload-205820] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
I0108 20:59:15.929374 124694 notify.go:220] Checking for updates...
I0108 20:59:15.929404 124694 out.go:177] - MINIKUBE_LOCATION=15565
I0108 20:59:15.931238 124694 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0108 20:59:15.932840 124694 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
I0108 20:59:15.935379 124694 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
I0108 20:59:15.937020 124694 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0108 20:59:15.939039 124694 config.go:180] Loaded profile config "test-preload-205820": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I0108 20:59:15.941039 124694 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
I0108 20:59:15.942409 124694 driver.go:365] Setting default libvirt URI to qemu:///system
I0108 20:59:15.970300 124694 docker.go:137] docker version: linux-20.10.22
I0108 20:59:15.970401 124694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0108 20:59:16.062763 124694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2023-01-08 20:59:15.989379004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0108 20:59:16.062862 124694 docker.go:254] overlay module found
I0108 20:59:16.065073 124694 out.go:177] * Using the docker driver based on existing profile
I0108 20:59:16.066398 124694 start.go:294] selected driver: docker
I0108 20:59:16.066409 124694 start.go:838] validating driver "docker" against &{Name:test-preload-205820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-205820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0108 20:59:16.066519 124694 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0108 20:59:16.067271 124694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0108 20:59:16.159790 124694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2023-01-08 20:59:16.087078013 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0108 20:59:16.160075 124694 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0108 20:59:16.160096 124694 cni.go:95] Creating CNI manager for ""
I0108 20:59:16.160103 124694 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I0108 20:59:16.160116 124694 start_flags.go:317] config:
{Name:test-preload-205820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-205820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0108 20:59:16.162204 124694 out.go:177] * Starting control plane node test-preload-205820 in cluster test-preload-205820
I0108 20:59:16.165845 124694 cache.go:120] Beginning downloading kic base image for docker with containerd
I0108 20:59:16.167544 124694 out.go:177] * Pulling base image ...
I0108 20:59:16.169023 124694 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I0108 20:59:16.169127 124694 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
I0108 20:59:16.191569 124694 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
I0108 20:59:16.191596 124694 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
I0108 20:59:16.488573 124694 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
I0108 20:59:16.488598 124694 cache.go:57] Caching tarball of preloaded images
I0108 20:59:16.488917 124694 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I0108 20:59:16.491216 124694 out.go:177] * Downloading Kubernetes v1.24.6 preload ...
I0108 20:59:16.492629 124694 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I0108 20:59:17.039968 124694 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4?checksum=md5:0de094b674a9198bc47721c3b23603d5 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
I0108 20:59:32.016227 124694 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I0108 20:59:32.016331 124694 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I0108 20:59:32.888834 124694 cache.go:60] Finished verifying existence of preloaded tar for v1.24.6 on containerd
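[Editor's note] The download at 20:59:17 carries the expected md5 as a ?checksum= query parameter, and the verify step at 20:59:32 re-hashes the file on disk before the preload is trusted. A minimal Go sketch of that verification, using the path and sum visible in the URL above; this illustrates the mechanism and is not minikube's actual downloader:

// Sketch: re-hash a downloaded preload tarball and compare it against the
// md5 advertised in the ?checksum=md5:... query parameter.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// Path and sum taken from the log lines above.
	err := verifyMD5("preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4",
		"0de094b674a9198bc47721c3b23603d5")
	fmt.Println(err)
}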
I0108 20:59:32.888992 124694 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/config.json ...
I0108 20:59:32.889209 124694 cache.go:193] Successfully downloaded all kic artifacts
I0108 20:59:32.889259 124694 start.go:364] acquiring machines lock for test-preload-205820: {Name:mk27a98eef575d3995d47e9b2c3065d636302b25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0108 20:59:32.889363 124694 start.go:368] acquired machines lock for "test-preload-205820" in 75.02µs
I0108 20:59:32.889385 124694 start.go:96] Skipping create...Using existing machine configuration
I0108 20:59:32.889395 124694 fix.go:55] fixHost starting:
I0108 20:59:32.889636 124694 cli_runner.go:164] Run: docker container inspect test-preload-205820 --format={{.State.Status}}
I0108 20:59:32.913783 124694 fix.go:103] recreateIfNeeded on test-preload-205820: state=Running err=<nil>
W0108 20:59:32.913829 124694 fix.go:129] unexpected machine state, will restart: <nil>
I0108 20:59:32.917800 124694 out.go:177] * Updating the running docker "test-preload-205820" container ...
I0108 20:59:32.919462 124694 machine.go:88] provisioning docker machine ...
I0108 20:59:32.919513 124694 ubuntu.go:169] provisioning hostname "test-preload-205820"
I0108 20:59:32.919568 124694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-205820
I0108 20:59:32.942125 124694 main.go:134] libmachine: Using SSH client type: native
I0108 20:59:32.942374 124694 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil> [] 0s} 127.0.0.1 32892 <nil> <nil>}
I0108 20:59:32.942400 124694 main.go:134] libmachine: About to run SSH command:
sudo hostname test-preload-205820 && echo "test-preload-205820" | sudo tee /etc/hostname
I0108 20:59:33.063328 124694 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-205820
I0108 20:59:33.063392 124694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-205820
I0108 20:59:33.086668 124694 main.go:134] libmachine: Using SSH client type: native
I0108 20:59:33.086810 124694 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil> [] 0s} 127.0.0.1 32892 <nil> <nil>}
I0108 20:59:33.086827 124694 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\stest-preload-205820' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-205820/g' /etc/hosts;
else
echo '127.0.1.1 test-preload-205820' | sudo tee -a /etc/hosts;
fi
fi
I0108 20:59:33.203200 124694 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0108 20:59:33.203231 124694 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
I0108 20:59:33.203257 124694 ubuntu.go:177] setting up certificates
I0108 20:59:33.203273 124694 provision.go:83] configureAuth start
I0108 20:59:33.203326 124694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-205820
I0108 20:59:33.226487 124694 provision.go:138] copyHostCerts
I0108 20:59:33.226543 124694 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
I0108 20:59:33.226550 124694 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
I0108 20:59:33.226616 124694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1078 bytes)
I0108 20:59:33.226699 124694 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
I0108 20:59:33.226708 124694 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
I0108 20:59:33.226734 124694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
I0108 20:59:33.226788 124694 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
I0108 20:59:33.226795 124694 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
I0108 20:59:33.226817 124694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
I0108 20:59:33.226869 124694 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.test-preload-205820 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-205820]
I0108 20:59:33.438802 124694 provision.go:172] copyRemoteCerts
I0108 20:59:33.438859 124694 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0108 20:59:33.438889 124694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-205820
I0108 20:59:33.462207 124694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/test-preload-205820/id_rsa Username:docker}
I0108 20:59:33.550321 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0108 20:59:33.566609 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0108 20:59:33.582624 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0108 20:59:33.598229 124694 provision.go:86] duration metric: configureAuth took 394.945613ms
I0108 20:59:33.598253 124694 ubuntu.go:193] setting minikube options for container-runtime
I0108 20:59:33.598410 124694 config.go:180] Loaded profile config "test-preload-205820": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.6
I0108 20:59:33.598423 124694 machine.go:91] provisioned docker machine in 678.92515ms
I0108 20:59:33.598432 124694 start.go:300] post-start starting for "test-preload-205820" (driver="docker")
I0108 20:59:33.598441 124694 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0108 20:59:33.598485 124694 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0108 20:59:33.598529 124694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-205820
I0108 20:59:33.620869 124694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/test-preload-205820/id_rsa Username:docker}
I0108 20:59:33.706833 124694 ssh_runner.go:195] Run: cat /etc/os-release
I0108 20:59:33.709432 124694 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0108 20:59:33.709452 124694 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0108 20:59:33.709460 124694 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0108 20:59:33.709466 124694 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0108 20:59:33.709473 124694 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
I0108 20:59:33.709515 124694 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
I0108 20:59:33.709584 124694 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem -> 103722.pem in /etc/ssl/certs
I0108 20:59:33.709657 124694 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0108 20:59:33.716065 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /etc/ssl/certs/103722.pem (1708 bytes)
I0108 20:59:33.732647 124694 start.go:303] post-start completed in 134.201143ms
I0108 20:59:33.732700 124694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0108 20:59:33.732750 124694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-205820
I0108 20:59:33.756085 124694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/test-preload-205820/id_rsa Username:docker}
I0108 20:59:33.835916 124694 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0108 20:59:33.839883 124694 fix.go:57] fixHost completed within 950.482339ms
I0108 20:59:33.839906 124694 start.go:83] releasing machines lock for "test-preload-205820", held for 950.52777ms
I0108 20:59:33.839991 124694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-205820
I0108 20:59:33.862646 124694 ssh_runner.go:195] Run: cat /version.json
I0108 20:59:33.862692 124694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-205820
I0108 20:59:33.862773 124694 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0108 20:59:33.862826 124694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-205820
I0108 20:59:33.886491 124694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/test-preload-205820/id_rsa Username:docker}
I0108 20:59:33.886912 124694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/test-preload-205820/id_rsa Username:docker}
I0108 20:59:33.984937 124694 ssh_runner.go:195] Run: systemctl --version
I0108 20:59:33.988836 124694 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0108 20:59:34.000114 124694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0108 20:59:34.008642 124694 docker.go:189] disabling docker service ...
I0108 20:59:34.008693 124694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0108 20:59:34.017530 124694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0108 20:59:34.025801 124694 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0108 20:59:34.122708 124694 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0108 20:59:34.217961 124694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0108 20:59:34.226765 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0108 20:59:34.238797 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
I0108 20:59:34.246194 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
I0108 20:59:34.253558 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
I0108 20:59:34.261040 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
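[Editor's note] The four sed invocations above rewrite /etc/containerd/config.toml in place: pin the sandbox (pause) image, relax OOM score restrictions, switch off the systemd cgroup driver, and point the CNI conf_dir at minikube's directory. A sketch of the same edits as Go regexp replacements over a stand-in config fragment:

// Sketch: apply the sed-equivalent line rewrites to a containerd config.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `sandbox_image = "k8s.gcr.io/pause:3.6"
restrict_oom_score_adj = true
SystemdCgroup = true
conf_dir = "/etc/cni/net.d"
`
	rules := []struct{ pattern, replacement string }{
		{`(?m)^.*sandbox_image = .*$`, `sandbox_image = "k8s.gcr.io/pause:3.7"`},
		{`(?m)^.*restrict_oom_score_adj = .*$`, `restrict_oom_score_adj = false`},
		{`(?m)^.*SystemdCgroup = .*$`, `SystemdCgroup = false`},
		{`(?m)^.*conf_dir = .*$`, `conf_dir = "/etc/cni/net.mk"`},
	}
	for _, r := range rules {
		conf = regexp.MustCompile(r.pattern).ReplaceAllString(conf, r.replacement)
	}
	fmt.Print(conf)
}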
I0108 20:59:34.268683 124694 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0108 20:59:34.274677 124694 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0108 20:59:34.280603 124694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0108 20:59:34.370755 124694 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0108 20:59:34.445671 124694 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
I0108 20:59:34.445735 124694 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0108 20:59:34.449843 124694 start.go:472] Will wait 60s for crictl version
I0108 20:59:34.449900 124694 ssh_runner.go:195] Run: sudo crictl version
I0108 20:59:34.476629 124694 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
stdout:
stderr:
time="2023-01-08T20:59:34Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
I0108 20:59:45.523600 124694 ssh_runner.go:195] Run: sudo crictl version
I0108 20:59:45.547086 124694 start.go:481] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.6.10
RuntimeApiVersion: v1alpha2
I0108 20:59:45.547154 124694 ssh_runner.go:195] Run: containerd --version
I0108 20:59:45.569590 124694 ssh_runner.go:195] Run: containerd --version
I0108 20:59:45.594001 124694 out.go:177] * Preparing Kubernetes v1.24.6 on containerd 1.6.10 ...
I0108 20:59:45.595715 124694 cli_runner.go:164] Run: docker network inspect test-preload-205820 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0108 20:59:45.617246 124694 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
I0108 20:59:45.620504 124694 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I0108 20:59:45.620559 124694 ssh_runner.go:195] Run: sudo crictl images --output json
I0108 20:59:45.642354 124694 containerd.go:549] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.6". assuming images are not preloaded.
I0108 20:59:45.642439 124694 ssh_runner.go:195] Run: which lz4
I0108 20:59:45.645255 124694 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0108 20:59:45.648306 124694 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/preloaded.tar.lz4': No such file or directory
I0108 20:59:45.648333 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458739102 bytes)
I0108 20:59:46.604476 124694 containerd.go:496] Took 0.959252 seconds to copy over tarball
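[Editor's note] The stat at 20:59:45 is an existence probe: the scp only runs because /preloaded.tar.lz4 is absent on the node. The pattern in miniature, with local files standing in for the SSH transfer and copyIfMissing as a hypothetical helper:

// Sketch: skip an expensive copy when the destination already exists.
package main

import (
	"fmt"
	"io"
	"os"
)

func copyIfMissing(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		fmt.Println("copy: skipping", dst, "(exists)")
		return nil
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	fmt.Println(copyIfMissing("preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4", "/preloaded.tar.lz4"))
}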
I0108 20:59:46.604556 124694 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0108 20:59:49.388621 124694 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.784042744s)
I0108 20:59:49.388652 124694 containerd.go:503] Took 2.784153 seconds to extract the tarball
I0108 20:59:49.388661 124694 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0108 20:59:49.410719 124694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0108 20:59:49.511828 124694 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0108 20:59:49.595221 124694 ssh_runner.go:195] Run: sudo crictl images --output json
I0108 20:59:49.633196 124694 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.24.6 k8s.gcr.io/kube-controller-manager:v1.24.6 k8s.gcr.io/kube-scheduler:v1.24.6 k8s.gcr.io/kube-proxy:v1.24.6 k8s.gcr.io/pause:3.7 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
I0108 20:59:49.633289 124694 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0108 20:59:49.633307 124694 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.24.6
I0108 20:59:49.633331 124694 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.24.6
I0108 20:59:49.633356 124694 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.3-0
I0108 20:59:49.633443 124694 image.go:134] retrieving image: k8s.gcr.io/pause:3.7
I0108 20:59:49.633489 124694 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
I0108 20:59:49.633318 124694 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.24.6
I0108 20:59:49.633821 124694 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.24.6
I0108 20:59:49.634498 124694 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.24.6: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.24.6
I0108 20:59:49.634524 124694 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.5.3-0: Error: No such image: k8s.gcr.io/etcd:3.5.3-0
I0108 20:59:49.634567 124694 image.go:177] daemon lookup for k8s.gcr.io/pause:3.7: Error: No such image: k8s.gcr.io/pause:3.7
I0108 20:59:49.634498 124694 image.go:177] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.6: Error: No such image: k8s.gcr.io/coredns/coredns:v1.8.6
I0108 20:59:49.634576 124694 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0108 20:59:49.634592 124694 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.24.6: Error: No such image: k8s.gcr.io/kube-scheduler:v1.24.6
I0108 20:59:49.634597 124694 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.24.6: Error: No such image: k8s.gcr.io/kube-apiserver:v1.24.6
I0108 20:59:49.634594 124694 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.24.6: Error: No such image: k8s.gcr.io/kube-proxy:v1.24.6
I0108 20:59:50.047554 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.7"
I0108 20:59:50.082929 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.6"
I0108 20:59:50.099888 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.3-0"
I0108 20:59:50.103323 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.6"
I0108 20:59:50.117424 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
I0108 20:59:50.146323 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.6"
I0108 20:59:50.152220 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.6"
I0108 20:59:50.398896 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
I0108 20:59:50.629706 124694 cache_images.go:116] "k8s.gcr.io/pause:3.7" needs transfer: "k8s.gcr.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
I0108 20:59:50.629756 124694 cri.go:216] Removing image: k8s.gcr.io/pause:3.7
I0108 20:59:50.629794 124694 ssh_runner.go:195] Run: which crictl
I0108 20:59:50.816705 124694 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.24.6" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.24.6" does not exist at hash "c6c20157a42337ecb7675be59e1dc34bc5a91288c7eeac1e30ec97767a9055eb" in container runtime
I0108 20:59:50.816826 124694 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.24.6
I0108 20:59:50.816908 124694 ssh_runner.go:195] Run: which crictl
I0108 20:59:50.834757 124694 cache_images.go:116] "k8s.gcr.io/etcd:3.5.3-0" needs transfer: "k8s.gcr.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
I0108 20:59:50.834807 124694 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.3-0
I0108 20:59:50.834848 124694 ssh_runner.go:195] Run: which crictl
I0108 20:59:50.922638 124694 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.24.6" needs transfer: "k8s.gcr.io/kube-scheduler:v1.24.6" does not exist at hash "c786c777a4e1c21907e77042428837645fa382d3bd14925cf78f0d163d6d332e" in container runtime
I0108 20:59:50.922682 124694 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.24.6
I0108 20:59:50.922719 124694 ssh_runner.go:195] Run: which crictl
I0108 20:59:50.934129 124694 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
I0108 20:59:51.000970 124694 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
I0108 20:59:50.942667 124694 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.24.6" needs transfer: "k8s.gcr.io/kube-proxy:v1.24.6" does not exist at hash "0bb39497ab33bb5f8aaff88ced53a5fcd360fcf5da647609619d4f5c8f1483d2" in container runtime
I0108 20:59:51.001020 124694 ssh_runner.go:195] Run: which crictl
I0108 20:59:51.001040 124694 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.24.6
I0108 20:59:51.001068 124694 ssh_runner.go:195] Run: which crictl
I0108 20:59:51.015918 124694 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.24.6" needs transfer: "k8s.gcr.io/kube-apiserver:v1.24.6" does not exist at hash "860f263331c9513ddab44d4d8a9a4a7304313b3aa0776decc1d7fc6acdd69ab0" in container runtime
I0108 20:59:51.015958 124694 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.24.6
I0108 20:59:51.016003 124694 ssh_runner.go:195] Run: which crictl
I0108 20:59:51.052154 124694 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
I0108 20:59:51.052200 124694 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I0108 20:59:51.052241 124694 ssh_runner.go:195] Run: which crictl
I0108 20:59:51.052242 124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7
I0108 20:59:51.052305 124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.6
I0108 20:59:51.052367 124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0
I0108 20:59:51.052412 124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.6
I0108 20:59:51.052474 124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
I0108 20:59:51.052542 124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6
I0108 20:59:52.140730 124694 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7: (1.088416372s)
I0108 20:59:52.140757 124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7
I0108 20:59:52.140759 124694 ssh_runner.go:235] Completed: which crictl: (1.088481701s)
I0108 20:59:52.140801 124694 ssh_runner.go:235] Completed: which crictl: (1.124782782s)
I0108 20:59:52.140815 124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I0108 20:59:52.140840 124694 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
I0108 20:59:52.140885 124694 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.6: (1.088559722s)
I0108 20:59:52.140843 124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.6
I0108 20:59:52.140906 124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.6
I0108 20:59:52.140996 124694 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.6: (1.088560881s)
I0108 20:59:52.141009 124694 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0: (1.088624706s)
I0108 20:59:52.141014 124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.6
I0108 20:59:52.141017 124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0
I0108 20:59:52.141071 124694 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
I0108 20:59:52.141105 124694 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6: (1.088539031s)
I0108 20:59:52.141119 124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.6
I0108 20:59:52.141068 124694 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6: (1.088569381s)
I0108 20:59:52.141133 124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
I0108 20:59:52.141193 124694 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
I0108 20:59:52.235063 124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
I0108 20:59:52.235158 124694 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
I0108 20:59:52.235188 124694 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
I0108 20:59:52.235208 124694 containerd.go:233] Loading image: /var/lib/minikube/images/pause_3.7
I0108 20:59:52.235211 124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.6
I0108 20:59:52.235244 124694 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7
I0108 20:59:52.235262 124694 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
I0108 20:59:52.235301 124694 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
I0108 20:59:52.348684 124694 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 from cache
I0108 20:59:52.348714 124694 containerd.go:233] Loading image: /var/lib/minikube/images/coredns_v1.8.6
I0108 20:59:52.348759 124694 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
I0108 20:59:52.348772 124694 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
I0108 20:59:53.355117 124694 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6: (1.006333066s)
I0108 20:59:53.355138 124694 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache
I0108 20:59:53.355161 124694 containerd.go:233] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
I0108 20:59:53.355197 124694 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
I0108 20:59:58.744440 124694 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0: (5.389207325s)
I0108 20:59:58.744469 124694 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 from cache
I0108 20:59:58.744495 124694 containerd.go:233] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I0108 20:59:58.744532 124694 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
I0108 20:59:59.645452 124694 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I0108 20:59:59.645514 124694 cache_images.go:92] LoadImages completed in 10.012283055s
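[Editor's note] LoadImages, as the lines above show, probes the containerd image store for each required image, removes mismatched copies with crictl rmi, and imports the cached tarballs with ctr. A condensed Go sketch of the check-then-import half of that loop; commands run locally here for brevity (minikube runs them over SSH), and ensureImage is a hypothetical helper:

// Sketch: import a cached image tarball only if the image is missing.
package main

import (
	"fmt"
	"os/exec"
)

func ensureImage(image, tarball string) error {
	// `ctr images check` lists images and digests; grep narrows to ours.
	check := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("sudo ctr -n=k8s.io images check | grep %s", image))
	if check.Run() == nil {
		return nil // already present in the runtime
	}
	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("import %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	fmt.Println(ensureImage("k8s.gcr.io/pause:3.7", "/var/lib/minikube/images/pause_3.7"))
}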
W0108 20:59:59.645650 124694 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.6: no such file or directory
I0108 20:59:59.645712 124694 ssh_runner.go:195] Run: sudo crictl info
I0108 20:59:59.719369 124694 cni.go:95] Creating CNI manager for ""
I0108 20:59:59.719404 124694 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I0108 20:59:59.719417 124694 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0108 20:59:59.719431 124694 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-205820 NodeName:test-preload-205820 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
I0108 20:59:59.719633 124694 kubeadm.go:163] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.67.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "test-preload-205820"
kubeletExtraArgs:
node-ip: 192.168.67.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.6
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
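[Editor's note] kubeadm.go:163 prints the fully rendered config above. A sketch of producing such a document from Go values with text/template; the template and field names here are hypothetical and trimmed to a few of the options shown, not minikube's actual template:

// Sketch: render a kubeadm InitConfiguration/ClusterConfiguration pair.
package main

import (
	"os"
	"text/template"
)

type kubeadmParams struct {
	KubernetesVersion, NodeIP, NodeName, PodSubnet string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: 8443
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	t.Execute(os.Stdout, kubeadmParams{
		KubernetesVersion: "v1.24.6",
		NodeIP:            "192.168.67.2",
		NodeName:          "test-preload-205820",
		PodSubnet:         "10.244.0.0/16",
	})
}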
I0108 20:59:59.719739 124694 kubeadm.go:962] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-205820 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.24.6 ClusterName:test-preload-205820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0108 20:59:59.719791 124694 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.6
I0108 20:59:59.726680 124694 binaries.go:44] Found k8s binaries, skipping transfer
I0108 20:59:59.726736 124694 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0108 20:59:59.734052 124694 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (512 bytes)
I0108 20:59:59.749257 124694 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0108 20:59:59.764256 124694 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
I0108 20:59:59.823242 124694 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I0108 20:59:59.826766 124694 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820 for IP: 192.168.67.2
I0108 20:59:59.826880 124694 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
I0108 20:59:59.826936 124694 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
I0108 20:59:59.827034 124694 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/client.key
I0108 20:59:59.827114 124694 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/apiserver.key.c7fa3a9e
I0108 20:59:59.827165 124694 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/proxy-client.key
I0108 20:59:59.827281 124694 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem (1338 bytes)
W0108 20:59:59.827327 124694 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372_empty.pem, impossibly tiny 0 bytes
I0108 20:59:59.827342 124694 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1675 bytes)
I0108 20:59:59.827372 124694 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1078 bytes)
I0108 20:59:59.827409 124694 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
I0108 20:59:59.827438 124694 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
I0108 20:59:59.827512 124694 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem (1708 bytes)
I0108 20:59:59.828247 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0108 20:59:59.848605 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0108 20:59:59.867107 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0108 20:59:59.929393 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0108 20:59:59.947265 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0108 20:59:59.967659 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0108 20:59:59.986203 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0108 21:00:00.028839 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0108 21:00:00.054242 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0108 21:00:00.071784 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem --> /usr/share/ca-certificates/10372.pem (1338 bytes)
I0108 21:00:00.087997 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /usr/share/ca-certificates/103722.pem (1708 bytes)
I0108 21:00:00.123064 124694 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
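"scp memory -->" entries mean the payload was generated in-process and streamed over SSH into the target path, not copied from a local file. Conceptually the transfer reduces to the sketch below; the SSH user and direct use of the node IP are placeholders, since the real connection details depend on the driver:

  echo "$GENERATED_KUBECONFIG" | ssh docker@192.168.67.2 "sudo tee /var/lib/minikube/kubeconfig >/dev/null"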
I0108 21:00:00.135539 124694 ssh_runner.go:195] Run: openssl version
I0108 21:00:00.140139 124694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0108 21:00:00.147247 124694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0108 21:00:00.150148 124694 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 8 20:28 /usr/share/ca-certificates/minikubeCA.pem
I0108 21:00:00.150197 124694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0108 21:00:00.154652 124694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0108 21:00:00.161321 124694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10372.pem && ln -fs /usr/share/ca-certificates/10372.pem /etc/ssl/certs/10372.pem"
I0108 21:00:00.169127 124694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10372.pem
I0108 21:00:00.171911 124694 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 8 20:41 /usr/share/ca-certificates/10372.pem
I0108 21:00:00.171967 124694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem
I0108 21:00:00.176639 124694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10372.pem /etc/ssl/certs/51391683.0"
I0108 21:00:00.182896 124694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103722.pem && ln -fs /usr/share/ca-certificates/103722.pem /etc/ssl/certs/103722.pem"
I0108 21:00:00.189696 124694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103722.pem
I0108 21:00:00.210855 124694 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 8 20:41 /usr/share/ca-certificates/103722.pem
I0108 21:00:00.210904 124694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103722.pem
I0108 21:00:00.215636 124694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103722.pem /etc/ssl/certs/3ec20f2e.0"
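The openssl/ln pairs above implement OpenSSL's subject-hash lookup convention: a verifier finds a CA certificate via a symlink named <subject-hash>.0 in /etc/ssl/certs. Condensed, the per-certificate pattern applied to all three PEMs is:

  pem=/usr/share/ca-certificates/minikubeCA.pem
  hash=$(openssl x509 -hash -noout -in "$pem")   # prints e.g. b5213941, matching the symlink above
  sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"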
I0108 21:00:00.222153 124694 kubeadm.go:396] StartCluster: {Name:test-preload-205820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-205820 Namespace:default APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0108 21:00:00.222257 124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0108 21:00:00.222298 124694 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0108 21:00:00.245669 124694 cri.go:87] found id: "43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4"
I0108 21:00:00.245696 124694 cri.go:87] found id: "3852802493079c7473ee812611ecf809b363dd7bd001d0400d405c7b881a6881"
I0108 21:00:00.245706 124694 cri.go:87] found id: "0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659"
I0108 21:00:00.245715 124694 cri.go:87] found id: ""
I0108 21:00:00.245772 124694 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I0108 21:00:00.277898 124694 cri.go:114] JSON = [{"ociVersion":"1.0.2-dev","id":"065e765dea3af949569c66775e9e531e06244c3c2704b71286fe12821e219459","pid":1612,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/065e765dea3af949569c66775e9e531e06244c3c2704b71286fe12821e219459","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/065e765dea3af949569c66775e9e531e06244c3c2704b71286fe12821e219459/rootfs","created":"2023-01-08T20:58:44.075786098Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/etcd:3.5.3-0","io.kubernetes.cri.sandbox-id":"5baf1c17d1d9c7eaff3acb5d5bd4124ef15339446834cad30a8d495124f7af8c","io.kubernetes.cri.sandbox-name":"etcd-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"08229dac28fdf32f0c8a99b5ed19879661ab064e2af7b68008a29005ece6a817","pid":2685,"status":"running","bundle":"/ru
n/containerd/io.containerd.runtime.v2.task/k8s.io/08229dac28fdf32f0c8a99b5ed19879661ab064e2af7b68008a29005ece6a817","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/08229dac28fdf32f0c8a99b5ed19879661ab064e2af7b68008a29005ece6a817/rootfs","created":"2023-01-08T20:59:11.277618302Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"c62f0b15060438341900e3b123cf94933897f3c4589324d6d97d440353ac76bf","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659","pid":3743,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659","rootfs":"/run/containerd/io.containerd.runtime.v2.task
/k8s.io/0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659/rootfs","created":"2023-01-08T20:59:53.536252787Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/coredns/coredns:v1.8.6","io.kubernetes.cri.sandbox-id":"ea6e68395947e48e2f41281f97f4e91b5a7feffeb418b8f118a4ff6febc92f67","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-48vmf","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"10315ae6c8ee15d86b291e89ea8c39a457d6d26ef37e9e450da97afb7e588dcb","pid":3679,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10315ae6c8ee15d86b291e89ea8c39a457d6d26ef37e9e450da97afb7e588dcb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10315ae6c8ee15d86b291e89ea8c39a457d6d26ef37e9e450da97afb7e588dcb/rootfs","created":"2023-01-08T20:59:52.952963041Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io
.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"10315ae6c8ee15d86b291e89ea8c39a457d6d26ef37e9e450da97afb7e588dcb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_bdbd16cd-b53b-4309-ad17-7915a6d7b693","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"12f6098eaba4e7de829505942e44c4f4085ce3c42f70d927e3b5900856a0c4f3","pid":2625,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12f6098eaba4e7de829505942e44c4f4085ce3c42f70d927e3b5900856a0c4f3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12f6098eaba4e7de829505942e44c4f4085ce3c42f70d927e3b5900856a0c4f3/rootfs","created":"2023-01-08T20:59:11.178050048Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubern
etes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"12f6098eaba4e7de829505942e44c4f4085ce3c42f70d927e3b5900856a0c4f3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-48vmf_d43c5f88-44b8-4ab6-bc5b-f2883eda56e2","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-48vmf","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"149d69e305eb3f1efaa60fe567837f8703c1872972fc27acc8f1c6d227988aae","pid":2211,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/149d69e305eb3f1efaa60fe567837f8703c1872972fc27acc8f1c6d227988aae","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/149d69e305eb3f1efaa60fe567837f8703c1872972fc27acc8f1c6d227988aae/rootfs","created":"2023-01-08T20:59:03.662818408Z","annotations":{"io.kubernetes.cri.container-type":"sandbo
x","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"149d69e305eb3f1efaa60fe567837f8703c1872972fc27acc8f1c6d227988aae","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-wmrz2_35e9935b-759b-4c18-9d0b-2c0daaab9a1e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-wmrz2","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2860f073544995b5c9f9bde40d0f7806528938816bcd39f0e73a07c55ea56d4c","pid":1658,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2860f073544995b5c9f9bde40d0f7806528938816bcd39f0e73a07c55ea56d4c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2860f073544995b5c9f9bde40d0f7806528938816bcd39f0e73a07c55ea56d4c/rootfs","created":"2023-01-08T20:58:44.120902562Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io
.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-scheduler:v1.24.4","io.kubernetes.cri.sandbox-id":"c5dd41e6d66bbfaa4f5efd0db05d450c6aa0ddbc1944776ae1b1426cd15cce65","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2d6cd5a5cf0dff47d97cfde8133a2e4146d1d9d16da6d9b609e7cfaec2870071","pid":2488,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d6cd5a5cf0dff47d97cfde8133a2e4146d1d9d16da6d9b609e7cfaec2870071","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d6cd5a5cf0dff47d97cfde8133a2e4146d1d9d16da6d9b609e7cfaec2870071/rootfs","created":"2023-01-08T20:59:07.90993923Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20221004-44d545d1","io.kubernetes.cri.sandbox-id":"414cc3f9f286440d220
4fe71003531e96f7b5ffa2bef2badce6c2718bbfa118a","io.kubernetes.cri.sandbox-name":"kindnet-mtvg5","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"40db066158605ac15ff3157b7b668db3f5d83f46b55f96e4a63fd5f2f68fe4bd","pid":1657,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/40db066158605ac15ff3157b7b668db3f5d83f46b55f96e4a63fd5f2f68fe4bd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/40db066158605ac15ff3157b7b668db3f5d83f46b55f96e4a63fd5f2f68fe4bd/rootfs","created":"2023-01-08T20:58:44.121187645Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-apiserver:v1.24.4","io.kubernetes.cri.sandbox-id":"a2b1f431ca407ff679e24ad0915325804e13945554f39501c29ac7dcf5ab81f3","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"oci
Version":"1.0.2-dev","id":"414cc3f9f286440d2204fe71003531e96f7b5ffa2bef2badce6c2718bbfa118a","pid":2210,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/414cc3f9f286440d2204fe71003531e96f7b5ffa2bef2badce6c2718bbfa118a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/414cc3f9f286440d2204fe71003531e96f7b5ffa2bef2badce6c2718bbfa118a/rootfs","created":"2023-01-08T20:59:03.715705604Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"414cc3f9f286440d2204fe71003531e96f7b5ffa2bef2badce6c2718bbfa118a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-mtvg5_1257f157-44a7-41fe-9d98-48b85ce53a40","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-mtvg5","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersio
n":"1.0.2-dev","id":"41a770ac731a5ae4200eacc0455165d3f5abe4238ba1b809b9bbec6a877ae265","pid":3646,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/41a770ac731a5ae4200eacc0455165d3f5abe4238ba1b809b9bbec6a877ae265","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/41a770ac731a5ae4200eacc0455165d3f5abe4238ba1b809b9bbec6a877ae265/rootfs","created":"2023-01-08T20:59:52.914259586Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"41a770ac731a5ae4200eacc0455165d3f5abe4238ba1b809b9bbec6a877ae265","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-mtvg5_1257f157-44a7-41fe-9d98-48b85ce53a40","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-mtvg5","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.
0.2-dev","id":"43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4","pid":4073,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4/rootfs","created":"2023-01-08T20:59:59.961439321Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-controller-manager:v1.24.4","io.kubernetes.cri.sandbox-id":"73eb503a8e9b1969ff25de7374afecfdfabf0a0f3184762e88e418587d2ef855","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5baf1c17d1d9c7eaff3acb5d5bd4124ef15339446834cad30a8d495124f7af8c","pid":1522,"status":"running","bundle":"/run/containerd/io.con
tainerd.runtime.v2.task/k8s.io/5baf1c17d1d9c7eaff3acb5d5bd4124ef15339446834cad30a8d495124f7af8c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5baf1c17d1d9c7eaff3acb5d5bd4124ef15339446834cad30a8d495124f7af8c/rootfs","created":"2023-01-08T20:58:43.912562088Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"5baf1c17d1d9c7eaff3acb5d5bd4124ef15339446834cad30a8d495124f7af8c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-205820_0106aa4904eaf95a3dcc4972da83cce0","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"67d1de92c3988f3adff4d3de84bdc8c6f1c660706e7dff753e1f453c0993d5d6","pid":1611,"status":"running","bundle":"/run/containerd/io.co
ntainerd.runtime.v2.task/k8s.io/67d1de92c3988f3adff4d3de84bdc8c6f1c660706e7dff753e1f453c0993d5d6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/67d1de92c3988f3adff4d3de84bdc8c6f1c660706e7dff753e1f453c0993d5d6/rootfs","created":"2023-01-08T20:58:44.078720095Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-controller-manager:v1.24.4","io.kubernetes.cri.sandbox-id":"d274a65e4bd997f4bd5835495bd5b6b904b71635c6f492b9dd0258c9bfc2139f","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7115d483ef35744dd9ad8782f5bc6319c62e46465fe05194cb6a22e76923e111","pid":3579,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7115d483ef35744dd9ad8782f5bc6319c62e46465fe05194cb6a22e76923e111","rootfs":"/run/containerd/io.containerd.runti
me.v2.task/k8s.io/7115d483ef35744dd9ad8782f5bc6319c62e46465fe05194cb6a22e76923e111/rootfs","created":"2023-01-08T20:59:52.820275074Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"7115d483ef35744dd9ad8782f5bc6319c62e46465fe05194cb6a22e76923e111","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-205820_0d00ad4c93ccd906fbcaecbff49fd727","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"73eb503a8e9b1969ff25de7374afecfdfabf0a0f3184762e88e418587d2ef855","pid":3470,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/73eb503a8e9b1969ff25de7374afecfdfabf0a0f3184762e88e418587d2ef855","rootfs":"/run/container
d/io.containerd.runtime.v2.task/k8s.io/73eb503a8e9b1969ff25de7374afecfdfabf0a0f3184762e88e418587d2ef855/rootfs","created":"2023-01-08T20:59:52.622948749Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"73eb503a8e9b1969ff25de7374afecfdfabf0a0f3184762e88e418587d2ef855","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-205820_3137f4b6a8ebd97ba2fc8851160ac0b1","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"833ea9ab2785ebbe54bac37196e4ff5abd83fc20de316ea74546fe19d4e0496d","pid":3442,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/833ea9ab2785ebbe54bac37196e4ff5abd83fc20de316ea7454
6fe19d4e0496d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/833ea9ab2785ebbe54bac37196e4ff5abd83fc20de316ea74546fe19d4e0496d/rootfs","created":"2023-01-08T20:59:52.55532244Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"833ea9ab2785ebbe54bac37196e4ff5abd83fc20de316ea74546fe19d4e0496d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-205820_044b6365f10644e1fab9f12495485e76","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a2b1f431ca407ff679e24ad0915325804e13945554f39501c29ac7dcf5ab81f3","pid":1520,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2b1f431ca407ff679e24ad09153258
04e13945554f39501c29ac7dcf5ab81f3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2b1f431ca407ff679e24ad0915325804e13945554f39501c29ac7dcf5ab81f3/rootfs","created":"2023-01-08T20:58:43.914531641Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"a2b1f431ca407ff679e24ad0915325804e13945554f39501c29ac7dcf5ab81f3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-205820_044b6365f10644e1fab9f12495485e76","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c5391d45b9b2afdd922ee8a2dd0be2e95703799a2791c0fd2d86eca5b63c6462","pid":2246,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c5391d45b9
b2afdd922ee8a2dd0be2e95703799a2791c0fd2d86eca5b63c6462","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c5391d45b9b2afdd922ee8a2dd0be2e95703799a2791c0fd2d86eca5b63c6462/rootfs","created":"2023-01-08T20:59:03.781592888Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-proxy:v1.24.4","io.kubernetes.cri.sandbox-id":"149d69e305eb3f1efaa60fe567837f8703c1872972fc27acc8f1c6d227988aae","io.kubernetes.cri.sandbox-name":"kube-proxy-wmrz2","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c5dd41e6d66bbfaa4f5efd0db05d450c6aa0ddbc1944776ae1b1426cd15cce65","pid":1521,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c5dd41e6d66bbfaa4f5efd0db05d450c6aa0ddbc1944776ae1b1426cd15cce65","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c5dd41e6d66bbfaa4f5efd0db05d450c6aa0ddbc1944776ae1b1426cd15cce65/rootfs","cre
ated":"2023-01-08T20:58:43.918296824Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"c5dd41e6d66bbfaa4f5efd0db05d450c6aa0ddbc1944776ae1b1426cd15cce65","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-205820_0d00ad4c93ccd906fbcaecbff49fd727","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c62f0b15060438341900e3b123cf94933897f3c4589324d6d97d440353ac76bf","pid":2624,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c62f0b15060438341900e3b123cf94933897f3c4589324d6d97d440353ac76bf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c62f0b15060438341900e3b123cf94933897f3c4589324d6d97d4403
53ac76bf/rootfs","created":"2023-01-08T20:59:11.177965157Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"c62f0b15060438341900e3b123cf94933897f3c4589324d6d97d440353ac76bf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_bdbd16cd-b53b-4309-ad17-7915a6d7b693","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c7b49e0771503321fcb6afb14a89ae5ab349eac5aefa3e765ae4aafd15863848","pid":2686,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7b49e0771503321fcb6afb14a89ae5ab349eac5aefa3e765ae4aafd15863848","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7b49e0771503321fcb6afb14a89ae5ab349eac5aefa3e765ae4aafd1586384
8/rootfs","created":"2023-01-08T20:59:11.277494639Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/coredns/coredns:v1.8.6","io.kubernetes.cri.sandbox-id":"12f6098eaba4e7de829505942e44c4f4085ce3c42f70d927e3b5900856a0c4f3","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-48vmf","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d274a65e4bd997f4bd5835495bd5b6b904b71635c6f492b9dd0258c9bfc2139f","pid":1523,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d274a65e4bd997f4bd5835495bd5b6b904b71635c6f492b9dd0258c9bfc2139f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d274a65e4bd997f4bd5835495bd5b6b904b71635c6f492b9dd0258c9bfc2139f/rootfs","created":"2023-01-08T20:58:43.918339088Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox
-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"d274a65e4bd997f4bd5835495bd5b6b904b71635c6f492b9dd0258c9bfc2139f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-205820_3137f4b6a8ebd97ba2fc8851160ac0b1","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ea6e68395947e48e2f41281f97f4e91b5a7feffeb418b8f118a4ff6febc92f67","pid":3427,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea6e68395947e48e2f41281f97f4e91b5a7feffeb418b8f118a4ff6febc92f67","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea6e68395947e48e2f41281f97f4e91b5a7feffeb418b8f118a4ff6febc92f67/rootfs","created":"2023-01-08T20:59:52.545724953Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-peri
od":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ea6e68395947e48e2f41281f97f4e91b5a7feffeb418b8f118a4ff6febc92f67","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-48vmf_d43c5f88-44b8-4ab6-bc5b-f2883eda56e2","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-48vmf","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ed412739ffbaa811fd7b639bd53de4bf186279e187d43438a1620ba9de9aa8a0","pid":3658,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed412739ffbaa811fd7b639bd53de4bf186279e187d43438a1620ba9de9aa8a0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed412739ffbaa811fd7b639bd53de4bf186279e187d43438a1620ba9de9aa8a0/rootfs","created":"2023-01-08T20:59:52.920247257Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.san
dbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"ed412739ffbaa811fd7b639bd53de4bf186279e187d43438a1620ba9de9aa8a0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-wmrz2_35e9935b-759b-4c18-9d0b-2c0daaab9a1e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-wmrz2","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ef80b56694812594b25b4661c79674f45b6f1b36480e127253f7fddbaacea2cb","pid":3534,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef80b56694812594b25b4661c79674f45b6f1b36480e127253f7fddbaacea2cb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef80b56694812594b25b4661c79674f45b6f1b36480e127253f7fddbaacea2cb/rootfs","created":"2023-01-08T20:59:52.73552926Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-perio
d":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ef80b56694812594b25b4661c79674f45b6f1b36480e127253f7fddbaacea2cb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-205820_0106aa4904eaf95a3dcc4972da83cce0","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"}]
I0108 21:00:00.278314 124694 cri.go:124] list returned 26 containers
I0108 21:00:00.278332 124694 cri.go:127] container: {ID:065e765dea3af949569c66775e9e531e06244c3c2704b71286fe12821e219459 Status:running}
I0108 21:00:00.278347 124694 cri.go:129] skipping 065e765dea3af949569c66775e9e531e06244c3c2704b71286fe12821e219459 - not in ps
I0108 21:00:00.278355 124694 cri.go:127] container: {ID:08229dac28fdf32f0c8a99b5ed19879661ab064e2af7b68008a29005ece6a817 Status:running}
I0108 21:00:00.278368 124694 cri.go:129] skipping 08229dac28fdf32f0c8a99b5ed19879661ab064e2af7b68008a29005ece6a817 - not in ps
I0108 21:00:00.278384 124694 cri.go:127] container: {ID:0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659 Status:running}
I0108 21:00:00.278397 124694 cri.go:133] skipping {0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659 running}: state = "running", want "paused"
I0108 21:00:00.278410 124694 cri.go:127] container: {ID:10315ae6c8ee15d86b291e89ea8c39a457d6d26ef37e9e450da97afb7e588dcb Status:running}
I0108 21:00:00.278422 124694 cri.go:129] skipping 10315ae6c8ee15d86b291e89ea8c39a457d6d26ef37e9e450da97afb7e588dcb - not in ps
I0108 21:00:00.278433 124694 cri.go:127] container: {ID:12f6098eaba4e7de829505942e44c4f4085ce3c42f70d927e3b5900856a0c4f3 Status:running}
I0108 21:00:00.278442 124694 cri.go:129] skipping 12f6098eaba4e7de829505942e44c4f4085ce3c42f70d927e3b5900856a0c4f3 - not in ps
I0108 21:00:00.278451 124694 cri.go:127] container: {ID:149d69e305eb3f1efaa60fe567837f8703c1872972fc27acc8f1c6d227988aae Status:running}
I0108 21:00:00.278461 124694 cri.go:129] skipping 149d69e305eb3f1efaa60fe567837f8703c1872972fc27acc8f1c6d227988aae - not in ps
I0108 21:00:00.278471 124694 cri.go:127] container: {ID:2860f073544995b5c9f9bde40d0f7806528938816bcd39f0e73a07c55ea56d4c Status:running}
I0108 21:00:00.278482 124694 cri.go:129] skipping 2860f073544995b5c9f9bde40d0f7806528938816bcd39f0e73a07c55ea56d4c - not in ps
I0108 21:00:00.278493 124694 cri.go:127] container: {ID:2d6cd5a5cf0dff47d97cfde8133a2e4146d1d9d16da6d9b609e7cfaec2870071 Status:running}
I0108 21:00:00.278502 124694 cri.go:129] skipping 2d6cd5a5cf0dff47d97cfde8133a2e4146d1d9d16da6d9b609e7cfaec2870071 - not in ps
I0108 21:00:00.278512 124694 cri.go:127] container: {ID:40db066158605ac15ff3157b7b668db3f5d83f46b55f96e4a63fd5f2f68fe4bd Status:running}
I0108 21:00:00.278525 124694 cri.go:129] skipping 40db066158605ac15ff3157b7b668db3f5d83f46b55f96e4a63fd5f2f68fe4bd - not in ps
I0108 21:00:00.278536 124694 cri.go:127] container: {ID:414cc3f9f286440d2204fe71003531e96f7b5ffa2bef2badce6c2718bbfa118a Status:running}
I0108 21:00:00.278547 124694 cri.go:129] skipping 414cc3f9f286440d2204fe71003531e96f7b5ffa2bef2badce6c2718bbfa118a - not in ps
I0108 21:00:00.278554 124694 cri.go:127] container: {ID:41a770ac731a5ae4200eacc0455165d3f5abe4238ba1b809b9bbec6a877ae265 Status:running}
I0108 21:00:00.278566 124694 cri.go:129] skipping 41a770ac731a5ae4200eacc0455165d3f5abe4238ba1b809b9bbec6a877ae265 - not in ps
I0108 21:00:00.278576 124694 cri.go:127] container: {ID:43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4 Status:running}
I0108 21:00:00.278588 124694 cri.go:133] skipping {43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4 running}: state = "running", want "paused"
I0108 21:00:00.278603 124694 cri.go:127] container: {ID:5baf1c17d1d9c7eaff3acb5d5bd4124ef15339446834cad30a8d495124f7af8c Status:running}
I0108 21:00:00.278615 124694 cri.go:129] skipping 5baf1c17d1d9c7eaff3acb5d5bd4124ef15339446834cad30a8d495124f7af8c - not in ps
I0108 21:00:00.278633 124694 cri.go:127] container: {ID:67d1de92c3988f3adff4d3de84bdc8c6f1c660706e7dff753e1f453c0993d5d6 Status:running}
I0108 21:00:00.278644 124694 cri.go:129] skipping 67d1de92c3988f3adff4d3de84bdc8c6f1c660706e7dff753e1f453c0993d5d6 - not in ps
I0108 21:00:00.278651 124694 cri.go:127] container: {ID:7115d483ef35744dd9ad8782f5bc6319c62e46465fe05194cb6a22e76923e111 Status:running}
I0108 21:00:00.278660 124694 cri.go:129] skipping 7115d483ef35744dd9ad8782f5bc6319c62e46465fe05194cb6a22e76923e111 - not in ps
I0108 21:00:00.278667 124694 cri.go:127] container: {ID:73eb503a8e9b1969ff25de7374afecfdfabf0a0f3184762e88e418587d2ef855 Status:running}
I0108 21:00:00.278679 124694 cri.go:129] skipping 73eb503a8e9b1969ff25de7374afecfdfabf0a0f3184762e88e418587d2ef855 - not in ps
I0108 21:00:00.278687 124694 cri.go:127] container: {ID:833ea9ab2785ebbe54bac37196e4ff5abd83fc20de316ea74546fe19d4e0496d Status:running}
I0108 21:00:00.278699 124694 cri.go:129] skipping 833ea9ab2785ebbe54bac37196e4ff5abd83fc20de316ea74546fe19d4e0496d - not in ps
I0108 21:00:00.278707 124694 cri.go:127] container: {ID:a2b1f431ca407ff679e24ad0915325804e13945554f39501c29ac7dcf5ab81f3 Status:running}
I0108 21:00:00.278719 124694 cri.go:129] skipping a2b1f431ca407ff679e24ad0915325804e13945554f39501c29ac7dcf5ab81f3 - not in ps
I0108 21:00:00.278729 124694 cri.go:127] container: {ID:c5391d45b9b2afdd922ee8a2dd0be2e95703799a2791c0fd2d86eca5b63c6462 Status:running}
I0108 21:00:00.278737 124694 cri.go:129] skipping c5391d45b9b2afdd922ee8a2dd0be2e95703799a2791c0fd2d86eca5b63c6462 - not in ps
I0108 21:00:00.278744 124694 cri.go:127] container: {ID:c5dd41e6d66bbfaa4f5efd0db05d450c6aa0ddbc1944776ae1b1426cd15cce65 Status:running}
I0108 21:00:00.278756 124694 cri.go:129] skipping c5dd41e6d66bbfaa4f5efd0db05d450c6aa0ddbc1944776ae1b1426cd15cce65 - not in ps
I0108 21:00:00.278767 124694 cri.go:127] container: {ID:c62f0b15060438341900e3b123cf94933897f3c4589324d6d97d440353ac76bf Status:running}
I0108 21:00:00.278780 124694 cri.go:129] skipping c62f0b15060438341900e3b123cf94933897f3c4589324d6d97d440353ac76bf - not in ps
I0108 21:00:00.278790 124694 cri.go:127] container: {ID:c7b49e0771503321fcb6afb14a89ae5ab349eac5aefa3e765ae4aafd15863848 Status:running}
I0108 21:00:00.278804 124694 cri.go:129] skipping c7b49e0771503321fcb6afb14a89ae5ab349eac5aefa3e765ae4aafd15863848 - not in ps
I0108 21:00:00.278814 124694 cri.go:127] container: {ID:d274a65e4bd997f4bd5835495bd5b6b904b71635c6f492b9dd0258c9bfc2139f Status:running}
I0108 21:00:00.278822 124694 cri.go:129] skipping d274a65e4bd997f4bd5835495bd5b6b904b71635c6f492b9dd0258c9bfc2139f - not in ps
I0108 21:00:00.278830 124694 cri.go:127] container: {ID:ea6e68395947e48e2f41281f97f4e91b5a7feffeb418b8f118a4ff6febc92f67 Status:running}
I0108 21:00:00.278842 124694 cri.go:129] skipping ea6e68395947e48e2f41281f97f4e91b5a7feffeb418b8f118a4ff6febc92f67 - not in ps
I0108 21:00:00.278852 124694 cri.go:127] container: {ID:ed412739ffbaa811fd7b639bd53de4bf186279e187d43438a1620ba9de9aa8a0 Status:running}
I0108 21:00:00.278862 124694 cri.go:129] skipping ed412739ffbaa811fd7b639bd53de4bf186279e187d43438a1620ba9de9aa8a0 - not in ps
I0108 21:00:00.278872 124694 cri.go:127] container: {ID:ef80b56694812594b25b4661c79674f45b6f1b36480e127253f7fddbaacea2cb Status:running}
I0108 21:00:00.278883 124694 cri.go:129] skipping ef80b56694812594b25b4661c79674f45b6f1b36480e127253f7fddbaacea2cb - not in ps
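Everything above is classification of the runc JSON dump: sandbox records are skipped as "not in ps" because only the three IDs that crictl reported for the kube-system label are candidates, and those three are then skipped too because this pass only collects paused containers. Assuming jq is available wherever the JSON is captured, the same id/status/pod view can be extracted with:

  sudo runc --root /run/containerd/runc/k8s.io list -f json \
    | jq -r '.[] | .id[0:12] + "  " + .status + "  " + (.annotations["io.kubernetes.cri.sandbox-name"] // "-")'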
I0108 21:00:00.278925 124694 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0108 21:00:00.286080 124694 kubeadm.go:411] found existing configuration files, will attempt cluster restart
I0108 21:00:00.286102 124694 kubeadm.go:627] restartCluster start
I0108 21:00:00.286141 124694 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0108 21:00:00.292256 124694 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0108 21:00:00.292769 124694 kubeconfig.go:92] found "test-preload-205820" server: "https://192.168.67.2:8443"
I0108 21:00:00.293379 124694 kapi.go:59] client config for test-preload-205820: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0108 21:00:00.293896 124694 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0108 21:00:00.302755 124694 kubeadm.go:594] needs reconfigure: configs differ:
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml 2023-01-08 20:58:39.826861611 +0000
+++ /var/tmp/minikube/kubeadm.yaml.new 2023-01-08 20:59:59.816713998 +0000
@@ -38,7 +38,7 @@
     dataDir: /var/lib/minikube/etcd
     extraArgs:
       proxy-refresh-interval: "70000"
-kubernetesVersion: v1.24.4
+kubernetesVersion: v1.24.6
 networking:
   dnsDomain: cluster.local
   podSubnet: "10.244.0.0/16"
-- /stdout --
I0108 21:00:00.302770 124694 kubeadm.go:1114] stopping kube-system containers ...
I0108 21:00:00.302789 124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0108 21:00:00.302824 124694 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0108 21:00:00.329264 124694 cri.go:87] found id: "43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4"
I0108 21:00:00.329296 124694 cri.go:87] found id: "3852802493079c7473ee812611ecf809b363dd7bd001d0400d405c7b881a6881"
I0108 21:00:00.329308 124694 cri.go:87] found id: "0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659"
I0108 21:00:00.329317 124694 cri.go:87] found id: ""
I0108 21:00:00.329323 124694 cri.go:232] Stopping containers: [43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4 3852802493079c7473ee812611ecf809b363dd7bd001d0400d405c7b881a6881 0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659]
I0108 21:00:00.329366 124694 ssh_runner.go:195] Run: which crictl
I0108 21:00:00.332622 124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4 3852802493079c7473ee812611ecf809b363dd7bd001d0400d405c7b881a6881 0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659
I0108 21:00:00.624345 124694 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0108 21:00:00.699226 124694 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0108 21:00:00.706356 124694 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5639 Jan 8 20:58 /etc/kubernetes/admin.conf
-rw------- 1 root root 5652 Jan 8 20:58 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2015 Jan 8 20:58 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5604 Jan 8 20:58 /etc/kubernetes/scheduler.conf
I0108 21:00:00.706408 124694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0108 21:00:00.713037 124694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0108 21:00:00.719542 124694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0108 21:00:00.725937 124694 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0108 21:00:00.725991 124694 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0108 21:00:00.731944 124694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0108 21:00:00.738208 124694 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0108 21:00:00.738259 124694 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
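The grep/rm pairs above are a guard: any kubeconfig that does not already point at the stable control-plane name is removed so the kubeadm phases below regenerate it against https://control-plane.minikube.internal:8443. Condensed, the per-file pattern is:

  f=/etc/kubernetes/scheduler.conf
  sudo grep -q "https://control-plane.minikube.internal:8443" "$f" || sudo rm -f "$f"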
I0108 21:00:00.744328 124694 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0108 21:00:00.750786 124694 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0108 21:00:00.750804 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0108 21:00:00.994143 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0108 21:00:01.861835 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0108 21:00:02.144772 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0108 21:00:02.193739 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
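The five Run lines above replay individual kubeadm init phases against the updated kubeadm.yaml instead of performing a full init, which is what makes this a restart rather than a recreate. A compact equivalent (the unquoted $phase is intentional, so "certs all" splits into subcommand arguments):

  for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
    sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" \
      kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
  done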
I0108 21:00:02.312980 124694 api_server.go:51] waiting for apiserver process to appear ...
I0108 21:00:02.313046 124694 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 21:00:02.324151 124694 api_server.go:71] duration metric: took 11.177196ms to wait for apiserver process to appear ...
I0108 21:00:02.324188 124694 api_server.go:87] waiting for apiserver healthz status ...
I0108 21:00:02.324232 124694 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0108 21:00:02.329308 124694 api_server.go:278] https://192.168.67.2:8443/healthz returned 200:
ok
I0108 21:00:02.336848 124694 api_server.go:140] control plane version: v1.24.4
W0108 21:00:02.336885 124694 api_server.go:120] api server version match failed: controlPlane = "v1.24.4", expected: "v1.24.6"
I0108 21:00:02.838027 124694 api_server.go:140] control plane version: v1.24.4
W0108 21:00:02.838054 124694 api_server.go:120] api server version match failed: controlPlane = "v1.24.4", expected: "v1.24.6"
I0108 21:00:03.338861 124694 api_server.go:140] control plane version: v1.24.4
W0108 21:00:03.338897 124694 api_server.go:120] api server version match failed: controlPlane = "v1.24.4", expected: "v1.24.6"
I0108 21:00:03.837783 124694 api_server.go:140] control plane version: v1.24.4
W0108 21:00:03.837811 124694 api_server.go:120] api server version match failed: controlPlane = "v1.24.4", expected: "v1.24.6"
I0108 21:00:04.338312 124694 api_server.go:140] control plane version: v1.24.4
W0108 21:00:04.338339 124694 api_server.go:120] api server version match failed: controlPlane = "v1.24.4", expected: "v1.24.6"
W0108 21:00:04.837852 124694 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W0108 21:00:05.337803 124694 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W0108 21:00:05.837782 124694 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W0108 21:00:06.338026 124694 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
I0108 21:00:09.935143 124694 api_server.go:140] control plane version: v1.24.6
I0108 21:00:09.935175 124694 api_server.go:130] duration metric: took 7.610979606s to wait for apiserver health ...
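The wait above has two stages: /healthz must return ok, then /version must report the target release; the connection-refused window at 21:00:04-21:00:06 is the apiserver restarting from the v1.24.4 static pod into the v1.24.6 one. Checked by hand it would look like the following (-k shown only for brevity; the real probe authenticates with the profile's client certificate):

  curl -sk https://192.168.67.2:8443/healthz   # ok
  curl -sk https://192.168.67.2:8443/version   # JSON whose gitVersion settles on v1.24.6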
I0108 21:00:09.935185 124694 cni.go:95] Creating CNI manager for ""
I0108 21:00:09.935193 124694 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I0108 21:00:09.937716 124694 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0108 21:00:09.939281 124694 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0108 21:00:10.021100 124694 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.6/kubectl ...
I0108 21:00:10.021132 124694 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I0108 21:00:10.133101 124694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0108 21:00:11.267907 124694 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.134775053s)
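kindnet is applied here because the docker driver with the containerd runtime ships no default CNI, so minikube installs one before waiting on pods. A hedged sanity check after the apply (the app=kindnet label is assumed from kindnet's upstream manifest, not taken from this log):

  sudo /var/lib/minikube/binaries/v1.24.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    -n kube-system get pods -l app=kindnet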
I0108 21:00:11.267939 124694 system_pods.go:43] waiting for kube-system pods to appear ...
I0108 21:00:11.274594 124694 system_pods.go:59] 6 kube-system pods found
I0108 21:00:11.274625 124694 system_pods.go:61] "coredns-6d4b75cb6d-48vmf" [d43c5f88-44b8-4ab6-bc5b-f2883eda56e2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0108 21:00:11.274637 124694 system_pods.go:61] "etcd-test-preload-205820" [f39e5236-110c-4587-8d2c-7da2d7802adc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0108 21:00:11.274644 124694 system_pods.go:61] "kindnet-mtvg5" [1257f157-44a7-41fe-9d98-48b85ce53a40] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0108 21:00:11.274653 124694 system_pods.go:61] "kube-proxy-wmrz2" [35e9935b-759b-4c18-9d0b-2c0daaab9a1e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0108 21:00:11.274659 124694 system_pods.go:61] "kube-scheduler-test-preload-205820" [e0e1f824-50ae-4a61-b2c6-d7d2bb6f2edc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0108 21:00:11.274664 124694 system_pods.go:61] "storage-provisioner" [bdbd16cd-b53b-4309-ad17-7915a6d7b693] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0108 21:00:11.274669 124694 system_pods.go:74] duration metric: took 6.724913ms to wait for pod list to return data ...
I0108 21:00:11.274676 124694 node_conditions.go:102] verifying NodePressure condition ...
I0108 21:00:11.276970 124694 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0108 21:00:11.276995 124694 node_conditions.go:123] node cpu capacity is 8
I0108 21:00:11.277010 124694 node_conditions.go:105] duration metric: took 2.328282ms to run NodePressure ...
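The NodePressure verification reads capacity straight off the Node object; the figures above (8 CPUs, 304681132Ki ephemeral storage) come from the same fields this standard query prints:

  kubectl get node test-preload-205820 -o jsonpath='{.status.capacity}'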
I0108 21:00:11.277035 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0108 21:00:11.436079 124694 kubeadm.go:763] waiting for restarted kubelet to initialise ...
I0108 21:00:11.439304 124694 kubeadm.go:778] kubelet initialised
I0108 21:00:11.439324 124694 kubeadm.go:779] duration metric: took 3.225451ms waiting for restarted kubelet to initialise ...
I0108 21:00:11.439330 124694 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0108 21:00:11.443291 124694 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-48vmf" in "kube-system" namespace to be "Ready" ...
I0108 21:00:13.452847 124694 pod_ready.go:102] pod "coredns-6d4b75cb6d-48vmf" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:15.453183 124694 pod_ready.go:102] pod "coredns-6d4b75cb6d-48vmf" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:17.953269 124694 pod_ready.go:92] pod "coredns-6d4b75cb6d-48vmf" in "kube-system" namespace has status "Ready":"True"
I0108 21:00:17.953294 124694 pod_ready.go:81] duration metric: took 6.509981854s waiting for pod "coredns-6d4b75cb6d-48vmf" in "kube-system" namespace to be "Ready" ...
I0108 21:00:17.953304 124694 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-205820" in "kube-system" namespace to be "Ready" ...
I0108 21:00:19.962548 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:21.963216 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:23.963314 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:26.462627 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:28.462965 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:30.962959 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:32.963068 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:35.463009 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:37.962454 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:40.462881 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:42.963385 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:45.462486 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:47.962468 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:49.962746 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:51.963178 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:54.463217 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:56.963323 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:59.463092 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:01.963156 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:04.463567 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:06.464930 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:08.962935 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:11.463300 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:13.962969 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:16.463128 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:18.963199 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:20.963826 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:23.462743 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:25.463158 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:27.962188 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:29.963079 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:32.464217 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:34.962854 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:37.462215 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:39.462584 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:41.462699 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:43.462915 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:45.963307 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:48.463544 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:50.963045 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:52.963170 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:55.462700 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:57.463256 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:59.962706 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:01.962779 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:03.963173 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:06.463371 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:08.463437 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:10.465071 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:12.963206 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:15.462589 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:17.462845 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:19.962938 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:21.963353 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:24.463222 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:26.463680 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:28.962594 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:30.962697 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:32.963185 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:35.462477 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:37.463216 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:39.962881 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:42.462539 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:44.462864 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:46.462968 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:48.962577 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:50.962760 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:53.464211 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:55.963075 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:58.463348 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:00.962702 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:02.962942 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:04.963134 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:07.462937 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:09.962917 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:12.462863 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:14.962823 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:17.462424 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:19.462845 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:21.962750 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:24.462946 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:26.463390 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:28.962923 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:30.963325 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:33.462969 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:35.963094 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:38.462979 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:40.963186 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:43.462328 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:45.462741 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:47.962483 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:49.963279 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:51.963334 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:54.462958 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:56.963433 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:58.963562 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:04:00.963753 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:04:03.463621 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:04:05.962769 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:04:07.962891 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:04:09.963338 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:04:12.462686 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:04:14.463369 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:04:16.963058 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:04:17.957364 124694 pod_ready.go:81] duration metric: took 4m0.004045666s waiting for pod "etcd-test-preload-205820" in "kube-system" namespace to be "Ready" ...
E0108 21:04:17.957391 124694 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "etcd-test-preload-205820" in "kube-system" namespace to be "Ready" (will not retry!)
I0108 21:04:17.957419 124694 pod_ready.go:38] duration metric: took 4m6.518080998s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0108 21:04:17.957445 124694 kubeadm.go:631] restartCluster took 4m17.671337074s
W0108 21:04:17.957589 124694 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
I0108 21:04:17.957621 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0108 21:04:19.626459 124694 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.668819722s)
I0108 21:04:19.626516 124694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0108 21:04:19.635943 124694 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0108 21:04:19.642808 124694 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0108 21:04:19.642862 124694 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0108 21:04:19.649319 124694 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0108 21:04:19.649357 124694 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0108 21:04:19.686509 124694 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
I0108 21:04:19.686580 124694 kubeadm.go:317] [preflight] Running pre-flight checks
I0108 21:04:19.714334 124694 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
I0108 21:04:19.714410 124694 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
I0108 21:04:19.714442 124694 kubeadm.go:317] OS: Linux
I0108 21:04:19.714480 124694 kubeadm.go:317] CGROUPS_CPU: enabled
I0108 21:04:19.714520 124694 kubeadm.go:317] CGROUPS_CPUACCT: enabled
I0108 21:04:19.714613 124694 kubeadm.go:317] CGROUPS_CPUSET: enabled
I0108 21:04:19.714688 124694 kubeadm.go:317] CGROUPS_DEVICES: enabled
I0108 21:04:19.714729 124694 kubeadm.go:317] CGROUPS_FREEZER: enabled
I0108 21:04:19.714777 124694 kubeadm.go:317] CGROUPS_MEMORY: enabled
I0108 21:04:19.714821 124694 kubeadm.go:317] CGROUPS_PIDS: enabled
I0108 21:04:19.714864 124694 kubeadm.go:317] CGROUPS_HUGETLB: enabled
I0108 21:04:19.714905 124694 kubeadm.go:317] CGROUPS_BLKIO: enabled
I0108 21:04:19.795815 124694 kubeadm.go:317] W0108 21:04:19.681686 6711 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
I0108 21:04:19.796049 124694 kubeadm.go:317] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
I0108 21:04:19.796184 124694 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0108 21:04:19.796272 124694 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
I0108 21:04:19.796332 124694 kubeadm.go:317] [ERROR Port-2379]: Port 2379 is in use
I0108 21:04:19.796381 124694 kubeadm.go:317] [ERROR Port-2380]: Port 2380 is in use
I0108 21:04:19.796489 124694 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
I0108 21:04:19.796595 124694 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
W0108 21:04:19.796778 124694 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1025-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W0108 21:04:19.681686 6711 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
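This first `kubeadm init` attempt fails preflight because etcd's client and peer ports (2379/2380) are still bound even though `kubeadm reset --force` completed moments earlier; the immediate retry below at 21:04:20 trips the same [ERROR Port-2379]/[ERROR Port-2380] checks. A quick way to see what is still listening at that point, as a sketch assuming the kicbase container is reachable as test-preload-205820 and ships ss and crictl (consistent with the commands run elsewhere in this log):

  # from the CI host: list listeners on etcd's ports inside the container
  docker exec test-preload-205820 sudo sh -c "ss -ltnp | grep -E ':(2379|2380)'"
  # cross-check against CRI: is an etcd container still tracked?
  docker exec test-preload-205820 sudo crictl ps -a --name=etcd

If ss reports a PID that crictl no longer tracks (the crictl listing at 21:04:20.314 further down finds none), the old etcd process outlived the reset and is what keeps the ports busy.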
I0108 21:04:19.796820 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0108 21:04:20.125925 124694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0108 21:04:20.135276 124694 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0108 21:04:20.135332 124694 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0108 21:04:20.142002 124694 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0108 21:04:20.142045 124694 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0108 21:04:20.178099 124694 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
I0108 21:04:20.178220 124694 kubeadm.go:317] [preflight] Running pre-flight checks
I0108 21:04:20.203461 124694 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
I0108 21:04:20.203557 124694 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
I0108 21:04:20.203613 124694 kubeadm.go:317] OS: Linux
I0108 21:04:20.203661 124694 kubeadm.go:317] CGROUPS_CPU: enabled
I0108 21:04:20.203724 124694 kubeadm.go:317] CGROUPS_CPUACCT: enabled
I0108 21:04:20.203781 124694 kubeadm.go:317] CGROUPS_CPUSET: enabled
I0108 21:04:20.203869 124694 kubeadm.go:317] CGROUPS_DEVICES: enabled
I0108 21:04:20.203928 124694 kubeadm.go:317] CGROUPS_FREEZER: enabled
I0108 21:04:20.203973 124694 kubeadm.go:317] CGROUPS_MEMORY: enabled
I0108 21:04:20.204056 124694 kubeadm.go:317] CGROUPS_PIDS: enabled
I0108 21:04:20.204123 124694 kubeadm.go:317] CGROUPS_HUGETLB: enabled
I0108 21:04:20.204198 124694 kubeadm.go:317] CGROUPS_BLKIO: enabled
I0108 21:04:20.268181 124694 kubeadm.go:317] W0108 21:04:20.173147 6979 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
I0108 21:04:20.268365 124694 kubeadm.go:317] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
I0108 21:04:20.268449 124694 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0108 21:04:20.268528 124694 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
I0108 21:04:20.268566 124694 kubeadm.go:317] [ERROR Port-2379]: Port 2379 is in use
I0108 21:04:20.268640 124694 kubeadm.go:317] [ERROR Port-2380]: Port 2380 is in use
I0108 21:04:20.268767 124694 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
I0108 21:04:20.268860 124694 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
I0108 21:04:20.268932 124694 kubeadm.go:398] StartCluster complete in 4m20.046785929s
I0108 21:04:20.268974 124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0108 21:04:20.269027 124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0108 21:04:20.291757 124694 cri.go:87] found id: ""
I0108 21:04:20.291784 124694 logs.go:274] 0 containers: []
W0108 21:04:20.291794 124694 logs.go:276] No container was found matching "kube-apiserver"
I0108 21:04:20.291800 124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0108 21:04:20.291843 124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0108 21:04:20.314092 124694 cri.go:87] found id: ""
I0108 21:04:20.314115 124694 logs.go:274] 0 containers: []
W0108 21:04:20.314121 124694 logs.go:276] No container was found matching "etcd"
I0108 21:04:20.314127 124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0108 21:04:20.314165 124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0108 21:04:20.336438 124694 cri.go:87] found id: ""
I0108 21:04:20.336466 124694 logs.go:274] 0 containers: []
W0108 21:04:20.336476 124694 logs.go:276] No container was found matching "coredns"
I0108 21:04:20.336485 124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0108 21:04:20.336531 124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0108 21:04:20.360386 124694 cri.go:87] found id: ""
I0108 21:04:20.360419 124694 logs.go:274] 0 containers: []
W0108 21:04:20.360428 124694 logs.go:276] No container was found matching "kube-scheduler"
I0108 21:04:20.360436 124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0108 21:04:20.360477 124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0108 21:04:20.384216 124694 cri.go:87] found id: ""
I0108 21:04:20.384244 124694 logs.go:274] 0 containers: []
W0108 21:04:20.384251 124694 logs.go:276] No container was found matching "kube-proxy"
I0108 21:04:20.384259 124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0108 21:04:20.384307 124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0108 21:04:20.407359 124694 cri.go:87] found id: ""
I0108 21:04:20.407385 124694 logs.go:274] 0 containers: []
W0108 21:04:20.407391 124694 logs.go:276] No container was found matching "kubernetes-dashboard"
I0108 21:04:20.407397 124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0108 21:04:20.407446 124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0108 21:04:20.429513 124694 cri.go:87] found id: ""
I0108 21:04:20.429538 124694 logs.go:274] 0 containers: []
W0108 21:04:20.429547 124694 logs.go:276] No container was found matching "storage-provisioner"
I0108 21:04:20.429554 124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0108 21:04:20.429592 124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0108 21:04:20.452750 124694 cri.go:87] found id: ""
I0108 21:04:20.452771 124694 logs.go:274] 0 containers: []
W0108 21:04:20.452777 124694 logs.go:276] No container was found matching "kube-controller-manager"
I0108 21:04:20.452786 124694 logs.go:123] Gathering logs for kubelet ...
I0108 21:04:20.452797 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0108 21:04:20.510605 124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.937893 4359 projected.go:192] Error preparing data for projected volume kube-api-access-svv2t for pod kube-system/kube-proxy-wmrz2: failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
W0108 21:04:20.511028 124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.937978 4359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35e9935b-759b-4c18-9d0b-2c0daaab9a1e-kube-api-access-svv2t podName:35e9935b-759b-4c18-9d0b-2c0daaab9a1e nodeName:}" failed. No retries permitted until 2023-01-08 21:00:11.937956077 +0000 UTC m=+9.792765068 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-svv2t" (UniqueName: "kubernetes.io/projected/35e9935b-759b-4c18-9d0b-2c0daaab9a1e-kube-api-access-svv2t") pod "kube-proxy-wmrz2" (UID: "35e9935b-759b-4c18-9d0b-2c0daaab9a1e") : failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
W0108 21:04:20.511172 124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: W0108 21:00:09.938038 4359 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
W0108 21:04:20.511334 124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938056 4359 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
W0108 21:04:20.511496 124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: W0108 21:00:09.938110 4359 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
W0108 21:04:20.511664 124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938117 4359 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
W0108 21:04:20.511857 124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938151 4359 projected.go:192] Error preparing data for projected volume kube-api-access-wvwgn for pod kube-system/storage-provisioner: failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
W0108 21:04:20.512266 124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938177 4359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdbd16cd-b53b-4309-ad17-7915a6d7b693-kube-api-access-wvwgn podName:bdbd16cd-b53b-4309-ad17-7915a6d7b693 nodeName:}" failed. No retries permitted until 2023-01-08 21:00:10.938168618 +0000 UTC m=+8.792977602 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-wvwgn" (UniqueName: "kubernetes.io/projected/bdbd16cd-b53b-4309-ad17-7915a6d7b693-kube-api-access-wvwgn") pod "storage-provisioner" (UID: "bdbd16cd-b53b-4309-ad17-7915a6d7b693") : failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
W0108 21:04:20.512442 124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938217 4359 projected.go:192] Error preparing data for projected volume kube-api-access-s5nz9 for pod kube-system/kindnet-mtvg5: failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
W0108 21:04:20.512847 124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938249 4359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1257f157-44a7-41fe-9d98-48b85ce53a40-kube-api-access-s5nz9 podName:1257f157-44a7-41fe-9d98-48b85ce53a40 nodeName:}" failed. No retries permitted until 2023-01-08 21:00:10.938238341 +0000 UTC m=+8.793047329 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s5nz9" (UniqueName: "kubernetes.io/projected/1257f157-44a7-41fe-9d98-48b85ce53a40-kube-api-access-s5nz9") pod "kindnet-mtvg5" (UID: "1257f157-44a7-41fe-9d98-48b85ce53a40") : failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
W0108 21:04:20.513031 124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938309 4359 projected.go:192] Error preparing data for projected volume kube-api-access-9t8jr for pod kube-system/coredns-6d4b75cb6d-48vmf: failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
W0108 21:04:20.513475 124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938332 4359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d43c5f88-44b8-4ab6-bc5b-f2883eda56e2-kube-api-access-9t8jr podName:d43c5f88-44b8-4ab6-bc5b-f2883eda56e2 nodeName:}" failed. No retries permitted until 2023-01-08 21:00:10.938325487 +0000 UTC m=+8.793134472 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-9t8jr" (UniqueName: "kubernetes.io/projected/d43c5f88-44b8-4ab6-bc5b-f2883eda56e2-kube-api-access-9t8jr") pod "coredns-6d4b75cb6d-48vmf" (UID: "d43c5f88-44b8-4ab6-bc5b-f2883eda56e2") : failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
W0108 21:04:20.513628 124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: W0108 21:00:09.938363 4359 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
W0108 21:04:20.513802 124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938372 4359 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
I0108 21:04:20.534040 124694 logs.go:123] Gathering logs for dmesg ...
I0108 21:04:20.534063 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0108 21:04:20.547468 124694 logs.go:123] Gathering logs for describe nodes ...
I0108 21:04:20.547515 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0108 21:04:20.836897 124694 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0108 21:04:20.836920 124694 logs.go:123] Gathering logs for containerd ...
I0108 21:04:20.836933 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0108 21:04:20.891961 124694 logs.go:123] Gathering logs for container status ...
I0108 21:04:20.891999 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0108 21:04:20.917568 124694 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1025-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W0108 21:04:20.173147 6979 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W0108 21:04:20.917600 124694 out.go:239] *
W0108 21:04:20.917764 124694 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1025-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W0108 21:04:20.173147 6979 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W0108 21:04:20.917788 124694 out.go:239] *
W0108 21:04:20.918668 124694 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0108 21:04:20.921286 124694 out.go:177] X Problems detected in kubelet:
I0108 21:04:20.922717 124694 out.go:177] Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.937893 4359 projected.go:192] Error preparing data for projected volume kube-api-access-svv2t for pod kube-system/kube-proxy-wmrz2: failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
I0108 21:04:20.925364 124694 out.go:177] Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.937978 4359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35e9935b-759b-4c18-9d0b-2c0daaab9a1e-kube-api-access-svv2t podName:35e9935b-759b-4c18-9d0b-2c0daaab9a1e nodeName:}" failed. No retries permitted until 2023-01-08 21:00:11.937956077 +0000 UTC m=+9.792765068 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-svv2t" (UniqueName: "kubernetes.io/projected/35e9935b-759b-4c18-9d0b-2c0daaab9a1e-kube-api-access-svv2t") pod "kube-proxy-wmrz2" (UID: "35e9935b-759b-4c18-9d0b-2c0daaab9a1e") : failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
I0108 21:04:20.926971 124694 out.go:177] Jan 08 21:00:10 test-preload-205820 kubelet[4359]: W0108 21:00:09.938038 4359 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
I0108 21:04:20.929431 124694 out.go:177]
W0108 21:04:20.930937 124694 out.go:239] X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1025-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W0108 21:04:20.173147 6979 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W0108 21:04:20.931018 124694 out.go:239] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
W0108 21:04:20.931068 124694 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/5484
I0108 21:04:20.932735 124694 out.go:177]
** /stderr **
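One caveat on the suggestion in the output above: `lsof -p` filters by process ID, not by port, so it cannot find the conflicting listener. To identify the process actually holding an etcd port, something like the following should work (hedged: it assumes lsof or ss is installed wherever you run it; inside the kicbase container, wrap the command in docker exec as sketched earlier):

  sudo lsof -i :2379           # show the process bound to etcd's client port
  sudo ss -ltnp | grep :2380   # same idea for the peer port, via ss

Once the PID is known, killing it (or deleting and recreating the minikube container) clears the Port-2379/Port-2380 preflight errors.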
preload_test.go:69: out/minikube-linux-amd64 start -p test-preload-205820 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.6 failed: exit status 81
panic.go:522: *** TestPreload FAILED at 2023-01-08 21:04:20.977007524 +0000 UTC m=+2223.120179816
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect test-preload-205820
helpers_test.go:235: (dbg) docker inspect test-preload-205820:
-- stdout --
[
{
"Id": "614931b1d191e00f7021dc1adb912c496e5d681efad786411f6c7e944eb761ce",
"Created": "2023-01-08T20:58:21.480695226Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 121415,
"ExitCode": 0,
"Error": "",
"StartedAt": "2023-01-08T20:58:22.055601402Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
"ResolvConfPath": "/var/lib/docker/containers/614931b1d191e00f7021dc1adb912c496e5d681efad786411f6c7e944eb761ce/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/614931b1d191e00f7021dc1adb912c496e5d681efad786411f6c7e944eb761ce/hostname",
"HostsPath": "/var/lib/docker/containers/614931b1d191e00f7021dc1adb912c496e5d681efad786411f6c7e944eb761ce/hosts",
"LogPath": "/var/lib/docker/containers/614931b1d191e00f7021dc1adb912c496e5d681efad786411f6c7e944eb761ce/614931b1d191e00f7021dc1adb912c496e5d681efad786411f6c7e944eb761ce-json.log",
"Name": "/test-preload-205820",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"test-preload-205820:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "test-preload-205820",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/ccd0153358d263b743a423c07ff20c700def661835b86fc81274e71554ff3780-init/diff:/var/lib/docker/overlay2/08b33854295261bb38a0074055b3dd7f2f489d35ea70ee5a594182ad1dc0fd2b/diff:/var/lib/docker/overlay2/5abe192e44c49d2a25560e4c1ae4a6ba5ec58100f8e1869d306a28b614caf414/diff:/var/lib/docker/overlay2/156a32df2dac676d7cd6558d5ad13be5054eb2fddc8d1b10137bf34bd6b1ddf7/diff:/var/lib/docker/overlay2/f3a5b8630133b5cdc48bbdb98f666efad221386dbd08f5fc096c7ba6d3f31667/diff:/var/lib/docker/overlay2/4d4ccd218e479874818175ec6c4e28f06f0084eed74b9a83af5a96e8bf41e6c7/diff:/var/lib/docker/overlay2/ab8a3247b29cb45a0024dbc2477a50b042f03e39064c09e23d21a2b5f331013c/diff:/var/lib/docker/overlay2/cd6161fa5d366910a9eb537894f374c49b0ba256c81a9c1ad3d4f4786d730668/diff:/var/lib/docker/overlay2/a26d88db409214dee405ed385f40b0d58e37ef670c26e65c1132eb27e3bb984d/diff:/var/lib/docker/overlay2/f76d833c20bf46fe7aec412f931039cdeb58b129552b97228c55fada8b05e63a/diff:/var/lib/docker/overlay2/584beb
80220462b8bc5a6c651735d55544ebef775c71b1a91b92235049a7e9da/diff:/var/lib/docker/overlay2/5209ebd508837f05291d5f22eff307db47f9c94691dd096cdc6ff6446db45461/diff:/var/lib/docker/overlay2/968fc302c0737742d4698bb0026b5cfc50c4b2934930db492c97b346fca2bd67/diff:/var/lib/docker/overlay2/b469d09ff82a75fb0f6dd04264a04b21d3d113c8577afc6dbc6a7dafe4022179/diff:/var/lib/docker/overlay2/2869116cf2f02bdc6170bde623a11022eba8e5c0f84630c9a0fd0a86483d0ed2/diff:/var/lib/docker/overlay2/ad4794ae637b0ce306963007b35562110fba050d9e3cd6196830448c749d7cc6/diff:/var/lib/docker/overlay2/a53b191897dfffb12ad2948cad16d668a0f7c2be5e3e6fd2556fe22d8d608879/diff:/var/lib/docker/overlay2/bbc7772821406640e29f442eb8752d92d7b72e8314035d6cf94255976b90642e/diff:/var/lib/docker/overlay2/5e3ac5600af83e3f6f756895fbd7a66fa97e799f659541508353078d057fd89b/diff:/var/lib/docker/overlay2/79c23e7084aa1de637cb09a7c07679d6c8d4ff6c6e7384f7f15cb549f8186fd6/diff:/var/lib/docker/overlay2/86ef1d4759376f1edb9cf43004ae59ba61200d79acea357b872de5247071531a/diff:/var/lib/d
ocker/overlay2/5c948a7514e051c2a3b4600944d475a674404c1f58b1bd7fd4d7b764be35c3e7/diff:/var/lib/docker/overlay2/71b25d81823337d776e0202f623b2f822a2ebba0e8f935e3211da145bbdac22d/diff:/var/lib/docker/overlay2/4261f26f8c837e7f8f07ba0cd2aca7274aa96561ac0bd104af28d44c15678f8d/diff:/var/lib/docker/overlay2/f2a71b21156c3ad367bb9fe8bde3e75df6681f9e505d34d8e1b635e50aa0e29b/diff:/var/lib/docker/overlay2/9ef1fde443b60bf4cda7ef2a51adcacef4383280f30eb6ee14e23959a5395e18/diff:/var/lib/docker/overlay2/aa727d5a432452b3e1ed2fc994db0bf2d5a47af89bc768a1e2b633baa8319025/diff:/var/lib/docker/overlay2/dc9499b738334b5626de1363086d853c4e6c2c5b2d51f58e8e1ed8346cc39f45/diff:/var/lib/docker/overlay2/03cc962675627b454b9857b5ffcac439143653d35cd42ad886571fbecd955c6c/diff:/var/lib/docker/overlay2/32064543ecb8bae609fda6b220233d7b73f4e601bd320e1d389451fb120c2459/diff:/var/lib/docker/overlay2/a0c00493cc3fdbdaa87f61faf83c8fcedb9ee4d5b24ecc35eceba0429b6dcc88/diff:/var/lib/docker/overlay2/18cbe77f8dab8144889e1deb46a8795fd207afbdfd9ea3fdeb526b72d7e
4a4cb/diff:/var/lib/docker/overlay2/dca4f5d0bca28bc75fa764afd39d07871f051c2d362fc2850ca957b975f75555/diff:/var/lib/docker/overlay2/cd071d8f661b9ff9c6d861fce06842188fda3d307062488b93944a97883f4a67/diff:/var/lib/docker/overlay2/5a0c166a1e11d0bb9f7de4a882be960f70ac972e91b3ab8fd83fc6eec2bf1d5e/diff:/var/lib/docker/overlay2/43eb3ae44d3bf3e6cbf5a66accacb1247ef210213c56a67097949d06794f4bbc/diff:/var/lib/docker/overlay2/b60cb5b525f4aed6b27cf2e451596e134d1185254f3798d5a91357814552e273/diff:/var/lib/docker/overlay2/e5c8c85e78fedb1981436f368ad70bdfe38d240dddceac3d103e1d09bc252cef/diff:/var/lib/docker/overlay2/7bf66ab7d17eda3c6760fe43588afa33bb56279f5a7202de9cc98b4711f0da5b/diff:/var/lib/docker/overlay2/d5dbd6e8a20b8681e7b55b57c382c8c9029da8496cc9e0f4d1eb3b224583535e/diff:/var/lib/docker/overlay2/de3ca9b514aa5071c0fd39d0d80fa080bce6e82e872747ec916ef7bf379c0c2a/diff:/var/lib/docker/overlay2/80bff03df7f2a6639d98a2c99d1cc55a36dc9110501452bfd7bce3bce496541e/diff:/var/lib/docker/overlay2/471bea5a4732f210c3fcf87a02643b615a96f6
39d01ac2c3e71f7886b23733a5/diff:/var/lib/docker/overlay2/29dd84326e3ef52c85008a60a7ff2225fd18f1d51ec3e60f143a0d93e442469d/diff:/var/lib/docker/overlay2/05d035eba3e40a396836d9917064869afd7ec70920f526f8903dc8f5a9f2dce3/diff",
"MergedDir": "/var/lib/docker/overlay2/ccd0153358d263b743a423c07ff20c700def661835b86fc81274e71554ff3780/merged",
"UpperDir": "/var/lib/docker/overlay2/ccd0153358d263b743a423c07ff20c700def661835b86fc81274e71554ff3780/diff",
"WorkDir": "/var/lib/docker/overlay2/ccd0153358d263b743a423c07ff20c700def661835b86fc81274e71554ff3780/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "test-preload-205820",
"Source": "/var/lib/docker/volumes/test-preload-205820/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "test-preload-205820",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "test-preload-205820",
"name.minikube.sigs.k8s.io": "test-preload-205820",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "746f7405522a8c28f5e57d7a7fda75b53b92a8763f8f128a0e9615d82bed0a8b",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32892"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32891"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32888"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32890"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32889"
}
]
},
"SandboxKey": "/var/run/docker/netns/746f7405522a",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"test-preload-205820": {
"IPAMConfig": {
"IPv4Address": "192.168.67.2"
},
"Links": null,
"Aliases": [
"614931b1d191",
"test-preload-205820"
],
"NetworkID": "6987da7ab2da74011fe53784d265ab03133276db3c92cc5963e6695e7e04136b",
"EndpointID": "4468b674e4ce5b5362e41cac0caccb8c7bd01864fa22309b582182119ff6357a",
"Gateway": "192.168.67.1",
"IPAddress": "192.168.67.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:43:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
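The state and port mappings buried in the dump above can also be read directly with docker's format templates, which is handier when scanning many post-mortems; a minimal sketch against this profile:

  # container state and init PID only
  docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' test-preload-205820
  # host port mapped to the apiserver's 8443/tcp (32889 in the dump above)
  docker port test-preload-205820 8443/tcp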
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-205820 -n test-preload-205820
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-205820 -n test-preload-205820: exit status 2 (344.455697ms)
-- stdout --
Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p test-preload-205820 logs -n 25
helpers_test.go:252: TestPreload logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| ssh | multinode-205018 ssh -n | multinode-205018 | jenkins | v1.28.0 | 08 Jan 23 20:52 UTC | 08 Jan 23 20:52 UTC |
| | multinode-205018-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-205018 cp multinode-205018-m03:/home/docker/cp-test.txt | multinode-205018 | jenkins | v1.28.0 | 08 Jan 23 20:52 UTC | 08 Jan 23 20:52 UTC |
| | multinode-205018:/home/docker/cp-test_multinode-205018-m03_multinode-205018.txt | | | | | |
| ssh | multinode-205018 ssh -n | multinode-205018 | jenkins | v1.28.0 | 08 Jan 23 20:52 UTC | 08 Jan 23 20:52 UTC |
| | multinode-205018-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-205018 ssh -n multinode-205018 sudo cat | multinode-205018 | jenkins | v1.28.0 | 08 Jan 23 20:52 UTC | 08 Jan 23 20:52 UTC |
| | /home/docker/cp-test_multinode-205018-m03_multinode-205018.txt | | | | | |
| cp | multinode-205018 cp multinode-205018-m03:/home/docker/cp-test.txt | multinode-205018 | jenkins | v1.28.0 | 08 Jan 23 20:52 UTC | 08 Jan 23 20:52 UTC |
| | multinode-205018-m02:/home/docker/cp-test_multinode-205018-m03_multinode-205018-m02.txt | | | | | |
| ssh | multinode-205018 ssh -n | multinode-205018 | jenkins | v1.28.0 | 08 Jan 23 20:52 UTC | 08 Jan 23 20:52 UTC |
| | multinode-205018-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-205018 ssh -n multinode-205018-m02 sudo cat | multinode-205018 | jenkins | v1.28.0 | 08 Jan 23 20:52 UTC | 08 Jan 23 20:52 UTC |
| | /home/docker/cp-test_multinode-205018-m03_multinode-205018-m02.txt | | | | | |
| node | multinode-205018 node stop m03 | multinode-205018 | jenkins | v1.28.0 | 08 Jan 23 20:52 UTC | 08 Jan 23 20:52 UTC |
| node | multinode-205018 node start | multinode-205018 | jenkins | v1.28.0 | 08 Jan 23 20:52 UTC | 08 Jan 23 20:53 UTC |
| | m03 --alsologtostderr | | | | | |
| node | list -p multinode-205018 | multinode-205018 | jenkins | v1.28.0 | 08 Jan 23 20:53 UTC | |
| stop | -p multinode-205018 | multinode-205018 | jenkins | v1.28.0 | 08 Jan 23 20:53 UTC | 08 Jan 23 20:53 UTC |
| start | -p multinode-205018 | multinode-205018 | jenkins | v1.28.0 | 08 Jan 23 20:53 UTC | 08 Jan 23 20:55 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| node | list -p multinode-205018 | multinode-205018 | jenkins | v1.28.0 | 08 Jan 23 20:55 UTC | |
| node | multinode-205018 node delete | multinode-205018 | jenkins | v1.28.0 | 08 Jan 23 20:55 UTC | 08 Jan 23 20:55 UTC |
| | m03 | | | | | |
| stop | multinode-205018 stop | multinode-205018 | jenkins | v1.28.0 | 08 Jan 23 20:55 UTC | 08 Jan 23 20:56 UTC |
| start | -p multinode-205018 | multinode-205018 | jenkins | v1.28.0 | 08 Jan 23 20:56 UTC | 08 Jan 23 20:57 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| node | list -p multinode-205018 | multinode-205018 | jenkins | v1.28.0 | 08 Jan 23 20:57 UTC | |
| start | -p multinode-205018-m02 | multinode-205018-m02 | jenkins | v1.28.0 | 08 Jan 23 20:57 UTC | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p multinode-205018-m03 | multinode-205018-m03 | jenkins | v1.28.0 | 08 Jan 23 20:57 UTC | 08 Jan 23 20:58 UTC |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| node | add -p multinode-205018 | multinode-205018 | jenkins | v1.28.0 | 08 Jan 23 20:58 UTC | |
| delete | -p multinode-205018-m03 | multinode-205018-m03 | jenkins | v1.28.0 | 08 Jan 23 20:58 UTC | 08 Jan 23 20:58 UTC |
| delete | -p multinode-205018 | multinode-205018 | jenkins | v1.28.0 | 08 Jan 23 20:58 UTC | 08 Jan 23 20:58 UTC |
| start | -p test-preload-205820 | test-preload-205820 | jenkins | v1.28.0 | 08 Jan 23 20:58 UTC | 08 Jan 23 20:59 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.4 | | | | | |
| ssh | -p test-preload-205820 | test-preload-205820 | jenkins | v1.28.0 | 08 Jan 23 20:59 UTC | 08 Jan 23 20:59 UTC |
| | -- sudo crictl pull | | | | | |
| | gcr.io/k8s-minikube/busybox | | | | | |
| start | -p test-preload-205820 | test-preload-205820 | jenkins | v1.28.0 | 08 Jan 23 20:59 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --wait=true --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.6 | | | | | |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/01/08 20:59:15
Running on machine: ubuntu-20-agent-10
Binary: Built with gc go1.19.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0108 20:59:15.922988 124694 out.go:296] Setting OutFile to fd 1 ...
I0108 20:59:15.923190 124694 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:59:15.923199 124694 out.go:309] Setting ErrFile to fd 2...
I0108 20:59:15.923206 124694 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:59:15.923344 124694 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3617/.minikube/bin
I0108 20:59:15.923946 124694 out.go:303] Setting JSON to false
I0108 20:59:15.925106 124694 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2505,"bootTime":1673209051,"procs":425,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1025-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0108 20:59:15.925171 124694 start.go:135] virtualization: kvm guest
I0108 20:59:15.927955 124694 out.go:177] * [test-preload-205820] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
I0108 20:59:15.929374 124694 notify.go:220] Checking for updates...
I0108 20:59:15.929404 124694 out.go:177] - MINIKUBE_LOCATION=15565
I0108 20:59:15.931238 124694 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0108 20:59:15.932840 124694 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15565-3617/kubeconfig
I0108 20:59:15.935379 124694 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3617/.minikube
I0108 20:59:15.937020 124694 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0108 20:59:15.939039 124694 config.go:180] Loaded profile config "test-preload-205820": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I0108 20:59:15.941039 124694 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
I0108 20:59:15.942409 124694 driver.go:365] Setting default libvirt URI to qemu:///system
I0108 20:59:15.970300 124694 docker.go:137] docker version: linux-20.10.22
I0108 20:59:15.970401 124694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0108 20:59:16.062763 124694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2023-01-08 20:59:15.989379004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0108 20:59:16.062862 124694 docker.go:254] overlay module found
I0108 20:59:16.065073 124694 out.go:177] * Using the docker driver based on existing profile
I0108 20:59:16.066398 124694 start.go:294] selected driver: docker
I0108 20:59:16.066409 124694 start.go:838] validating driver "docker" against &{Name:test-preload-205820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-205820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0108 20:59:16.066519 124694 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0108 20:59:16.067271 124694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0108 20:59:16.159790 124694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2023-01-08 20:59:16.087078013 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1025-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.14.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0108 20:59:16.160075 124694 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0108 20:59:16.160096 124694 cni.go:95] Creating CNI manager for ""
I0108 20:59:16.160103 124694 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I0108 20:59:16.160116 124694 start_flags.go:317] config:
{Name:test-preload-205820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-205820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0108 20:59:16.162204 124694 out.go:177] * Starting control plane node test-preload-205820 in cluster test-preload-205820
I0108 20:59:16.165845 124694 cache.go:120] Beginning downloading kic base image for docker with containerd
I0108 20:59:16.167544 124694 out.go:177] * Pulling base image ...
I0108 20:59:16.169023 124694 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I0108 20:59:16.169127 124694 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
I0108 20:59:16.191569 124694 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
I0108 20:59:16.191596 124694 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
I0108 20:59:16.488573 124694 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
I0108 20:59:16.488598 124694 cache.go:57] Caching tarball of preloaded images
I0108 20:59:16.488917 124694 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I0108 20:59:16.491216 124694 out.go:177] * Downloading Kubernetes v1.24.6 preload ...
I0108 20:59:16.492629 124694 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I0108 20:59:17.039968 124694 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4?checksum=md5:0de094b674a9198bc47721c3b23603d5 -> /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
I0108 20:59:32.016227 124694 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I0108 20:59:32.016331 124694 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I0108 20:59:32.888834 124694 cache.go:60] Finished verifying existence of preloaded tar for v1.24.6 on containerd
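The download URL above advertises the preload's digest (checksum=md5:0de094b674a9198bc47721c3b23603d5), so a suspect tarball can also be checked by hand; a sketch:

  md5sum /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
  # expected: 0de094b674a9198bc47721c3b23603d5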
I0108 20:59:32.888992 124694 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/config.json ...
I0108 20:59:32.889209 124694 cache.go:193] Successfully downloaded all kic artifacts
I0108 20:59:32.889259 124694 start.go:364] acquiring machines lock for test-preload-205820: {Name:mk27a98eef575d3995d47e9b2c3065d636302b25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0108 20:59:32.889363 124694 start.go:368] acquired machines lock for "test-preload-205820" in 75.02µs
I0108 20:59:32.889385 124694 start.go:96] Skipping create...Using existing machine configuration
I0108 20:59:32.889395 124694 fix.go:55] fixHost starting:
I0108 20:59:32.889636 124694 cli_runner.go:164] Run: docker container inspect test-preload-205820 --format={{.State.Status}}
I0108 20:59:32.913783 124694 fix.go:103] recreateIfNeeded on test-preload-205820: state=Running err=<nil>
W0108 20:59:32.913829 124694 fix.go:129] unexpected machine state, will restart: <nil>
I0108 20:59:32.917800 124694 out.go:177] * Updating the running docker "test-preload-205820" container ...
I0108 20:59:32.919462 124694 machine.go:88] provisioning docker machine ...
I0108 20:59:32.919513 124694 ubuntu.go:169] provisioning hostname "test-preload-205820"
I0108 20:59:32.919568 124694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-205820
I0108 20:59:32.942125 124694 main.go:134] libmachine: Using SSH client type: native
I0108 20:59:32.942374 124694 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil> [] 0s} 127.0.0.1 32892 <nil> <nil>}
I0108 20:59:32.942400 124694 main.go:134] libmachine: About to run SSH command:
sudo hostname test-preload-205820 && echo "test-preload-205820" | sudo tee /etc/hostname
I0108 20:59:33.063328 124694 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-205820
I0108 20:59:33.063392 124694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-205820
I0108 20:59:33.086668 124694 main.go:134] libmachine: Using SSH client type: native
I0108 20:59:33.086810 124694 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil> [] 0s} 127.0.0.1 32892 <nil> <nil>}
I0108 20:59:33.086827 124694 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\stest-preload-205820' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-205820/g' /etc/hosts;
else
echo '127.0.1.1 test-preload-205820' | sudo tee -a /etc/hosts;
fi
fi
I0108 20:59:33.203200 124694 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0108 20:59:33.203231 124694 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3617/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3617/.minikube}
I0108 20:59:33.203257 124694 ubuntu.go:177] setting up certificates
I0108 20:59:33.203273 124694 provision.go:83] configureAuth start
I0108 20:59:33.203326 124694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-205820
I0108 20:59:33.226487 124694 provision.go:138] copyHostCerts
I0108 20:59:33.226543 124694 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem, removing ...
I0108 20:59:33.226550 124694 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem
I0108 20:59:33.226616 124694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/ca.pem (1078 bytes)
I0108 20:59:33.226699 124694 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem, removing ...
I0108 20:59:33.226708 124694 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem
I0108 20:59:33.226734 124694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/cert.pem (1123 bytes)
I0108 20:59:33.226788 124694 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem, removing ...
I0108 20:59:33.226795 124694 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem
I0108 20:59:33.226817 124694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3617/.minikube/key.pem (1675 bytes)
I0108 20:59:33.226869 124694 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem org=jenkins.test-preload-205820 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-205820]
I0108 20:59:33.438802 124694 provision.go:172] copyRemoteCerts
I0108 20:59:33.438859 124694 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0108 20:59:33.438889 124694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-205820
I0108 20:59:33.462207 124694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/test-preload-205820/id_rsa Username:docker}
I0108 20:59:33.550321 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0108 20:59:33.566609 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0108 20:59:33.582624 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0108 20:59:33.598229 124694 provision.go:86] duration metric: configureAuth took 394.945613ms
I0108 20:59:33.598253 124694 ubuntu.go:193] setting minikube options for container-runtime
I0108 20:59:33.598410 124694 config.go:180] Loaded profile config "test-preload-205820": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.6
I0108 20:59:33.598423 124694 machine.go:91] provisioned docker machine in 678.92515ms
I0108 20:59:33.598432 124694 start.go:300] post-start starting for "test-preload-205820" (driver="docker")
I0108 20:59:33.598441 124694 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0108 20:59:33.598485 124694 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0108 20:59:33.598529 124694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-205820
I0108 20:59:33.620869 124694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/test-preload-205820/id_rsa Username:docker}
I0108 20:59:33.706833 124694 ssh_runner.go:195] Run: cat /etc/os-release
I0108 20:59:33.709432 124694 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0108 20:59:33.709452 124694 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0108 20:59:33.709460 124694 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0108 20:59:33.709466 124694 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0108 20:59:33.709473 124694 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/addons for local assets ...
I0108 20:59:33.709515 124694 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3617/.minikube/files for local assets ...
I0108 20:59:33.709584 124694 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem -> 103722.pem in /etc/ssl/certs
I0108 20:59:33.709657 124694 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0108 20:59:33.716065 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /etc/ssl/certs/103722.pem (1708 bytes)
I0108 20:59:33.732647 124694 start.go:303] post-start completed in 134.201143ms
I0108 20:59:33.732700 124694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0108 20:59:33.732750 124694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-205820
I0108 20:59:33.756085 124694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/test-preload-205820/id_rsa Username:docker}
I0108 20:59:33.835916 124694 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0108 20:59:33.839883 124694 fix.go:57] fixHost completed within 950.482339ms
I0108 20:59:33.839906 124694 start.go:83] releasing machines lock for "test-preload-205820", held for 950.52777ms
I0108 20:59:33.839991 124694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-205820
I0108 20:59:33.862646 124694 ssh_runner.go:195] Run: cat /version.json
I0108 20:59:33.862692 124694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-205820
I0108 20:59:33.862773 124694 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0108 20:59:33.862826 124694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-205820
I0108 20:59:33.886491 124694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/test-preload-205820/id_rsa Username:docker}
I0108 20:59:33.886912 124694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32892 SSHKeyPath:/home/jenkins/minikube-integration/15565-3617/.minikube/machines/test-preload-205820/id_rsa Username:docker}
I0108 20:59:33.984937 124694 ssh_runner.go:195] Run: systemctl --version
I0108 20:59:33.988836 124694 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0108 20:59:34.000114 124694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0108 20:59:34.008642 124694 docker.go:189] disabling docker service ...
I0108 20:59:34.008693 124694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0108 20:59:34.017530 124694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0108 20:59:34.025801 124694 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0108 20:59:34.122708 124694 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0108 20:59:34.217961 124694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0108 20:59:34.226765 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0108 20:59:34.238797 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
I0108 20:59:34.246194 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
I0108 20:59:34.253558 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
I0108 20:59:34.261040 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
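The four sed invocations above rewrite single keys in containerd's stock config rather than templating a new file; assuming nothing else has touched /etc/containerd/config.toml, the result can be spot-checked from inside the node with:

  # expected: sandbox_image = "k8s.gcr.io/pause:3.7", restrict_oom_score_adj = false,
  #           SystemdCgroup = false, conf_dir = "/etc/cni/net.mk"
  sudo grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml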
I0108 20:59:34.268683 124694 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0108 20:59:34.274677 124694 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0108 20:59:34.280603 124694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0108 20:59:34.370755 124694 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0108 20:59:34.445671 124694 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
I0108 20:59:34.445735 124694 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0108 20:59:34.449843 124694 start.go:472] Will wait 60s for crictl version
I0108 20:59:34.449900 124694 ssh_runner.go:195] Run: sudo crictl version
I0108 20:59:34.476629 124694 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
stdout:
stderr:
time="2023-01-08T20:59:34Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
I0108 20:59:45.523600 124694 ssh_runner.go:195] Run: sudo crictl version
I0108 20:59:45.547086 124694 start.go:481] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.6.10
RuntimeApiVersion: v1alpha2
I0108 20:59:45.547154 124694 ssh_runner.go:195] Run: containerd --version
I0108 20:59:45.569590 124694 ssh_runner.go:195] Run: containerd --version
I0108 20:59:45.594001 124694 out.go:177] * Preparing Kubernetes v1.24.6 on containerd 1.6.10 ...
I0108 20:59:45.595715 124694 cli_runner.go:164] Run: docker network inspect test-preload-205820 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0108 20:59:45.617246 124694 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
I0108 20:59:45.620504 124694 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I0108 20:59:45.620559 124694 ssh_runner.go:195] Run: sudo crictl images --output json
I0108 20:59:45.642354 124694 containerd.go:549] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.6". assuming images are not preloaded.
I0108 20:59:45.642439 124694 ssh_runner.go:195] Run: which lz4
I0108 20:59:45.645255 124694 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0108 20:59:45.648306 124694 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/preloaded.tar.lz4': No such file or directory
I0108 20:59:45.648333 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458739102 bytes)
I0108 20:59:46.604476 124694 containerd.go:496] Took 0.959252 seconds to copy over tarball
I0108 20:59:46.604556 124694 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0108 20:59:49.388621 124694 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.784042744s)
I0108 20:59:49.388652 124694 containerd.go:503] Took 2.784153 seconds to extract the tarball
I0108 20:59:49.388661 124694 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0108 20:59:49.410719 124694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0108 20:59:49.511828 124694 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0108 20:59:49.595221 124694 ssh_runner.go:195] Run: sudo crictl images --output json
I0108 20:59:49.633196 124694 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.24.6 k8s.gcr.io/kube-controller-manager:v1.24.6 k8s.gcr.io/kube-scheduler:v1.24.6 k8s.gcr.io/kube-proxy:v1.24.6 k8s.gcr.io/pause:3.7 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
I0108 20:59:49.633289 124694 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0108 20:59:49.633307 124694 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.24.6
I0108 20:59:49.633331 124694 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.24.6
I0108 20:59:49.633356 124694 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.3-0
I0108 20:59:49.633443 124694 image.go:134] retrieving image: k8s.gcr.io/pause:3.7
I0108 20:59:49.633489 124694 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
I0108 20:59:49.633318 124694 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.24.6
I0108 20:59:49.633821 124694 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.24.6
I0108 20:59:49.634498 124694 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.24.6: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.24.6
I0108 20:59:49.634524 124694 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.5.3-0: Error: No such image: k8s.gcr.io/etcd:3.5.3-0
I0108 20:59:49.634567 124694 image.go:177] daemon lookup for k8s.gcr.io/pause:3.7: Error: No such image: k8s.gcr.io/pause:3.7
I0108 20:59:49.634498 124694 image.go:177] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.6: Error: No such image: k8s.gcr.io/coredns/coredns:v1.8.6
I0108 20:59:49.634576 124694 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0108 20:59:49.634592 124694 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.24.6: Error: No such image: k8s.gcr.io/kube-scheduler:v1.24.6
I0108 20:59:49.634597 124694 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.24.6: Error: No such image: k8s.gcr.io/kube-apiserver:v1.24.6
I0108 20:59:49.634594 124694 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.24.6: Error: No such image: k8s.gcr.io/kube-proxy:v1.24.6
I0108 20:59:50.047554 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.7"
I0108 20:59:50.082929 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.6"
I0108 20:59:50.099888 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.3-0"
I0108 20:59:50.103323 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.6"
I0108 20:59:50.117424 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
I0108 20:59:50.146323 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.6"
I0108 20:59:50.152220 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.6"
I0108 20:59:50.398896 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
I0108 20:59:50.629706 124694 cache_images.go:116] "k8s.gcr.io/pause:3.7" needs transfer: "k8s.gcr.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
I0108 20:59:50.629756 124694 cri.go:216] Removing image: k8s.gcr.io/pause:3.7
I0108 20:59:50.629794 124694 ssh_runner.go:195] Run: which crictl
I0108 20:59:50.816705 124694 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.24.6" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.24.6" does not exist at hash "c6c20157a42337ecb7675be59e1dc34bc5a91288c7eeac1e30ec97767a9055eb" in container runtime
I0108 20:59:50.816826 124694 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.24.6
I0108 20:59:50.816908 124694 ssh_runner.go:195] Run: which crictl
I0108 20:59:50.834757 124694 cache_images.go:116] "k8s.gcr.io/etcd:3.5.3-0" needs transfer: "k8s.gcr.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
I0108 20:59:50.834807 124694 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.3-0
I0108 20:59:50.834848 124694 ssh_runner.go:195] Run: which crictl
I0108 20:59:50.922638 124694 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.24.6" needs transfer: "k8s.gcr.io/kube-scheduler:v1.24.6" does not exist at hash "c786c777a4e1c21907e77042428837645fa382d3bd14925cf78f0d163d6d332e" in container runtime
I0108 20:59:50.922682 124694 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.24.6
I0108 20:59:50.922719 124694 ssh_runner.go:195] Run: which crictl
I0108 20:59:50.934129 124694 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
I0108 20:59:51.000970 124694 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
I0108 20:59:50.942667 124694 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.24.6" needs transfer: "k8s.gcr.io/kube-proxy:v1.24.6" does not exist at hash "0bb39497ab33bb5f8aaff88ced53a5fcd360fcf5da647609619d4f5c8f1483d2" in container runtime
I0108 20:59:51.001020 124694 ssh_runner.go:195] Run: which crictl
I0108 20:59:51.001040 124694 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.24.6
I0108 20:59:51.001068 124694 ssh_runner.go:195] Run: which crictl
I0108 20:59:51.015918 124694 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.24.6" needs transfer: "k8s.gcr.io/kube-apiserver:v1.24.6" does not exist at hash "860f263331c9513ddab44d4d8a9a4a7304313b3aa0776decc1d7fc6acdd69ab0" in container runtime
I0108 20:59:51.015958 124694 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.24.6
I0108 20:59:51.016003 124694 ssh_runner.go:195] Run: which crictl
I0108 20:59:51.052154 124694 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
I0108 20:59:51.052200 124694 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I0108 20:59:51.052241 124694 ssh_runner.go:195] Run: which crictl
I0108 20:59:51.052242 124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7
I0108 20:59:51.052305 124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.6
I0108 20:59:51.052367 124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0
I0108 20:59:51.052412 124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.6
I0108 20:59:51.052474 124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
I0108 20:59:51.052542 124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6
I0108 20:59:52.140730 124694 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7: (1.088416372s)
I0108 20:59:52.140757 124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7
I0108 20:59:52.140759 124694 ssh_runner.go:235] Completed: which crictl: (1.088481701s)
I0108 20:59:52.140801 124694 ssh_runner.go:235] Completed: which crictl: (1.124782782s)
I0108 20:59:52.140815 124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I0108 20:59:52.140840 124694 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
I0108 20:59:52.140885 124694 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.6: (1.088559722s)
I0108 20:59:52.140843 124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.6
I0108 20:59:52.140906 124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.6
I0108 20:59:52.140996 124694 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.6: (1.088560881s)
I0108 20:59:52.141009 124694 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0: (1.088624706s)
I0108 20:59:52.141014 124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.6
I0108 20:59:52.141017 124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0
I0108 20:59:52.141071 124694 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
I0108 20:59:52.141105 124694 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6: (1.088539031s)
I0108 20:59:52.141119 124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.6
I0108 20:59:52.141068 124694 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6: (1.088569381s)
I0108 20:59:52.141133 124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
I0108 20:59:52.141193 124694 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
I0108 20:59:52.235063 124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
I0108 20:59:52.235158 124694 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
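
The stat calls compare the size and mtime of tarballs already on the node against the host-side cache so unchanged images can skip the copy; the format string is plain stat syntax (a sketch):

$ stat -c "%s %y" /var/lib/minikube/images/pause_3.7    # prints "<size-in-bytes> <last-modification-time>"
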
I0108 20:59:52.235188 124694 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
I0108 20:59:52.235208 124694 containerd.go:233] Loading image: /var/lib/minikube/images/pause_3.7
I0108 20:59:52.235211 124694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.6
I0108 20:59:52.235244 124694 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7
I0108 20:59:52.235262 124694 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
I0108 20:59:52.235301 124694 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
I0108 20:59:52.348684 124694 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 from cache
I0108 20:59:52.348714 124694 containerd.go:233] Loading image: /var/lib/minikube/images/coredns_v1.8.6
I0108 20:59:52.348759 124694 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
I0108 20:59:52.348772 124694 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
I0108 20:59:53.355117 124694 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6: (1.006333066s)
I0108 20:59:53.355138 124694 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache
I0108 20:59:53.355161 124694 containerd.go:233] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
I0108 20:59:53.355197 124694 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
I0108 20:59:58.744440 124694 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0: (5.389207325s)
I0108 20:59:58.744469 124694 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 from cache
I0108 20:59:58.744495 124694 containerd.go:233] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I0108 20:59:58.744532 124694 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
I0108 20:59:59.645452 124694 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I0108 20:59:59.645514 124694 cache_images.go:92] LoadImages completed in 10.012283055s
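
The tarballs are imported into containerd's k8s.io namespace so the CRI (and therefore kubelet) can see them. The manual equivalent, sketched:

$ sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
$ sudo ctr -n=k8s.io images ls | grep etcd    # confirm the tag landed in the namespace kubelet reads
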
W0108 20:59:59.645650 124694 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.6: no such file or directory
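
This warning is the one gap in the image-load step: every other tarball was present, but kube-controller-manager_v1.24.6 was missing from the host cache. A possible pre-seed before retrying, assuming the host can reach the registry (minikube cache add is the older interface to the same cache directory):

$ minikube cache add k8s.gcr.io/kube-controller-manager:v1.24.6
$ ls /home/jenkins/minikube-integration/15565-3617/.minikube/cache/images/amd64/k8s.gcr.io/    # the tarball should now sit alongside the others
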
I0108 20:59:59.645712 124694 ssh_runner.go:195] Run: sudo crictl info
I0108 20:59:59.719369 124694 cni.go:95] Creating CNI manager for ""
I0108 20:59:59.719404 124694 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I0108 20:59:59.719417 124694 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0108 20:59:59.719431 124694 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-205820 NodeName:test-preload-205820 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
I0108 20:59:59.719633 124694 kubeadm.go:163] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.67.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "test-preload-205820"
kubeletExtraArgs:
node-ip: 192.168.67.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.6
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
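
A regenerated config like the one above can be sanity-checked before the init phases run against it; kubeadm's --dry-run exercises the phases without making persistent changes to the node (a sketch):

$ sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
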
I0108 20:59:59.719739 124694 kubeadm.go:962] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-205820 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.24.6 ClusterName:test-preload-205820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0108 20:59:59.719791 124694 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.6
I0108 20:59:59.726680 124694 binaries.go:44] Found k8s binaries, skipping transfer
I0108 20:59:59.726736 124694 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0108 20:59:59.734052 124694 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (512 bytes)
I0108 20:59:59.749257 124694 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0108 20:59:59.764256 124694 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
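
Once the kubelet.service unit and the 10-kubeadm.conf drop-in are written, systemd has to re-read them before the kubelet restart later in this log can pick them up; done by hand that is simply:

$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet
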
I0108 20:59:59.823242 124694 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I0108 20:59:59.826766 124694 certs.go:54] Setting up /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820 for IP: 192.168.67.2
I0108 20:59:59.826880 124694 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key
I0108 20:59:59.826936 124694 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key
I0108 20:59:59.827034 124694 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/client.key
I0108 20:59:59.827114 124694 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/apiserver.key.c7fa3a9e
I0108 20:59:59.827165 124694 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/proxy-client.key
I0108 20:59:59.827281 124694 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem (1338 bytes)
W0108 20:59:59.827327 124694 certs.go:384] ignoring /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372_empty.pem, impossibly tiny 0 bytes
I0108 20:59:59.827342 124694 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca-key.pem (1675 bytes)
I0108 20:59:59.827372 124694 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/ca.pem (1078 bytes)
I0108 20:59:59.827409 124694 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/cert.pem (1123 bytes)
I0108 20:59:59.827438 124694 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/certs/home/jenkins/minikube-integration/15565-3617/.minikube/certs/key.pem (1675 bytes)
I0108 20:59:59.827512 124694 certs.go:388] found cert: /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem (1708 bytes)
I0108 20:59:59.828247 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0108 20:59:59.848605 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0108 20:59:59.867107 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0108 20:59:59.929393 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0108 20:59:59.947265 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0108 20:59:59.967659 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0108 20:59:59.986203 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0108 21:00:00.028839 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0108 21:00:00.054242 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0108 21:00:00.071784 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/certs/10372.pem --> /usr/share/ca-certificates/10372.pem (1338 bytes)
I0108 21:00:00.087997 124694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3617/.minikube/files/etc/ssl/certs/103722.pem --> /usr/share/ca-certificates/103722.pem (1708 bytes)
I0108 21:00:00.123064 124694 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0108 21:00:00.135539 124694 ssh_runner.go:195] Run: openssl version
I0108 21:00:00.140139 124694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0108 21:00:00.147247 124694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0108 21:00:00.150148 124694 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 8 20:28 /usr/share/ca-certificates/minikubeCA.pem
I0108 21:00:00.150197 124694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0108 21:00:00.154652 124694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0108 21:00:00.161321 124694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10372.pem && ln -fs /usr/share/ca-certificates/10372.pem /etc/ssl/certs/10372.pem"
I0108 21:00:00.169127 124694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10372.pem
I0108 21:00:00.171911 124694 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 8 20:41 /usr/share/ca-certificates/10372.pem
I0108 21:00:00.171967 124694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10372.pem
I0108 21:00:00.176639 124694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10372.pem /etc/ssl/certs/51391683.0"
I0108 21:00:00.182896 124694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103722.pem && ln -fs /usr/share/ca-certificates/103722.pem /etc/ssl/certs/103722.pem"
I0108 21:00:00.189696 124694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103722.pem
I0108 21:00:00.210855 124694 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 8 20:41 /usr/share/ca-certificates/103722.pem
I0108 21:00:00.210904 124694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103722.pem
I0108 21:00:00.215636 124694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103722.pem /etc/ssl/certs/3ec20f2e.0"
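
The link names above are OpenSSL subject hashes: each CA dropped into /usr/share/ca-certificates gets an /etc/ssl/certs/<hash>.0 symlink so TLS code can find it by hash lookup. Reproduced by hand (hash value taken from the lines above):

$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
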
I0108 21:00:00.222153 124694 kubeadm.go:396] StartCluster: {Name:test-preload-205820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-205820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0108 21:00:00.222257 124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0108 21:00:00.222298 124694 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0108 21:00:00.245669 124694 cri.go:87] found id: "43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4"
I0108 21:00:00.245696 124694 cri.go:87] found id: "3852802493079c7473ee812611ecf809b363dd7bd001d0400d405c7b881a6881"
I0108 21:00:00.245706 124694 cri.go:87] found id: "0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659"
I0108 21:00:00.245715 124694 cri.go:87] found id: ""
I0108 21:00:00.245772 124694 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I0108 21:00:00.277898 124694 cri.go:114] JSON = [{"ociVersion":"1.0.2-dev","id":"065e765dea3af949569c66775e9e531e06244c3c2704b71286fe12821e219459","pid":1612,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/065e765dea3af949569c66775e9e531e06244c3c2704b71286fe12821e219459","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/065e765dea3af949569c66775e9e531e06244c3c2704b71286fe12821e219459/rootfs","created":"2023-01-08T20:58:44.075786098Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/etcd:3.5.3-0","io.kubernetes.cri.sandbox-id":"5baf1c17d1d9c7eaff3acb5d5bd4124ef15339446834cad30a8d495124f7af8c","io.kubernetes.cri.sandbox-name":"etcd-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"08229dac28fdf32f0c8a99b5ed19879661ab064e2af7b68008a29005ece6a817","pid":2685,"status":"running","bundle":"/ru
n/containerd/io.containerd.runtime.v2.task/k8s.io/08229dac28fdf32f0c8a99b5ed19879661ab064e2af7b68008a29005ece6a817","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/08229dac28fdf32f0c8a99b5ed19879661ab064e2af7b68008a29005ece6a817/rootfs","created":"2023-01-08T20:59:11.277618302Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"c62f0b15060438341900e3b123cf94933897f3c4589324d6d97d440353ac76bf","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659","pid":3743,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659","rootfs":"/run/containerd/io.containerd.runtime.v2.task
/k8s.io/0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659/rootfs","created":"2023-01-08T20:59:53.536252787Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/coredns/coredns:v1.8.6","io.kubernetes.cri.sandbox-id":"ea6e68395947e48e2f41281f97f4e91b5a7feffeb418b8f118a4ff6febc92f67","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-48vmf","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"10315ae6c8ee15d86b291e89ea8c39a457d6d26ef37e9e450da97afb7e588dcb","pid":3679,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10315ae6c8ee15d86b291e89ea8c39a457d6d26ef37e9e450da97afb7e588dcb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10315ae6c8ee15d86b291e89ea8c39a457d6d26ef37e9e450da97afb7e588dcb/rootfs","created":"2023-01-08T20:59:52.952963041Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io
.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"10315ae6c8ee15d86b291e89ea8c39a457d6d26ef37e9e450da97afb7e588dcb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_bdbd16cd-b53b-4309-ad17-7915a6d7b693","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"12f6098eaba4e7de829505942e44c4f4085ce3c42f70d927e3b5900856a0c4f3","pid":2625,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12f6098eaba4e7de829505942e44c4f4085ce3c42f70d927e3b5900856a0c4f3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12f6098eaba4e7de829505942e44c4f4085ce3c42f70d927e3b5900856a0c4f3/rootfs","created":"2023-01-08T20:59:11.178050048Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubern
etes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"12f6098eaba4e7de829505942e44c4f4085ce3c42f70d927e3b5900856a0c4f3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-48vmf_d43c5f88-44b8-4ab6-bc5b-f2883eda56e2","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-48vmf","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"149d69e305eb3f1efaa60fe567837f8703c1872972fc27acc8f1c6d227988aae","pid":2211,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/149d69e305eb3f1efaa60fe567837f8703c1872972fc27acc8f1c6d227988aae","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/149d69e305eb3f1efaa60fe567837f8703c1872972fc27acc8f1c6d227988aae/rootfs","created":"2023-01-08T20:59:03.662818408Z","annotations":{"io.kubernetes.cri.container-type":"sandbo
x","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"149d69e305eb3f1efaa60fe567837f8703c1872972fc27acc8f1c6d227988aae","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-wmrz2_35e9935b-759b-4c18-9d0b-2c0daaab9a1e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-wmrz2","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2860f073544995b5c9f9bde40d0f7806528938816bcd39f0e73a07c55ea56d4c","pid":1658,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2860f073544995b5c9f9bde40d0f7806528938816bcd39f0e73a07c55ea56d4c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2860f073544995b5c9f9bde40d0f7806528938816bcd39f0e73a07c55ea56d4c/rootfs","created":"2023-01-08T20:58:44.120902562Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io
.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-scheduler:v1.24.4","io.kubernetes.cri.sandbox-id":"c5dd41e6d66bbfaa4f5efd0db05d450c6aa0ddbc1944776ae1b1426cd15cce65","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2d6cd5a5cf0dff47d97cfde8133a2e4146d1d9d16da6d9b609e7cfaec2870071","pid":2488,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d6cd5a5cf0dff47d97cfde8133a2e4146d1d9d16da6d9b609e7cfaec2870071","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d6cd5a5cf0dff47d97cfde8133a2e4146d1d9d16da6d9b609e7cfaec2870071/rootfs","created":"2023-01-08T20:59:07.90993923Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20221004-44d545d1","io.kubernetes.cri.sandbox-id":"414cc3f9f286440d220
4fe71003531e96f7b5ffa2bef2badce6c2718bbfa118a","io.kubernetes.cri.sandbox-name":"kindnet-mtvg5","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"40db066158605ac15ff3157b7b668db3f5d83f46b55f96e4a63fd5f2f68fe4bd","pid":1657,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/40db066158605ac15ff3157b7b668db3f5d83f46b55f96e4a63fd5f2f68fe4bd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/40db066158605ac15ff3157b7b668db3f5d83f46b55f96e4a63fd5f2f68fe4bd/rootfs","created":"2023-01-08T20:58:44.121187645Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-apiserver:v1.24.4","io.kubernetes.cri.sandbox-id":"a2b1f431ca407ff679e24ad0915325804e13945554f39501c29ac7dcf5ab81f3","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"oci
Version":"1.0.2-dev","id":"414cc3f9f286440d2204fe71003531e96f7b5ffa2bef2badce6c2718bbfa118a","pid":2210,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/414cc3f9f286440d2204fe71003531e96f7b5ffa2bef2badce6c2718bbfa118a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/414cc3f9f286440d2204fe71003531e96f7b5ffa2bef2badce6c2718bbfa118a/rootfs","created":"2023-01-08T20:59:03.715705604Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"414cc3f9f286440d2204fe71003531e96f7b5ffa2bef2badce6c2718bbfa118a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-mtvg5_1257f157-44a7-41fe-9d98-48b85ce53a40","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-mtvg5","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersio
n":"1.0.2-dev","id":"41a770ac731a5ae4200eacc0455165d3f5abe4238ba1b809b9bbec6a877ae265","pid":3646,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/41a770ac731a5ae4200eacc0455165d3f5abe4238ba1b809b9bbec6a877ae265","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/41a770ac731a5ae4200eacc0455165d3f5abe4238ba1b809b9bbec6a877ae265/rootfs","created":"2023-01-08T20:59:52.914259586Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"41a770ac731a5ae4200eacc0455165d3f5abe4238ba1b809b9bbec6a877ae265","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-mtvg5_1257f157-44a7-41fe-9d98-48b85ce53a40","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-mtvg5","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.
0.2-dev","id":"43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4","pid":4073,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4/rootfs","created":"2023-01-08T20:59:59.961439321Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-controller-manager:v1.24.4","io.kubernetes.cri.sandbox-id":"73eb503a8e9b1969ff25de7374afecfdfabf0a0f3184762e88e418587d2ef855","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5baf1c17d1d9c7eaff3acb5d5bd4124ef15339446834cad30a8d495124f7af8c","pid":1522,"status":"running","bundle":"/run/containerd/io.con
tainerd.runtime.v2.task/k8s.io/5baf1c17d1d9c7eaff3acb5d5bd4124ef15339446834cad30a8d495124f7af8c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5baf1c17d1d9c7eaff3acb5d5bd4124ef15339446834cad30a8d495124f7af8c/rootfs","created":"2023-01-08T20:58:43.912562088Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"5baf1c17d1d9c7eaff3acb5d5bd4124ef15339446834cad30a8d495124f7af8c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-205820_0106aa4904eaf95a3dcc4972da83cce0","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"67d1de92c3988f3adff4d3de84bdc8c6f1c660706e7dff753e1f453c0993d5d6","pid":1611,"status":"running","bundle":"/run/containerd/io.co
ntainerd.runtime.v2.task/k8s.io/67d1de92c3988f3adff4d3de84bdc8c6f1c660706e7dff753e1f453c0993d5d6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/67d1de92c3988f3adff4d3de84bdc8c6f1c660706e7dff753e1f453c0993d5d6/rootfs","created":"2023-01-08T20:58:44.078720095Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-controller-manager:v1.24.4","io.kubernetes.cri.sandbox-id":"d274a65e4bd997f4bd5835495bd5b6b904b71635c6f492b9dd0258c9bfc2139f","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7115d483ef35744dd9ad8782f5bc6319c62e46465fe05194cb6a22e76923e111","pid":3579,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7115d483ef35744dd9ad8782f5bc6319c62e46465fe05194cb6a22e76923e111","rootfs":"/run/containerd/io.containerd.runti
me.v2.task/k8s.io/7115d483ef35744dd9ad8782f5bc6319c62e46465fe05194cb6a22e76923e111/rootfs","created":"2023-01-08T20:59:52.820275074Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"7115d483ef35744dd9ad8782f5bc6319c62e46465fe05194cb6a22e76923e111","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-205820_0d00ad4c93ccd906fbcaecbff49fd727","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"73eb503a8e9b1969ff25de7374afecfdfabf0a0f3184762e88e418587d2ef855","pid":3470,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/73eb503a8e9b1969ff25de7374afecfdfabf0a0f3184762e88e418587d2ef855","rootfs":"/run/container
d/io.containerd.runtime.v2.task/k8s.io/73eb503a8e9b1969ff25de7374afecfdfabf0a0f3184762e88e418587d2ef855/rootfs","created":"2023-01-08T20:59:52.622948749Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"73eb503a8e9b1969ff25de7374afecfdfabf0a0f3184762e88e418587d2ef855","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-205820_3137f4b6a8ebd97ba2fc8851160ac0b1","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"833ea9ab2785ebbe54bac37196e4ff5abd83fc20de316ea74546fe19d4e0496d","pid":3442,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/833ea9ab2785ebbe54bac37196e4ff5abd83fc20de316ea7454
6fe19d4e0496d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/833ea9ab2785ebbe54bac37196e4ff5abd83fc20de316ea74546fe19d4e0496d/rootfs","created":"2023-01-08T20:59:52.55532244Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"833ea9ab2785ebbe54bac37196e4ff5abd83fc20de316ea74546fe19d4e0496d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-205820_044b6365f10644e1fab9f12495485e76","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a2b1f431ca407ff679e24ad0915325804e13945554f39501c29ac7dcf5ab81f3","pid":1520,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2b1f431ca407ff679e24ad09153258
04e13945554f39501c29ac7dcf5ab81f3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2b1f431ca407ff679e24ad0915325804e13945554f39501c29ac7dcf5ab81f3/rootfs","created":"2023-01-08T20:58:43.914531641Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"a2b1f431ca407ff679e24ad0915325804e13945554f39501c29ac7dcf5ab81f3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-205820_044b6365f10644e1fab9f12495485e76","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c5391d45b9b2afdd922ee8a2dd0be2e95703799a2791c0fd2d86eca5b63c6462","pid":2246,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c5391d45b9
b2afdd922ee8a2dd0be2e95703799a2791c0fd2d86eca5b63c6462","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c5391d45b9b2afdd922ee8a2dd0be2e95703799a2791c0fd2d86eca5b63c6462/rootfs","created":"2023-01-08T20:59:03.781592888Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-proxy:v1.24.4","io.kubernetes.cri.sandbox-id":"149d69e305eb3f1efaa60fe567837f8703c1872972fc27acc8f1c6d227988aae","io.kubernetes.cri.sandbox-name":"kube-proxy-wmrz2","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c5dd41e6d66bbfaa4f5efd0db05d450c6aa0ddbc1944776ae1b1426cd15cce65","pid":1521,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c5dd41e6d66bbfaa4f5efd0db05d450c6aa0ddbc1944776ae1b1426cd15cce65","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c5dd41e6d66bbfaa4f5efd0db05d450c6aa0ddbc1944776ae1b1426cd15cce65/rootfs","cre
ated":"2023-01-08T20:58:43.918296824Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"c5dd41e6d66bbfaa4f5efd0db05d450c6aa0ddbc1944776ae1b1426cd15cce65","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-205820_0d00ad4c93ccd906fbcaecbff49fd727","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c62f0b15060438341900e3b123cf94933897f3c4589324d6d97d440353ac76bf","pid":2624,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c62f0b15060438341900e3b123cf94933897f3c4589324d6d97d440353ac76bf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c62f0b15060438341900e3b123cf94933897f3c4589324d6d97d4403
53ac76bf/rootfs","created":"2023-01-08T20:59:11.177965157Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"c62f0b15060438341900e3b123cf94933897f3c4589324d6d97d440353ac76bf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_bdbd16cd-b53b-4309-ad17-7915a6d7b693","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c7b49e0771503321fcb6afb14a89ae5ab349eac5aefa3e765ae4aafd15863848","pid":2686,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7b49e0771503321fcb6afb14a89ae5ab349eac5aefa3e765ae4aafd15863848","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7b49e0771503321fcb6afb14a89ae5ab349eac5aefa3e765ae4aafd1586384
8/rootfs","created":"2023-01-08T20:59:11.277494639Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/coredns/coredns:v1.8.6","io.kubernetes.cri.sandbox-id":"12f6098eaba4e7de829505942e44c4f4085ce3c42f70d927e3b5900856a0c4f3","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-48vmf","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d274a65e4bd997f4bd5835495bd5b6b904b71635c6f492b9dd0258c9bfc2139f","pid":1523,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d274a65e4bd997f4bd5835495bd5b6b904b71635c6f492b9dd0258c9bfc2139f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d274a65e4bd997f4bd5835495bd5b6b904b71635c6f492b9dd0258c9bfc2139f/rootfs","created":"2023-01-08T20:58:43.918339088Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox
-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"d274a65e4bd997f4bd5835495bd5b6b904b71635c6f492b9dd0258c9bfc2139f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-205820_3137f4b6a8ebd97ba2fc8851160ac0b1","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ea6e68395947e48e2f41281f97f4e91b5a7feffeb418b8f118a4ff6febc92f67","pid":3427,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea6e68395947e48e2f41281f97f4e91b5a7feffeb418b8f118a4ff6febc92f67","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea6e68395947e48e2f41281f97f4e91b5a7feffeb418b8f118a4ff6febc92f67/rootfs","created":"2023-01-08T20:59:52.545724953Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-peri
od":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ea6e68395947e48e2f41281f97f4e91b5a7feffeb418b8f118a4ff6febc92f67","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-48vmf_d43c5f88-44b8-4ab6-bc5b-f2883eda56e2","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-48vmf","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ed412739ffbaa811fd7b639bd53de4bf186279e187d43438a1620ba9de9aa8a0","pid":3658,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed412739ffbaa811fd7b639bd53de4bf186279e187d43438a1620ba9de9aa8a0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed412739ffbaa811fd7b639bd53de4bf186279e187d43438a1620ba9de9aa8a0/rootfs","created":"2023-01-08T20:59:52.920247257Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.san
dbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"ed412739ffbaa811fd7b639bd53de4bf186279e187d43438a1620ba9de9aa8a0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-wmrz2_35e9935b-759b-4c18-9d0b-2c0daaab9a1e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-wmrz2","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ef80b56694812594b25b4661c79674f45b6f1b36480e127253f7fddbaacea2cb","pid":3534,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef80b56694812594b25b4661c79674f45b6f1b36480e127253f7fddbaacea2cb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef80b56694812594b25b4661c79674f45b6f1b36480e127253f7fddbaacea2cb/rootfs","created":"2023-01-08T20:59:52.73552926Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-perio
d":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ef80b56694812594b25b4661c79674f45b6f1b36480e127253f7fddbaacea2cb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-205820_0106aa4904eaf95a3dcc4972da83cce0","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-205820","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"}]
I0108 21:00:00.278314 124694 cri.go:124] list returned 26 containers
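
The JSON above is the raw runc list output that cri.go then filters by state. A quicker way to eyeball it, assuming jq is available:

$ sudo runc --root /run/containerd/runc/k8s.io list -f json | jq -r '.[] | .id[0:12] + "  " + .annotations["io.kubernetes.cri.sandbox-name"]'
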
I0108 21:00:00.278332 124694 cri.go:127] container: {ID:065e765dea3af949569c66775e9e531e06244c3c2704b71286fe12821e219459 Status:running}
I0108 21:00:00.278347 124694 cri.go:129] skipping 065e765dea3af949569c66775e9e531e06244c3c2704b71286fe12821e219459 - not in ps
I0108 21:00:00.278355 124694 cri.go:127] container: {ID:08229dac28fdf32f0c8a99b5ed19879661ab064e2af7b68008a29005ece6a817 Status:running}
I0108 21:00:00.278368 124694 cri.go:129] skipping 08229dac28fdf32f0c8a99b5ed19879661ab064e2af7b68008a29005ece6a817 - not in ps
I0108 21:00:00.278384 124694 cri.go:127] container: {ID:0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659 Status:running}
I0108 21:00:00.278397 124694 cri.go:133] skipping {0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659 running}: state = "running", want "paused"
I0108 21:00:00.278410 124694 cri.go:127] container: {ID:10315ae6c8ee15d86b291e89ea8c39a457d6d26ef37e9e450da97afb7e588dcb Status:running}
I0108 21:00:00.278422 124694 cri.go:129] skipping 10315ae6c8ee15d86b291e89ea8c39a457d6d26ef37e9e450da97afb7e588dcb - not in ps
I0108 21:00:00.278433 124694 cri.go:127] container: {ID:12f6098eaba4e7de829505942e44c4f4085ce3c42f70d927e3b5900856a0c4f3 Status:running}
I0108 21:00:00.278442 124694 cri.go:129] skipping 12f6098eaba4e7de829505942e44c4f4085ce3c42f70d927e3b5900856a0c4f3 - not in ps
I0108 21:00:00.278451 124694 cri.go:127] container: {ID:149d69e305eb3f1efaa60fe567837f8703c1872972fc27acc8f1c6d227988aae Status:running}
I0108 21:00:00.278461 124694 cri.go:129] skipping 149d69e305eb3f1efaa60fe567837f8703c1872972fc27acc8f1c6d227988aae - not in ps
I0108 21:00:00.278471 124694 cri.go:127] container: {ID:2860f073544995b5c9f9bde40d0f7806528938816bcd39f0e73a07c55ea56d4c Status:running}
I0108 21:00:00.278482 124694 cri.go:129] skipping 2860f073544995b5c9f9bde40d0f7806528938816bcd39f0e73a07c55ea56d4c - not in ps
I0108 21:00:00.278493 124694 cri.go:127] container: {ID:2d6cd5a5cf0dff47d97cfde8133a2e4146d1d9d16da6d9b609e7cfaec2870071 Status:running}
I0108 21:00:00.278502 124694 cri.go:129] skipping 2d6cd5a5cf0dff47d97cfde8133a2e4146d1d9d16da6d9b609e7cfaec2870071 - not in ps
I0108 21:00:00.278512 124694 cri.go:127] container: {ID:40db066158605ac15ff3157b7b668db3f5d83f46b55f96e4a63fd5f2f68fe4bd Status:running}
I0108 21:00:00.278525 124694 cri.go:129] skipping 40db066158605ac15ff3157b7b668db3f5d83f46b55f96e4a63fd5f2f68fe4bd - not in ps
I0108 21:00:00.278536 124694 cri.go:127] container: {ID:414cc3f9f286440d2204fe71003531e96f7b5ffa2bef2badce6c2718bbfa118a Status:running}
I0108 21:00:00.278547 124694 cri.go:129] skipping 414cc3f9f286440d2204fe71003531e96f7b5ffa2bef2badce6c2718bbfa118a - not in ps
I0108 21:00:00.278554 124694 cri.go:127] container: {ID:41a770ac731a5ae4200eacc0455165d3f5abe4238ba1b809b9bbec6a877ae265 Status:running}
I0108 21:00:00.278566 124694 cri.go:129] skipping 41a770ac731a5ae4200eacc0455165d3f5abe4238ba1b809b9bbec6a877ae265 - not in ps
I0108 21:00:00.278576 124694 cri.go:127] container: {ID:43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4 Status:running}
I0108 21:00:00.278588 124694 cri.go:133] skipping {43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4 running}: state = "running", want "paused"
I0108 21:00:00.278603 124694 cri.go:127] container: {ID:5baf1c17d1d9c7eaff3acb5d5bd4124ef15339446834cad30a8d495124f7af8c Status:running}
I0108 21:00:00.278615 124694 cri.go:129] skipping 5baf1c17d1d9c7eaff3acb5d5bd4124ef15339446834cad30a8d495124f7af8c - not in ps
I0108 21:00:00.278633 124694 cri.go:127] container: {ID:67d1de92c3988f3adff4d3de84bdc8c6f1c660706e7dff753e1f453c0993d5d6 Status:running}
I0108 21:00:00.278644 124694 cri.go:129] skipping 67d1de92c3988f3adff4d3de84bdc8c6f1c660706e7dff753e1f453c0993d5d6 - not in ps
I0108 21:00:00.278651 124694 cri.go:127] container: {ID:7115d483ef35744dd9ad8782f5bc6319c62e46465fe05194cb6a22e76923e111 Status:running}
I0108 21:00:00.278660 124694 cri.go:129] skipping 7115d483ef35744dd9ad8782f5bc6319c62e46465fe05194cb6a22e76923e111 - not in ps
I0108 21:00:00.278667 124694 cri.go:127] container: {ID:73eb503a8e9b1969ff25de7374afecfdfabf0a0f3184762e88e418587d2ef855 Status:running}
I0108 21:00:00.278679 124694 cri.go:129] skipping 73eb503a8e9b1969ff25de7374afecfdfabf0a0f3184762e88e418587d2ef855 - not in ps
I0108 21:00:00.278687 124694 cri.go:127] container: {ID:833ea9ab2785ebbe54bac37196e4ff5abd83fc20de316ea74546fe19d4e0496d Status:running}
I0108 21:00:00.278699 124694 cri.go:129] skipping 833ea9ab2785ebbe54bac37196e4ff5abd83fc20de316ea74546fe19d4e0496d - not in ps
I0108 21:00:00.278707 124694 cri.go:127] container: {ID:a2b1f431ca407ff679e24ad0915325804e13945554f39501c29ac7dcf5ab81f3 Status:running}
I0108 21:00:00.278719 124694 cri.go:129] skipping a2b1f431ca407ff679e24ad0915325804e13945554f39501c29ac7dcf5ab81f3 - not in ps
I0108 21:00:00.278729 124694 cri.go:127] container: {ID:c5391d45b9b2afdd922ee8a2dd0be2e95703799a2791c0fd2d86eca5b63c6462 Status:running}
I0108 21:00:00.278737 124694 cri.go:129] skipping c5391d45b9b2afdd922ee8a2dd0be2e95703799a2791c0fd2d86eca5b63c6462 - not in ps
I0108 21:00:00.278744 124694 cri.go:127] container: {ID:c5dd41e6d66bbfaa4f5efd0db05d450c6aa0ddbc1944776ae1b1426cd15cce65 Status:running}
I0108 21:00:00.278756 124694 cri.go:129] skipping c5dd41e6d66bbfaa4f5efd0db05d450c6aa0ddbc1944776ae1b1426cd15cce65 - not in ps
I0108 21:00:00.278767 124694 cri.go:127] container: {ID:c62f0b15060438341900e3b123cf94933897f3c4589324d6d97d440353ac76bf Status:running}
I0108 21:00:00.278780 124694 cri.go:129] skipping c62f0b15060438341900e3b123cf94933897f3c4589324d6d97d440353ac76bf - not in ps
I0108 21:00:00.278790 124694 cri.go:127] container: {ID:c7b49e0771503321fcb6afb14a89ae5ab349eac5aefa3e765ae4aafd15863848 Status:running}
I0108 21:00:00.278804 124694 cri.go:129] skipping c7b49e0771503321fcb6afb14a89ae5ab349eac5aefa3e765ae4aafd15863848 - not in ps
I0108 21:00:00.278814 124694 cri.go:127] container: {ID:d274a65e4bd997f4bd5835495bd5b6b904b71635c6f492b9dd0258c9bfc2139f Status:running}
I0108 21:00:00.278822 124694 cri.go:129] skipping d274a65e4bd997f4bd5835495bd5b6b904b71635c6f492b9dd0258c9bfc2139f - not in ps
I0108 21:00:00.278830 124694 cri.go:127] container: {ID:ea6e68395947e48e2f41281f97f4e91b5a7feffeb418b8f118a4ff6febc92f67 Status:running}
I0108 21:00:00.278842 124694 cri.go:129] skipping ea6e68395947e48e2f41281f97f4e91b5a7feffeb418b8f118a4ff6febc92f67 - not in ps
I0108 21:00:00.278852 124694 cri.go:127] container: {ID:ed412739ffbaa811fd7b639bd53de4bf186279e187d43438a1620ba9de9aa8a0 Status:running}
I0108 21:00:00.278862 124694 cri.go:129] skipping ed412739ffbaa811fd7b639bd53de4bf186279e187d43438a1620ba9de9aa8a0 - not in ps
I0108 21:00:00.278872 124694 cri.go:127] container: {ID:ef80b56694812594b25b4661c79674f45b6f1b36480e127253f7fddbaacea2cb Status:running}
I0108 21:00:00.278883 124694 cri.go:129] skipping ef80b56694812594b25b4661c79674f45b6f1b36480e127253f7fddbaacea2cb - not in ps
I0108 21:00:00.278925 124694 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0108 21:00:00.286080 124694 kubeadm.go:411] found existing configuration files, will attempt cluster restart
I0108 21:00:00.286102 124694 kubeadm.go:627] restartCluster start
I0108 21:00:00.286141 124694 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0108 21:00:00.292256 124694 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0108 21:00:00.292769 124694 kubeconfig.go:92] found "test-preload-205820" server: "https://192.168.67.2:8443"
I0108 21:00:00.293379 124694 kapi.go:59] client config for test-preload-205820: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3617/.minikube/profiles/test-preload-205820/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3617/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0108 21:00:00.293896 124694 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0108 21:00:00.302755 124694 kubeadm.go:594] needs reconfigure: configs differ:
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml 2023-01-08 20:58:39.826861611 +0000
+++ /var/tmp/minikube/kubeadm.yaml.new 2023-01-08 20:59:59.816713998 +0000
@@ -38,7 +38,7 @@
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
-kubernetesVersion: v1.24.4
+kubernetesVersion: v1.24.6
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
-- /stdout --
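
The reconfigure decision keys off the diff exit status: 0 means the freshly rendered kubeadm.yaml matches what is on disk, while 1 (here, the v1.24.4 -> v1.24.6 bump) sends minikube down the restart path. Checked by hand:

$ sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; echo exit=$?
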
I0108 21:00:00.302770 124694 kubeadm.go:1114] stopping kube-system containers ...
I0108 21:00:00.302789 124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0108 21:00:00.302824 124694 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0108 21:00:00.329264 124694 cri.go:87] found id: "43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4"
I0108 21:00:00.329296 124694 cri.go:87] found id: "3852802493079c7473ee812611ecf809b363dd7bd001d0400d405c7b881a6881"
I0108 21:00:00.329308 124694 cri.go:87] found id: "0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659"
I0108 21:00:00.329317 124694 cri.go:87] found id: ""
I0108 21:00:00.329323 124694 cri.go:232] Stopping containers: [43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4 3852802493079c7473ee812611ecf809b363dd7bd001d0400d405c7b881a6881 0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659]
I0108 21:00:00.329366 124694 ssh_runner.go:195] Run: which crictl
I0108 21:00:00.332622 124694 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 43e20747281eb6a1d8e3a2a1dd2db96b0a024b3e4511920fe07fbe520cf691a4 3852802493079c7473ee812611ecf809b363dd7bd001d0400d405c7b881a6881 0f97b8f8a9f23644dc2d2182faa9c374ee0f59cb8c820d25edf58d15ff43d659
I0108 21:00:00.624345 124694 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0108 21:00:00.699226 124694 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0108 21:00:00.706356 124694 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5639 Jan 8 20:58 /etc/kubernetes/admin.conf
-rw------- 1 root root 5652 Jan 8 20:58 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2015 Jan 8 20:58 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5604 Jan 8 20:58 /etc/kubernetes/scheduler.conf
I0108 21:00:00.706408 124694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0108 21:00:00.713037 124694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0108 21:00:00.719542 124694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0108 21:00:00.725937 124694 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0108 21:00:00.725991 124694 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0108 21:00:00.731944 124694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0108 21:00:00.738208 124694 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0108 21:00:00.738259 124694 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0108 21:00:00.744328 124694 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0108 21:00:00.750786 124694 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
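The per-file grep checks above reduce to one rule: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed so kubeadm can regenerate it. A condensed shell equivalent (a sketch; the tool actually runs each grep separately over SSH, as logged):
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
      || sudo rm -f /etc/kubernetes/$f
  done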
I0108 21:00:00.750804 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0108 21:00:00.994143 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0108 21:00:01.861835 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0108 21:00:02.144772 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0108 21:00:02.193739 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0108 21:00:02.312980 124694 api_server.go:51] waiting for apiserver process to appear ...
I0108 21:00:02.313046 124694 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 21:00:02.324151 124694 api_server.go:71] duration metric: took 11.177196ms to wait for apiserver process to appear ...
I0108 21:00:02.324188 124694 api_server.go:87] waiting for apiserver healthz status ...
I0108 21:00:02.324232 124694 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0108 21:00:02.329308 124694 api_server.go:278] https://192.168.67.2:8443/healthz returned 200:
ok
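The health and version probes here hit plain HTTPS endpoints and can be run by hand; -k is needed because the apiserver certificate is signed by the profile's own CA. A sketch against the address from this log:
  curl -sk https://192.168.67.2:8443/healthz    # prints "ok" when healthy
  curl -sk https://192.168.67.2:8443/version    # JSON whose gitVersion is what the loop below compares against v1.24.6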
I0108 21:00:02.336848 124694 api_server.go:140] control plane version: v1.24.4
W0108 21:00:02.336885 124694 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
I0108 21:00:02.838027 124694 api_server.go:140] control plane version: v1.24.4
W0108 21:00:02.838054 124694 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
I0108 21:00:03.338861 124694 api_server.go:140] control plane version: v1.24.4
W0108 21:00:03.338897 124694 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
I0108 21:00:03.837783 124694 api_server.go:140] control plane version: v1.24.4
W0108 21:00:03.837811 124694 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
I0108 21:00:04.338312 124694 api_server.go:140] control plane version: v1.24.4
W0108 21:00:04.338339 124694 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
W0108 21:00:04.837852 124694 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W0108 21:00:05.337803 124694 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W0108 21:00:05.837782 124694 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W0108 21:00:06.338026 124694 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
I0108 21:00:09.935143 124694 api_server.go:140] control plane version: v1.24.6
I0108 21:00:09.935175 124694 api_server.go:130] duration metric: took 7.610979606s to wait for apiserver health ...
I0108 21:00:09.935185 124694 cni.go:95] Creating CNI manager for ""
I0108 21:00:09.935193 124694 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I0108 21:00:09.937716 124694 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0108 21:00:09.939281 124694 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0108 21:00:10.021100 124694 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.6/kubectl ...
I0108 21:00:10.021132 124694 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I0108 21:00:10.133101 124694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0108 21:00:11.267907 124694 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.134775053s)
I0108 21:00:11.267939 124694 system_pods.go:43] waiting for kube-system pods to appear ...
I0108 21:00:11.274594 124694 system_pods.go:59] 6 kube-system pods found
I0108 21:00:11.274625 124694 system_pods.go:61] "coredns-6d4b75cb6d-48vmf" [d43c5f88-44b8-4ab6-bc5b-f2883eda56e2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0108 21:00:11.274637 124694 system_pods.go:61] "etcd-test-preload-205820" [f39e5236-110c-4587-8d2c-7da2d7802adc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0108 21:00:11.274644 124694 system_pods.go:61] "kindnet-mtvg5" [1257f157-44a7-41fe-9d98-48b85ce53a40] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0108 21:00:11.274653 124694 system_pods.go:61] "kube-proxy-wmrz2" [35e9935b-759b-4c18-9d0b-2c0daaab9a1e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0108 21:00:11.274659 124694 system_pods.go:61] "kube-scheduler-test-preload-205820" [e0e1f824-50ae-4a61-b2c6-d7d2bb6f2edc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0108 21:00:11.274664 124694 system_pods.go:61] "storage-provisioner" [bdbd16cd-b53b-4309-ad17-7915a6d7b693] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0108 21:00:11.274669 124694 system_pods.go:74] duration metric: took 6.724913ms to wait for pod list to return data ...
I0108 21:00:11.274676 124694 node_conditions.go:102] verifying NodePressure condition ...
I0108 21:00:11.276970 124694 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0108 21:00:11.276995 124694 node_conditions.go:123] node cpu capacity is 8
I0108 21:00:11.277010 124694 node_conditions.go:105] duration metric: took 2.328282ms to run NodePressure ...
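The NodePressure verification reads the node's capacity and conditions through the API; the same data is visible with kubectl, e.g. (a sketch using the in-node binaries and kubeconfig from this log):
  sudo /var/lib/minikube/binaries/v1.24.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig describe node test-preload-205820
  # Capacity should list cpu: 8 and ephemeral-storage: 304681132Ki, matching the figures above;
  # the Conditions table reports MemoryPressure/DiskPressure/PIDPressure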
I0108 21:00:11.277035 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0108 21:00:11.436079 124694 kubeadm.go:763] waiting for restarted kubelet to initialise ...
I0108 21:00:11.439304 124694 kubeadm.go:778] kubelet initialised
I0108 21:00:11.439324 124694 kubeadm.go:779] duration metric: took 3.225451ms waiting for restarted kubelet to initialise ...
I0108 21:00:11.439330 124694 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0108 21:00:11.443291 124694 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-48vmf" in "kube-system" namespace to be "Ready" ...
I0108 21:00:13.452847 124694 pod_ready.go:102] pod "coredns-6d4b75cb6d-48vmf" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:15.453183 124694 pod_ready.go:102] pod "coredns-6d4b75cb6d-48vmf" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:17.953269 124694 pod_ready.go:92] pod "coredns-6d4b75cb6d-48vmf" in "kube-system" namespace has status "Ready":"True"
I0108 21:00:17.953294 124694 pod_ready.go:81] duration metric: took 6.509981854s waiting for pod "coredns-6d4b75cb6d-48vmf" in "kube-system" namespace to be "Ready" ...
I0108 21:00:17.953304 124694 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-205820" in "kube-system" namespace to be "Ready" ...
I0108 21:00:19.962548 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:21.963216 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:23.963314 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:26.462627 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:28.462965 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:30.962959 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:32.963068 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:35.463009 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:37.962454 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:40.462881 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:42.963385 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:45.462486 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:47.962468 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:49.962746 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:51.963178 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:54.463217 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:56.963323 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:00:59.463092 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:01.963156 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:04.463567 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:06.464930 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:08.962935 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:11.463300 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:13.962969 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:16.463128 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:18.963199 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:20.963826 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:23.462743 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:25.463158 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:27.962188 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:29.963079 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:32.464217 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:34.962854 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:37.462215 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:39.462584 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:41.462699 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:43.462915 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:45.963307 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:48.463544 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:50.963045 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:52.963170 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:55.462700 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:57.463256 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:01:59.962706 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:01.962779 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:03.963173 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:06.463371 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:08.463437 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:10.465071 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:12.963206 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:15.462589 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:17.462845 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:19.962938 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:21.963353 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:24.463222 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:26.463680 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:28.962594 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:30.962697 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:32.963185 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:35.462477 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:37.463216 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:39.962881 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:42.462539 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:44.462864 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:46.462968 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:48.962577 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:50.962760 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:53.464211 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:55.963075 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:02:58.463348 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:00.962702 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:02.962942 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:04.963134 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:07.462937 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:09.962917 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:12.462863 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:14.962823 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:17.462424 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:19.462845 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:21.962750 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:24.462946 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:26.463390 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:28.962923 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:30.963325 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:33.462969 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:35.963094 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:38.462979 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:40.963186 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:43.462328 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:45.462741 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:47.962483 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:49.963279 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:51.963334 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:54.462958 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:56.963433 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:03:58.963562 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:04:00.963753 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:04:03.463621 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:04:05.962769 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:04:07.962891 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:04:09.963338 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:04:12.462686 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:04:14.463369 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:04:16.963058 124694 pod_ready.go:102] pod "etcd-test-preload-205820" in "kube-system" namespace has status "Ready":"False"
I0108 21:04:17.957364 124694 pod_ready.go:81] duration metric: took 4m0.004045666s waiting for pod "etcd-test-preload-205820" in "kube-system" namespace to be "Ready" ...
E0108 21:04:17.957391 124694 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "etcd-test-preload-205820" in "kube-system" namespace to be "Ready" (will not retry!)
I0108 21:04:17.957419 124694 pod_ready.go:38] duration metric: took 4m6.518080998s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0108 21:04:17.957445 124694 kubeadm.go:631] restartCluster took 4m17.671337074s
W0108 21:04:17.957589 124694 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
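Before the reset, the stuck pod could also be inspected or waited on directly; a sketch of the manual equivalent of the four-minute pod_ready loop above, using the in-node kubectl:
  K=/var/lib/minikube/binaries/v1.24.6/kubectl
  sudo $K --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system describe pod etcd-test-preload-205820
  sudo $K --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system wait --for=condition=Ready pod/etcd-test-preload-205820 --timeout=4m0s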
I0108 21:04:17.957621 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0108 21:04:19.626459 124694 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.668819722s)
I0108 21:04:19.626516 124694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0108 21:04:19.635943 124694 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0108 21:04:19.642808 124694 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0108 21:04:19.642862 124694 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0108 21:04:19.649319 124694 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0108 21:04:19.649357 124694 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0108 21:04:19.686509 124694 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
I0108 21:04:19.686580 124694 kubeadm.go:317] [preflight] Running pre-flight checks
I0108 21:04:19.714334 124694 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
I0108 21:04:19.714410 124694 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
I0108 21:04:19.714442 124694 kubeadm.go:317] OS: Linux
I0108 21:04:19.714480 124694 kubeadm.go:317] CGROUPS_CPU: enabled
I0108 21:04:19.714520 124694 kubeadm.go:317] CGROUPS_CPUACCT: enabled
I0108 21:04:19.714613 124694 kubeadm.go:317] CGROUPS_CPUSET: enabled
I0108 21:04:19.714688 124694 kubeadm.go:317] CGROUPS_DEVICES: enabled
I0108 21:04:19.714729 124694 kubeadm.go:317] CGROUPS_FREEZER: enabled
I0108 21:04:19.714777 124694 kubeadm.go:317] CGROUPS_MEMORY: enabled
I0108 21:04:19.714821 124694 kubeadm.go:317] CGROUPS_PIDS: enabled
I0108 21:04:19.714864 124694 kubeadm.go:317] CGROUPS_HUGETLB: enabled
I0108 21:04:19.714905 124694 kubeadm.go:317] CGROUPS_BLKIO: enabled
I0108 21:04:19.795815 124694 kubeadm.go:317] W0108 21:04:19.681686 6711 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
I0108 21:04:19.796049 124694 kubeadm.go:317] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
I0108 21:04:19.796184 124694 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0108 21:04:19.796272 124694 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
I0108 21:04:19.796332 124694 kubeadm.go:317] [ERROR Port-2379]: Port 2379 is in use
I0108 21:04:19.796381 124694 kubeadm.go:317] [ERROR Port-2380]: Port 2380 is in use
I0108 21:04:19.796489 124694 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
I0108 21:04:19.796595 124694 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
W0108 21:04:19.796778 124694 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1025-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W0108 21:04:19.681686 6711 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
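Ports 2379 and 2380 are etcd's client and peer ports, so this preflight failure means the previous cluster's etcd is still listening even after kubeadm reset. One way to confirm from inside the node (a sketch):
  sudo ss -ltnp | grep -E ':(2379|2380)'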
I0108 21:04:19.796820 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0108 21:04:20.125925 124694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0108 21:04:20.135276 124694 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0108 21:04:20.135332 124694 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0108 21:04:20.142002 124694 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0108 21:04:20.142045 124694 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0108 21:04:20.178099 124694 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
I0108 21:04:20.178220 124694 kubeadm.go:317] [preflight] Running pre-flight checks
I0108 21:04:20.203461 124694 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
I0108 21:04:20.203557 124694 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1025-gcp
I0108 21:04:20.203613 124694 kubeadm.go:317] OS: Linux
I0108 21:04:20.203661 124694 kubeadm.go:317] CGROUPS_CPU: enabled
I0108 21:04:20.203724 124694 kubeadm.go:317] CGROUPS_CPUACCT: enabled
I0108 21:04:20.203781 124694 kubeadm.go:317] CGROUPS_CPUSET: enabled
I0108 21:04:20.203869 124694 kubeadm.go:317] CGROUPS_DEVICES: enabled
I0108 21:04:20.203928 124694 kubeadm.go:317] CGROUPS_FREEZER: enabled
I0108 21:04:20.203973 124694 kubeadm.go:317] CGROUPS_MEMORY: enabled
I0108 21:04:20.204056 124694 kubeadm.go:317] CGROUPS_PIDS: enabled
I0108 21:04:20.204123 124694 kubeadm.go:317] CGROUPS_HUGETLB: enabled
I0108 21:04:20.204198 124694 kubeadm.go:317] CGROUPS_BLKIO: enabled
I0108 21:04:20.268181 124694 kubeadm.go:317] W0108 21:04:20.173147 6979 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
I0108 21:04:20.268365 124694 kubeadm.go:317] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
I0108 21:04:20.268449 124694 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0108 21:04:20.268528 124694 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
I0108 21:04:20.268566 124694 kubeadm.go:317] [ERROR Port-2379]: Port 2379 is in use
I0108 21:04:20.268640 124694 kubeadm.go:317] [ERROR Port-2380]: Port 2380 is in use
I0108 21:04:20.268767 124694 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
I0108 21:04:20.268860 124694 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
I0108 21:04:20.268932 124694 kubeadm.go:398] StartCluster complete in 4m20.046785929s
I0108 21:04:20.268974 124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0108 21:04:20.269027 124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0108 21:04:20.291757 124694 cri.go:87] found id: ""
I0108 21:04:20.291784 124694 logs.go:274] 0 containers: []
W0108 21:04:20.291794 124694 logs.go:276] No container was found matching "kube-apiserver"
I0108 21:04:20.291800 124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0108 21:04:20.291843 124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0108 21:04:20.314092 124694 cri.go:87] found id: ""
I0108 21:04:20.314115 124694 logs.go:274] 0 containers: []
W0108 21:04:20.314121 124694 logs.go:276] No container was found matching "etcd"
I0108 21:04:20.314127 124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0108 21:04:20.314165 124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0108 21:04:20.336438 124694 cri.go:87] found id: ""
I0108 21:04:20.336466 124694 logs.go:274] 0 containers: []
W0108 21:04:20.336476 124694 logs.go:276] No container was found matching "coredns"
I0108 21:04:20.336485 124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0108 21:04:20.336531 124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0108 21:04:20.360386 124694 cri.go:87] found id: ""
I0108 21:04:20.360419 124694 logs.go:274] 0 containers: []
W0108 21:04:20.360428 124694 logs.go:276] No container was found matching "kube-scheduler"
I0108 21:04:20.360436 124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0108 21:04:20.360477 124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0108 21:04:20.384216 124694 cri.go:87] found id: ""
I0108 21:04:20.384244 124694 logs.go:274] 0 containers: []
W0108 21:04:20.384251 124694 logs.go:276] No container was found matching "kube-proxy"
I0108 21:04:20.384259 124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0108 21:04:20.384307 124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0108 21:04:20.407359 124694 cri.go:87] found id: ""
I0108 21:04:20.407385 124694 logs.go:274] 0 containers: []
W0108 21:04:20.407391 124694 logs.go:276] No container was found matching "kubernetes-dashboard"
I0108 21:04:20.407397 124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0108 21:04:20.407446 124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0108 21:04:20.429513 124694 cri.go:87] found id: ""
I0108 21:04:20.429538 124694 logs.go:274] 0 containers: []
W0108 21:04:20.429547 124694 logs.go:276] No container was found matching "storage-provisioner"
I0108 21:04:20.429554 124694 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0108 21:04:20.429592 124694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0108 21:04:20.452750 124694 cri.go:87] found id: ""
I0108 21:04:20.452771 124694 logs.go:274] 0 containers: []
W0108 21:04:20.452777 124694 logs.go:276] No container was found matching "kube-controller-manager"
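The block above is the log-gathering loop: the same crictl name filter run once per control-plane component, each returning empty because kubeadm reset removed every container. A compact shell equivalent of what was just executed:
  for n in kube-apiserver etcd coredns kube-scheduler kube-proxy kubernetes-dashboard storage-provisioner kube-controller-manager; do
    echo "$n: $(sudo crictl ps -a --quiet --name=$n)"
  done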
I0108 21:04:20.452786 124694 logs.go:123] Gathering logs for kubelet ...
I0108 21:04:20.452797 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0108 21:04:20.510605 124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.937893 4359 projected.go:192] Error preparing data for projected volume kube-api-access-svv2t for pod kube-system/kube-proxy-wmrz2: failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
W0108 21:04:20.511028 124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.937978 4359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35e9935b-759b-4c18-9d0b-2c0daaab9a1e-kube-api-access-svv2t podName:35e9935b-759b-4c18-9d0b-2c0daaab9a1e nodeName:}" failed. No retries permitted until 2023-01-08 21:00:11.937956077 +0000 UTC m=+9.792765068 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-svv2t" (UniqueName: "kubernetes.io/projected/35e9935b-759b-4c18-9d0b-2c0daaab9a1e-kube-api-access-svv2t") pod "kube-proxy-wmrz2" (UID: "35e9935b-759b-4c18-9d0b-2c0daaab9a1e") : failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
W0108 21:04:20.511172 124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: W0108 21:00:09.938038 4359 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
W0108 21:04:20.511334 124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938056 4359 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
W0108 21:04:20.511496 124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: W0108 21:00:09.938110 4359 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
W0108 21:04:20.511664 124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938117 4359 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
W0108 21:04:20.511857 124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938151 4359 projected.go:192] Error preparing data for projected volume kube-api-access-wvwgn for pod kube-system/storage-provisioner: failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
W0108 21:04:20.512266 124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938177 4359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdbd16cd-b53b-4309-ad17-7915a6d7b693-kube-api-access-wvwgn podName:bdbd16cd-b53b-4309-ad17-7915a6d7b693 nodeName:}" failed. No retries permitted until 2023-01-08 21:00:10.938168618 +0000 UTC m=+8.792977602 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-wvwgn" (UniqueName: "kubernetes.io/projected/bdbd16cd-b53b-4309-ad17-7915a6d7b693-kube-api-access-wvwgn") pod "storage-provisioner" (UID: "bdbd16cd-b53b-4309-ad17-7915a6d7b693") : failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
W0108 21:04:20.512442 124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938217 4359 projected.go:192] Error preparing data for projected volume kube-api-access-s5nz9 for pod kube-system/kindnet-mtvg5: failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
W0108 21:04:20.512847 124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938249 4359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1257f157-44a7-41fe-9d98-48b85ce53a40-kube-api-access-s5nz9 podName:1257f157-44a7-41fe-9d98-48b85ce53a40 nodeName:}" failed. No retries permitted until 2023-01-08 21:00:10.938238341 +0000 UTC m=+8.793047329 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s5nz9" (UniqueName: "kubernetes.io/projected/1257f157-44a7-41fe-9d98-48b85ce53a40-kube-api-access-s5nz9") pod "kindnet-mtvg5" (UID: "1257f157-44a7-41fe-9d98-48b85ce53a40") : failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
W0108 21:04:20.513031 124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938309 4359 projected.go:192] Error preparing data for projected volume kube-api-access-9t8jr for pod kube-system/coredns-6d4b75cb6d-48vmf: failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
W0108 21:04:20.513475 124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938332 4359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d43c5f88-44b8-4ab6-bc5b-f2883eda56e2-kube-api-access-9t8jr podName:d43c5f88-44b8-4ab6-bc5b-f2883eda56e2 nodeName:}" failed. No retries permitted until 2023-01-08 21:00:10.938325487 +0000 UTC m=+8.793134472 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-9t8jr" (UniqueName: "kubernetes.io/projected/d43c5f88-44b8-4ab6-bc5b-f2883eda56e2-kube-api-access-9t8jr") pod "coredns-6d4b75cb6d-48vmf" (UID: "d43c5f88-44b8-4ab6-bc5b-f2883eda56e2") : failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
W0108 21:04:20.513628 124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: W0108 21:00:09.938363 4359 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
W0108 21:04:20.513802 124694 logs.go:138] Found kubelet problem: Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.938372 4359 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
I0108 21:04:20.534040 124694 logs.go:123] Gathering logs for dmesg ...
I0108 21:04:20.534063 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0108 21:04:20.547468 124694 logs.go:123] Gathering logs for describe nodes ...
I0108 21:04:20.547515 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0108 21:04:20.836897 124694 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0108 21:04:20.836920 124694 logs.go:123] Gathering logs for containerd ...
I0108 21:04:20.836933 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0108 21:04:20.891961 124694 logs.go:123] Gathering logs for container status ...
I0108 21:04:20.891999 124694 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0108 21:04:20.917568 124694 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1025-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W0108 21:04:20.173147 6979 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W0108 21:04:20.917600 124694 out.go:239] *
W0108 21:04:20.917764 124694 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1025-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W0108 21:04:20.173147 6979 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W0108 21:04:20.917788 124694 out.go:239] *
W0108 21:04:20.918668 124694 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0108 21:04:20.921286 124694 out.go:177] X Problems detected in kubelet:
I0108 21:04:20.922717 124694 out.go:177] Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.937893 4359 projected.go:192] Error preparing data for projected volume kube-api-access-svv2t for pod kube-system/kube-proxy-wmrz2: failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
I0108 21:04:20.925364 124694 out.go:177] Jan 08 21:00:10 test-preload-205820 kubelet[4359]: E0108 21:00:09.937978 4359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35e9935b-759b-4c18-9d0b-2c0daaab9a1e-kube-api-access-svv2t podName:35e9935b-759b-4c18-9d0b-2c0daaab9a1e nodeName:}" failed. No retries permitted until 2023-01-08 21:00:11.937956077 +0000 UTC m=+9.792765068 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-svv2t" (UniqueName: "kubernetes.io/projected/35e9935b-759b-4c18-9d0b-2c0daaab9a1e-kube-api-access-svv2t") pod "kube-proxy-wmrz2" (UID: "35e9935b-759b-4c18-9d0b-2c0daaab9a1e") : failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-205820" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
I0108 21:04:20.926971 124694 out.go:177] Jan 08 21:00:10 test-preload-205820 kubelet[4359]: W0108 21:00:09.938038 4359 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-205820" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-205820' and this object
I0108 21:04:20.929431 124694 out.go:177]
W0108 21:04:20.930937 124694 out.go:239] X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1025-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W0108 21:04:20.173147 6979 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1025-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W0108 21:04:20.931018 124694 out.go:239] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
W0108 21:04:20.931068 124694 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/5484
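Note that lsof -p filters by PID rather than by port; the check the suggestion intends is the -i form. A hedged sketch, run on whichever host actually owns the socket (here that would be inside the guest):
$ # list the process listening on etcd's client port
$ sudo lsof -i :2379 -sTCP:LISTEN
$ # kill <PID>   # only after confirming it is a stale etcd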
I0108 21:04:20.932735 124694 out.go:177]
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
*
* ==> containerd <==
* -- Logs begin at Sun 2023-01-08 20:58:22 UTC, end at Sun 2023-01-08 21:04:21 UTC. --
Jan 08 21:04:19 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:19.927436574Z" level=error msg="StopPodSandbox for \"\\\"Using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"\\\"Using\": not found"
Jan 08 21:04:19 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:19.943218825Z" level=info msg="StopPodSandbox for \"this\""
Jan 08 21:04:19 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:19.943264656Z" level=error msg="StopPodSandbox for \"this\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"this\": not found"
Jan 08 21:04:19 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:19.959410165Z" level=info msg="StopPodSandbox for \"endpoint\""
Jan 08 21:04:19 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:19.959456780Z" level=error msg="StopPodSandbox for \"endpoint\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint\": not found"
Jan 08 21:04:19 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:19.976523539Z" level=info msg="StopPodSandbox for \"is\""
Jan 08 21:04:19 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:19.976573063Z" level=error msg="StopPodSandbox for \"is\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"is\": not found"
Jan 08 21:04:19 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:19.992515921Z" level=info msg="StopPodSandbox for \"deprecated,\""
Jan 08 21:04:19 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:19.992564379Z" level=error msg="StopPodSandbox for \"deprecated,\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"deprecated,\": not found"
Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.008857888Z" level=info msg="StopPodSandbox for \"please\""
Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.008907023Z" level=error msg="StopPodSandbox for \"please\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"please\": not found"
Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.025453298Z" level=info msg="StopPodSandbox for \"consider\""
Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.025506712Z" level=error msg="StopPodSandbox for \"consider\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"consider\": not found"
Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.040762943Z" level=info msg="StopPodSandbox for \"using\""
Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.040804963Z" level=error msg="StopPodSandbox for \"using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"using\": not found"
Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.057148884Z" level=info msg="StopPodSandbox for \"full\""
Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.057195124Z" level=error msg="StopPodSandbox for \"full\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"full\": not found"
Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.073055648Z" level=info msg="StopPodSandbox for \"URL\""
Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.073099827Z" level=error msg="StopPodSandbox for \"URL\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL\": not found"
Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.089148856Z" level=info msg="StopPodSandbox for \"format\\\"\""
Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.089197996Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.105182966Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.105229329Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.121419409Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
Jan 08 21:04:20 test-preload-205820 containerd[3061]: time="2023-01-08T21:04:20.121466475Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
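Read in sequence, the quoted sandbox "IDs" above (Using, this, endpoint, is, deprecated, please, consider, using, full, URL, format, endpoint=..., URL=...) are the whitespace-split words of the containerd-endpoint deprecation warning, which suggests the sandbox list being iterated here was built from an unquoted log line rather than from real pod sandbox IDs. A sketch that would surface the same pattern, assuming journald access in the guest:
$ # every StopPodSandbox attempt containerd logged; the "IDs" join back into a sentence
$ out/minikube-linux-amd64 ssh -p test-preload-205820 -- sudo journalctl -u containerd --no-pager | grep 'StopPodSandbox for'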
*
* ==> describe nodes <==
*
* ==> dmesg <==
* [ +0.008448] FS-Cache: Duplicate cookie detected
[ +0.005292] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
[ +0.006738] FS-Cache: O-cookie d=00000000b351f190{9p.inode} n=00000000b94a5e01
[ +0.008741] FS-Cache: O-key=[8] '8ea00f0200000000'
[ +0.006286] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
[ +0.007953] FS-Cache: N-cookie d=00000000b351f190{9p.inode} n=000000008bdebc64
[ +0.008734] FS-Cache: N-key=[8] '8ea00f0200000000'
[ +3.644617] FS-Cache: Duplicate cookie detected
[ +0.004692] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
[ +0.006733] FS-Cache: O-cookie d=00000000b351f190{9p.inode} n=000000002647edbf
[ +0.007353] FS-Cache: O-key=[8] '8da00f0200000000'
[ +0.004933] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
[ +0.006615] FS-Cache: N-cookie d=00000000b351f190{9p.inode} n=000000002ffef31b
[ +0.008707] FS-Cache: N-key=[8] '8da00f0200000000'
[ +0.360206] FS-Cache: Duplicate cookie detected
[ +0.004682] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
[ +0.006745] FS-Cache: O-cookie d=00000000b351f190{9p.inode} n=00000000ca95e3ed
[ +0.007364] FS-Cache: O-key=[8] '98a00f0200000000'
[ +0.005138] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
[ +0.007934] FS-Cache: N-cookie d=00000000b351f190{9p.inode} n=000000009a7f623e
[ +0.008739] FS-Cache: N-key=[8] '98a00f0200000000'
[Jan 8 20:44] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[Jan 8 21:00] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000386] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.011260] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
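The repeated overlayfs warning means one mount's lowerdir is simultaneously another mount's upperdir/workdir, which can happen when a container is recreated on top of a leftover overlay. A sketch for listing the overlapping mounts, assuming findmnt (util-linux) is available in the guest:
$ # show every overlay mount with its lowerdir/upperdir/workdir options
$ out/minikube-linux-amd64 ssh -p test-preload-205820 -- sudo findmnt -t overlay -o TARGET,OPTIONS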
*
* ==> kernel <==
* 21:04:21 up 46 min, 0 users, load average: 0.32, 0.59, 0.69
Linux test-preload-205820 5.15.0-1025-gcp #32~20.04.2-Ubuntu SMP Tue Nov 29 08:31:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.5 LTS"
*
* ==> kubelet <==
* -- Logs begin at Sun 2023-01-08 20:58:22 UTC, end at Sun 2023-01-08 21:04:22 UTC. --
Jan 08 21:02:52 test-preload-205820 kubelet[4359]: I0108 21:02:52.350590 4359 scope.go:110] "RemoveContainer" containerID="331e6cfcf9c146cb0bb87ed8961668f3b1301b48f3d6c4fe14f75657e855c72c"
Jan 08 21:02:52 test-preload-205820 kubelet[4359]: I0108 21:02:52.799370 4359 scope.go:110] "RemoveContainer" containerID="331e6cfcf9c146cb0bb87ed8961668f3b1301b48f3d6c4fe14f75657e855c72c"
Jan 08 21:02:52 test-preload-205820 kubelet[4359]: I0108 21:02:52.799742 4359 scope.go:110] "RemoveContainer" containerID="83c96f78dfef18e3ea969ce2f2c7a5920a2fb03fd2161a0d3193c273738702ba"
Jan 08 21:02:52 test-preload-205820 kubelet[4359]: E0108 21:02:52.800239 4359 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-205820_kube-system(0106aa4904eaf95a3dcc4972da83cce0)\"" pod="kube-system/etcd-test-preload-205820" podUID=0106aa4904eaf95a3dcc4972da83cce0
Jan 08 21:02:58 test-preload-205820 kubelet[4359]: I0108 21:02:58.243657 4359 scope.go:110] "RemoveContainer" containerID="83c96f78dfef18e3ea969ce2f2c7a5920a2fb03fd2161a0d3193c273738702ba"
Jan 08 21:02:58 test-preload-205820 kubelet[4359]: E0108 21:02:58.244202 4359 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-205820_kube-system(0106aa4904eaf95a3dcc4972da83cce0)\"" pod="kube-system/etcd-test-preload-205820" podUID=0106aa4904eaf95a3dcc4972da83cce0
Jan 08 21:02:58 test-preload-205820 kubelet[4359]: I0108 21:02:58.812972 4359 scope.go:110] "RemoveContainer" containerID="83c96f78dfef18e3ea969ce2f2c7a5920a2fb03fd2161a0d3193c273738702ba"
Jan 08 21:02:58 test-preload-205820 kubelet[4359]: E0108 21:02:58.813284 4359 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-205820_kube-system(0106aa4904eaf95a3dcc4972da83cce0)\"" pod="kube-system/etcd-test-preload-205820" podUID=0106aa4904eaf95a3dcc4972da83cce0
Jan 08 21:02:59 test-preload-205820 kubelet[4359]: I0108 21:02:59.814359 4359 scope.go:110] "RemoveContainer" containerID="83c96f78dfef18e3ea969ce2f2c7a5920a2fb03fd2161a0d3193c273738702ba"
Jan 08 21:02:59 test-preload-205820 kubelet[4359]: E0108 21:02:59.814663 4359 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-205820_kube-system(0106aa4904eaf95a3dcc4972da83cce0)\"" pod="kube-system/etcd-test-preload-205820" podUID=0106aa4904eaf95a3dcc4972da83cce0
Jan 08 21:03:11 test-preload-205820 kubelet[4359]: I0108 21:03:11.350653 4359 scope.go:110] "RemoveContainer" containerID="83c96f78dfef18e3ea969ce2f2c7a5920a2fb03fd2161a0d3193c273738702ba"
Jan 08 21:03:11 test-preload-205820 kubelet[4359]: E0108 21:03:11.350978 4359 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-205820_kube-system(0106aa4904eaf95a3dcc4972da83cce0)\"" pod="kube-system/etcd-test-preload-205820" podUID=0106aa4904eaf95a3dcc4972da83cce0
Jan 08 21:03:24 test-preload-205820 kubelet[4359]: I0108 21:03:24.350602 4359 scope.go:110] "RemoveContainer" containerID="83c96f78dfef18e3ea969ce2f2c7a5920a2fb03fd2161a0d3193c273738702ba"
Jan 08 21:03:24 test-preload-205820 kubelet[4359]: E0108 21:03:24.351149 4359 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-205820_kube-system(0106aa4904eaf95a3dcc4972da83cce0)\"" pod="kube-system/etcd-test-preload-205820" podUID=0106aa4904eaf95a3dcc4972da83cce0
Jan 08 21:03:35 test-preload-205820 kubelet[4359]: I0108 21:03:35.350448 4359 scope.go:110] "RemoveContainer" containerID="83c96f78dfef18e3ea969ce2f2c7a5920a2fb03fd2161a0d3193c273738702ba"
Jan 08 21:03:35 test-preload-205820 kubelet[4359]: E0108 21:03:35.351057 4359 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-205820_kube-system(0106aa4904eaf95a3dcc4972da83cce0)\"" pod="kube-system/etcd-test-preload-205820" podUID=0106aa4904eaf95a3dcc4972da83cce0
Jan 08 21:03:47 test-preload-205820 kubelet[4359]: I0108 21:03:47.350062 4359 scope.go:110] "RemoveContainer" containerID="83c96f78dfef18e3ea969ce2f2c7a5920a2fb03fd2161a0d3193c273738702ba"
Jan 08 21:03:47 test-preload-205820 kubelet[4359]: E0108 21:03:47.350424 4359 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-205820_kube-system(0106aa4904eaf95a3dcc4972da83cce0)\"" pod="kube-system/etcd-test-preload-205820" podUID=0106aa4904eaf95a3dcc4972da83cce0
Jan 08 21:04:01 test-preload-205820 kubelet[4359]: I0108 21:04:01.349900 4359 scope.go:110] "RemoveContainer" containerID="83c96f78dfef18e3ea969ce2f2c7a5920a2fb03fd2161a0d3193c273738702ba"
Jan 08 21:04:01 test-preload-205820 kubelet[4359]: E0108 21:04:01.350244 4359 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-205820_kube-system(0106aa4904eaf95a3dcc4972da83cce0)\"" pod="kube-system/etcd-test-preload-205820" podUID=0106aa4904eaf95a3dcc4972da83cce0
Jan 08 21:04:12 test-preload-205820 kubelet[4359]: I0108 21:04:12.350774 4359 scope.go:110] "RemoveContainer" containerID="83c96f78dfef18e3ea969ce2f2c7a5920a2fb03fd2161a0d3193c273738702ba"
Jan 08 21:04:12 test-preload-205820 kubelet[4359]: E0108 21:04:12.351100 4359 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-205820_kube-system(0106aa4904eaf95a3dcc4972da83cce0)\"" pod="kube-system/etcd-test-preload-205820" podUID=0106aa4904eaf95a3dcc4972da83cce0
Jan 08 21:04:18 test-preload-205820 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
Jan 08 21:04:18 test-preload-205820 systemd[1]: kubelet.service: Succeeded.
Jan 08 21:04:18 test-preload-205820 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
-- /stdout --
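The kubelet log above shows etcd in CrashLoopBackOff for the whole five-minute window, consistent with the Port-2379/2380 preflight failure: an older etcd appears to still hold the ports while the restarted pod keeps failing. A sketch for reading the failing container's own logs through crictl, assuming the runtime is still reachable (the container-id placeholder is hypothetical):
$ # find the etcd container, including exited attempts
$ out/minikube-linux-amd64 ssh -p test-preload-205820 -- sudo crictl ps -a --name etcd
$ # then read its logs by ID
$ out/minikube-linux-amd64 ssh -p test-preload-205820 -- sudo crictl logs <container-id>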
** stderr **
E0108 21:04:21.980071 129545 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
! unable to fetch logs for: describe nodes
** /stderr **
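The describe-nodes failure is only a symptom: the apiserver is down, so localhost:8443 refuses connections, matching the Stopped status below. A quick probe, sketched under the assumption that the standard /healthz endpoint is still served on this version (-k skips certificate verification):
$ # probe the apiserver health endpoint from inside the guest
$ out/minikube-linux-amd64 ssh -p test-preload-205820 -- curl -sk https://localhost:8443/healthz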
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-205820 -n test-preload-205820
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-205820 -n test-preload-205820: exit status 2 (341.899606ms)
-- stdout --
Stopped
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "test-preload-205820" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-205820" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-linux-amd64 delete -p test-preload-205820
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-205820: (2.038663381s)
--- FAIL: TestPreload (364.34s)