=== RUN TestPreload
preload_test.go:44: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-230809 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-230809 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4: (50.604250072s)
preload_test.go:57: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-230809 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-230809 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (1.711754438s)
preload_test.go:67: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-230809 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.6
E1101 23:09:22.269750 12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory
E1101 23:09:42.407261 12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/ingress-addon-legacy-225316/client.crt: no such file or directory
E1101 23:12:32.185694 12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
E1101 23:12:59.224603 12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/addons-224533/client.crt: no such file or directory
E1101 23:13:55.363507 12840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/functional-225030/client.crt: no such file or directory
preload_test.go:67: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p test-preload-230809 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.6: exit status 81 (5m0.539248776s)
-- stdout --
* [test-preload-230809] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=15232
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/15232-6112/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-6112/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
* Using the docker driver based on existing profile
* Starting control plane node test-preload-230809 in cluster test-preload-230809
* Pulling base image ...
* Downloading Kubernetes v1.24.6 preload ...
* Updating the running docker "test-preload-230809" container ...
* Preparing Kubernetes v1.24.6 on containerd 1.6.9 ...
* Configuring CNI (Container Networking Interface) ...
X Problems detected in kubelet:
Nov 01 23:09:52 test-preload-230809 kubelet[4572]: W1101 23:09:52.121441 4572 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
Nov 01 23:09:52 test-preload-230809 kubelet[4572]: E1101 23:09:52.121486 4572 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
Nov 01 23:09:52 test-preload-230809 kubelet[4572]: W1101 23:09:52.134778 4572 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
-- /stdout --
** stderr **
I1101 23:09:02.101256 127145 out.go:296] Setting OutFile to fd 1 ...
I1101 23:09:02.101369 127145 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1101 23:09:02.101380 127145 out.go:309] Setting ErrFile to fd 2...
I1101 23:09:02.101385 127145 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1101 23:09:02.101473 127145 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-6112/.minikube/bin
I1101 23:09:02.101987 127145 out.go:303] Setting JSON to false
I1101 23:09:02.102936 127145 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3088,"bootTime":1667341054,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1101 23:09:02.102996 127145 start.go:126] virtualization: kvm guest
I1101 23:09:02.105803 127145 out.go:177] * [test-preload-230809] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
I1101 23:09:02.107347 127145 notify.go:220] Checking for updates...
I1101 23:09:02.108879 127145 out.go:177] - MINIKUBE_LOCATION=15232
I1101 23:09:02.110538 127145 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1101 23:09:02.112123 127145 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15232-6112/kubeconfig
I1101 23:09:02.113662 127145 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-6112/.minikube
I1101 23:09:02.115184 127145 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I1101 23:09:02.116881 127145 config.go:180] Loaded profile config "test-preload-230809": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I1101 23:09:02.118764 127145 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
I1101 23:09:02.120144 127145 driver.go:365] Setting default libvirt URI to qemu:///system
I1101 23:09:02.148923 127145 docker.go:137] docker version: linux-20.10.21
I1101 23:09:02.149004 127145 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1101 23:09:02.241848 127145 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:39 SystemTime:2022-11-01 23:09:02.16794253 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1101 23:09:02.241979 127145 docker.go:254] overlay module found
I1101 23:09:02.245118 127145 out.go:177] * Using the docker driver based on existing profile
I1101 23:09:02.246572 127145 start.go:282] selected driver: docker
I1101 23:09:02.246590 127145 start.go:808] validating driver "docker" against &{Name:test-preload-230809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-230809 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1101 23:09:02.246667 127145 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1101 23:09:02.247466 127145 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1101 23:09:02.338554 127145 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:39 SystemTime:2022-11-01 23:09:02.266470239 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1101 23:09:02.338791 127145 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1101 23:09:02.338813 127145 cni.go:95] Creating CNI manager for ""
I1101 23:09:02.338820 127145 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I1101 23:09:02.338831 127145 start_flags.go:317] config:
{Name:test-preload-230809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-230809 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1101 23:09:02.341335 127145 out.go:177] * Starting control plane node test-preload-230809 in cluster test-preload-230809
I1101 23:09:02.342819 127145 cache.go:120] Beginning downloading kic base image for docker with containerd
I1101 23:09:02.344289 127145 out.go:177] * Pulling base image ...
I1101 23:09:02.345773 127145 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I1101 23:09:02.345854 127145 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
I1101 23:09:02.367470 127145 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
I1101 23:09:02.367494 127145 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
I1101 23:09:02.456956 127145 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
I1101 23:09:02.456979 127145 cache.go:57] Caching tarball of preloaded images
I1101 23:09:02.457299 127145 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I1101 23:09:02.459387 127145 out.go:177] * Downloading Kubernetes v1.24.6 preload ...
I1101 23:09:02.460985 127145 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I1101 23:09:02.574127 127145 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4?checksum=md5:0de094b674a9198bc47721c3b23603d5 -> /home/jenkins/minikube-integration/15232-6112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
I1101 23:09:07.458996 127145 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I1101 23:09:07.459100 127145 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15232-6112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I1101 23:09:08.389256 127145 cache.go:60] Finished verifying existence of preloaded tar for v1.24.6 on containerd
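The download above carries its expected digest in the URL query (?checksum=md5:...), and the "verifying checksum" step re-hashes the file on disk against it. A minimal Go sketch of that verification pattern, using the file name and digest from the log; this is illustrative, not minikube's actual preload code:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 re-hashes a downloaded file and compares the result to the
// expected digest, mirroring the check preload.go logs above.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// Digest copied from the download URL in the log.
	err := verifyMD5("preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4",
		"0de094b674a9198bc47721c3b23603d5")
	fmt.Println(err)
}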
I1101 23:09:08.389384 127145 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/config.json ...
I1101 23:09:08.389578 127145 cache.go:208] Successfully downloaded all kic artifacts
I1101 23:09:08.389617 127145 start.go:364] acquiring machines lock for test-preload-230809: {Name:mke051021b2965b04875f4fe9250ee1fc48098e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1101 23:09:08.389726 127145 start.go:368] acquired machines lock for "test-preload-230809" in 76.094µs
I1101 23:09:08.389751 127145 start.go:96] Skipping create...Using existing machine configuration
I1101 23:09:08.389762 127145 fix.go:55] fixHost starting:
I1101 23:09:08.390003 127145 cli_runner.go:164] Run: docker container inspect test-preload-230809 --format={{.State.Status}}
I1101 23:09:08.411982 127145 fix.go:103] recreateIfNeeded on test-preload-230809: state=Running err=<nil>
W1101 23:09:08.412027 127145 fix.go:129] unexpected machine state, will restart: <nil>
I1101 23:09:08.414797 127145 out.go:177] * Updating the running docker "test-preload-230809" container ...
I1101 23:09:08.416264 127145 machine.go:88] provisioning docker machine ...
I1101 23:09:08.416295 127145 ubuntu.go:169] provisioning hostname "test-preload-230809"
I1101 23:09:08.416338 127145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-230809
I1101 23:09:08.439734 127145 main.go:134] libmachine: Using SSH client type: native
I1101 23:09:08.440024 127145 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49277 <nil> <nil>}
I1101 23:09:08.440069 127145 main.go:134] libmachine: About to run SSH command:
sudo hostname test-preload-230809 && echo "test-preload-230809" | sudo tee /etc/hostname
I1101 23:09:08.562938 127145 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-230809
I1101 23:09:08.563010 127145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-230809
I1101 23:09:08.585385 127145 main.go:134] libmachine: Using SSH client type: native
I1101 23:09:08.585561 127145 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49277 <nil> <nil>}
I1101 23:09:08.585590 127145 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\stest-preload-230809' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-230809/g' /etc/hosts;
else
echo '127.0.1.1 test-preload-230809' | sudo tee -a /etc/hosts;
fi
fi
I1101 23:09:08.698901 127145 main.go:134] libmachine: SSH cmd err, output: <nil>:
I1101 23:09:08.698934 127145 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15232-6112/.minikube CaCertPath:/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15232-6112/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15232-6112/.minikube}
I1101 23:09:08.698966 127145 ubuntu.go:177] setting up certificates
I1101 23:09:08.698978 127145 provision.go:83] configureAuth start
I1101 23:09:08.699037 127145 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-230809
I1101 23:09:08.721518 127145 provision.go:138] copyHostCerts
I1101 23:09:08.721585 127145 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-6112/.minikube/ca.pem, removing ...
I1101 23:09:08.721599 127145 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-6112/.minikube/ca.pem
I1101 23:09:08.721689 127145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15232-6112/.minikube/ca.pem (1078 bytes)
I1101 23:09:08.721805 127145 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-6112/.minikube/cert.pem, removing ...
I1101 23:09:08.721820 127145 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-6112/.minikube/cert.pem
I1101 23:09:08.721860 127145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15232-6112/.minikube/cert.pem (1123 bytes)
I1101 23:09:08.721933 127145 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-6112/.minikube/key.pem, removing ...
I1101 23:09:08.721947 127145 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-6112/.minikube/key.pem
I1101 23:09:08.721984 127145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15232-6112/.minikube/key.pem (1675 bytes)
I1101 23:09:08.722065 127145 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15232-6112/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca-key.pem org=jenkins.test-preload-230809 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-230809]
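The server cert generated above lists its SANs explicitly (node IP, loopback, and the machine hostnames). A minimal Go sketch of building an x509 template with that SAN list; names and IPs are copied from the log, key generation and signing are elided, and this is not minikube's provisioning code:

package main

import (
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// SANs taken from the provision.go:112 log line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-230809"}},
		IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "test-preload-230809"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	fmt.Printf("template with %d DNS SANs and %d IP SANs\n", len(tmpl.DNSNames), len(tmpl.IPAddresses))
}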
I1101 23:09:09.342668 127145 provision.go:172] copyRemoteCerts
I1101 23:09:09.342737 127145 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1101 23:09:09.342788 127145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-230809
I1101 23:09:09.365869 127145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/test-preload-230809/id_rsa Username:docker}
I1101 23:09:09.450803 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1101 23:09:09.467332 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I1101 23:09:09.484069 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1101 23:09:09.500288 127145 provision.go:86] duration metric: configureAuth took 801.291693ms
I1101 23:09:09.500314 127145 ubuntu.go:193] setting minikube options for container-runtime
I1101 23:09:09.500489 127145 config.go:180] Loaded profile config "test-preload-230809": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.6
I1101 23:09:09.500504 127145 machine.go:91] provisioned docker machine in 1.084227489s
I1101 23:09:09.500512 127145 start.go:300] post-start starting for "test-preload-230809" (driver="docker")
I1101 23:09:09.500518 127145 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1101 23:09:09.500574 127145 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1101 23:09:09.500612 127145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-230809
I1101 23:09:09.523524 127145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/test-preload-230809/id_rsa Username:docker}
I1101 23:09:09.606420 127145 ssh_runner.go:195] Run: cat /etc/os-release
I1101 23:09:09.608955 127145 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1101 23:09:09.608997 127145 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1101 23:09:09.609008 127145 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1101 23:09:09.609014 127145 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I1101 23:09:09.609026 127145 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-6112/.minikube/addons for local assets ...
I1101 23:09:09.609074 127145 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-6112/.minikube/files for local assets ...
I1101 23:09:09.609141 127145 filesync.go:149] local asset: /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/128402.pem -> 128402.pem in /etc/ssl/certs
I1101 23:09:09.609211 127145 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1101 23:09:09.615422 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/128402.pem --> /etc/ssl/certs/128402.pem (1708 bytes)
I1101 23:09:09.632348 127145 start.go:303] post-start completed in 131.826095ms
I1101 23:09:09.632431 127145 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1101 23:09:09.632484 127145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-230809
I1101 23:09:09.655572 127145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/test-preload-230809/id_rsa Username:docker}
I1101 23:09:09.739833 127145 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1101 23:09:09.743685 127145 fix.go:57] fixHost completed within 1.353918347s
I1101 23:09:09.743711 127145 start.go:83] releasing machines lock for "test-preload-230809", held for 1.353965858s
I1101 23:09:09.743793 127145 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-230809
I1101 23:09:09.766548 127145 ssh_runner.go:195] Run: systemctl --version
I1101 23:09:09.766597 127145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-230809
I1101 23:09:09.766663 127145 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I1101 23:09:09.766716 127145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-230809
I1101 23:09:09.792264 127145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/test-preload-230809/id_rsa Username:docker}
I1101 23:09:09.792322 127145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/test-preload-230809/id_rsa Username:docker}
I1101 23:09:09.888741 127145 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1101 23:09:09.898412 127145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1101 23:09:09.907129 127145 docker.go:189] disabling docker service ...
I1101 23:09:09.907178 127145 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1101 23:09:09.916127 127145 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1101 23:09:09.924535 127145 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1101 23:09:10.021637 127145 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1101 23:09:10.121893 127145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1101 23:09:10.130949 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1101 23:09:10.143348 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
I1101 23:09:10.150803 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
I1101 23:09:10.158084 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
I1101 23:09:10.165427 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
I1101 23:09:10.172620 127145 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1101 23:09:10.178500 127145 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1101 23:09:10.184228 127145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 23:09:10.274591 127145 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1101 23:09:10.352393 127145 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
I1101 23:09:10.352463 127145 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1101 23:09:10.357122 127145 start.go:472] Will wait 60s for crictl version
I1101 23:09:10.357191 127145 ssh_runner.go:195] Run: sudo crictl version
I1101 23:09:10.392488 127145 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
stdout:
stderr:
time="2022-11-01T23:09:10Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
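crictl fails here because containerd was just restarted and its CRI server has not finished initializing, so the caller sleeps and retries (retry.go:31). A minimal sketch of that retry-with-delay pattern, with a fixed illustrative delay rather than minikube's actual backoff logic:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retry runs fn until it succeeds or attempts are exhausted, sleeping
// between tries, as the log does before re-running `sudo crictl version`.
func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("all %d attempts failed, last error: %w", attempts, err)
}

func main() {
	err := retry(3, 11*time.Second, func() error {
		// Early attempts may fail with "server is not initialized yet"
		// while containerd is still coming back up.
		return exec.Command("sudo", "crictl", "version").Run()
	})
	fmt.Println(err)
}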
I1101 23:09:21.439528 127145 ssh_runner.go:195] Run: sudo crictl version
I1101 23:09:21.462449 127145 start.go:481] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.6.9
RuntimeApiVersion: v1alpha2
I1101 23:09:21.462510 127145 ssh_runner.go:195] Run: containerd --version
I1101 23:09:21.484971 127145 ssh_runner.go:195] Run: containerd --version
I1101 23:09:21.509013 127145 out.go:177] * Preparing Kubernetes v1.24.6 on containerd 1.6.9 ...
I1101 23:09:21.510580 127145 cli_runner.go:164] Run: docker network inspect test-preload-230809 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1101 23:09:21.532621 127145 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
I1101 23:09:21.536061 127145 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I1101 23:09:21.536135 127145 ssh_runner.go:195] Run: sudo crictl images --output json
I1101 23:09:21.558771 127145 containerd.go:549] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.6". assuming images are not preloaded.
I1101 23:09:21.558833 127145 ssh_runner.go:195] Run: which lz4
I1101 23:09:21.561739 127145 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1101 23:09:21.564671 127145 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/preloaded.tar.lz4': No such file or directory
I1101 23:09:21.564695 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458739102 bytes)
I1101 23:09:22.512481 127145 containerd.go:496] Took 0.950765 seconds to copy over tarball
I1101 23:09:22.512539 127145 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I1101 23:09:25.309553 127145 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.796992099s)
I1101 23:09:25.309668 127145 containerd.go:503] Took 2.797150 seconds to extract the tarball
I1101 23:09:25.309687 127145 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1101 23:09:25.324395 127145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 23:09:25.422371 127145 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1101 23:09:25.510170 127145 ssh_runner.go:195] Run: sudo crictl images --output json
I1101 23:09:25.538232 127145 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.24.6 k8s.gcr.io/kube-controller-manager:v1.24.6 k8s.gcr.io/kube-scheduler:v1.24.6 k8s.gcr.io/kube-proxy:v1.24.6 k8s.gcr.io/pause:3.7 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
I1101 23:09:25.538307 127145 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I1101 23:09:25.538343 127145 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.24.6
I1101 23:09:25.538380 127145 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.24.6
I1101 23:09:25.538401 127145 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.24.6
I1101 23:09:25.538410 127145 image.go:134] retrieving image: k8s.gcr.io/pause:3.7
I1101 23:09:25.538365 127145 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
I1101 23:09:25.538347 127145 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.3-0
I1101 23:09:25.538380 127145 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.24.6
I1101 23:09:25.539377 127145 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I1101 23:09:25.539486 127145 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.24.6: Error: No such image: k8s.gcr.io/kube-apiserver:v1.24.6
I1101 23:09:25.539520 127145 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.24.6: Error: No such image: k8s.gcr.io/kube-scheduler:v1.24.6
I1101 23:09:25.539552 127145 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.24.6: Error: No such image: k8s.gcr.io/kube-proxy:v1.24.6
I1101 23:09:25.539747 127145 image.go:177] daemon lookup for k8s.gcr.io/pause:3.7: Error: No such image: k8s.gcr.io/pause:3.7
I1101 23:09:25.540025 127145 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.5.3-0: Error: No such image: k8s.gcr.io/etcd:3.5.3-0
I1101 23:09:25.540223 127145 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.24.6: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.24.6
I1101 23:09:25.540448 127145 image.go:177] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.6: Error: No such image: k8s.gcr.io/coredns/coredns:v1.8.6
I1101 23:09:25.987285 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.7"
I1101 23:09:25.999857 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
I1101 23:09:26.002925 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.6"
I1101 23:09:26.009305 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.6"
I1101 23:09:26.050246 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.3-0"
I1101 23:09:26.065466 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.6"
I1101 23:09:26.075511 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.6"
I1101 23:09:26.363138 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
I1101 23:09:26.825611 127145 cache_images.go:116] "k8s.gcr.io/pause:3.7" needs transfer: "k8s.gcr.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
I1101 23:09:26.825704 127145 cri.go:216] Removing image: k8s.gcr.io/pause:3.7
I1101 23:09:26.825763 127145 ssh_runner.go:195] Run: which crictl
I1101 23:09:26.922091 127145 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
I1101 23:09:26.922201 127145 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
I1101 23:09:26.922266 127145 ssh_runner.go:195] Run: which crictl
I1101 23:09:26.935023 127145 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.24.6" needs transfer: "k8s.gcr.io/kube-apiserver:v1.24.6" does not exist at hash "860f263331c9513ddab44d4d8a9a4a7304313b3aa0776decc1d7fc6acdd69ab0" in container runtime
I1101 23:09:26.935049 127145 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.24.6" needs transfer: "k8s.gcr.io/kube-proxy:v1.24.6" does not exist at hash "0bb39497ab33bb5f8aaff88ced53a5fcd360fcf5da647609619d4f5c8f1483d2" in container runtime
I1101 23:09:26.935073 127145 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.24.6
I1101 23:09:26.935157 127145 ssh_runner.go:195] Run: which crictl
I1101 23:09:26.935073 127145 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.24.6
I1101 23:09:26.935237 127145 ssh_runner.go:195] Run: which crictl
I1101 23:09:27.033281 127145 cache_images.go:116] "k8s.gcr.io/etcd:3.5.3-0" needs transfer: "k8s.gcr.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
I1101 23:09:27.033386 127145 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.3-0
I1101 23:09:27.033448 127145 ssh_runner.go:195] Run: which crictl
I1101 23:09:27.118607 127145 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.6": (1.053106276s)
I1101 23:09:27.197931 127145 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.24.6" needs transfer: "k8s.gcr.io/kube-scheduler:v1.24.6" does not exist at hash "c786c777a4e1c21907e77042428837645fa382d3bd14925cf78f0d163d6d332e" in container runtime
I1101 23:09:27.118727 127145 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.6": (1.043182812s)
I1101 23:09:27.145553 127145 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
I1101 23:09:27.198012 127145 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.24.6
I1101 23:09:27.198041 127145 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I1101 23:09:27.198067 127145 ssh_runner.go:195] Run: which crictl
I1101 23:09:27.198114 127145 ssh_runner.go:195] Run: which crictl
I1101 23:09:27.145664 127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7
I1101 23:09:27.145702 127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
I1101 23:09:27.145736 127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6
I1101 23:09:27.145736 127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.6
I1101 23:09:27.145776 127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0
I1101 23:09:27.197981 127145 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.24.6" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.24.6" does not exist at hash "c6c20157a42337ecb7675be59e1dc34bc5a91288c7eeac1e30ec97767a9055eb" in container runtime
I1101 23:09:27.198282 127145 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.24.6
I1101 23:09:27.198319 127145 ssh_runner.go:195] Run: which crictl
I1101 23:09:28.633346 127145 ssh_runner.go:235] Completed: which crictl: (1.435002706s)
I1101 23:09:28.633407 127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.6
I1101 23:09:28.633499 127145 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.6: (1.435244347s)
I1101 23:09:28.633520 127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.6
I1101 23:09:28.633558 127145 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0: (1.435295917s)
I1101 23:09:28.633570 127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0
I1101 23:09:28.633630 127145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
I1101 23:09:28.633718 127145 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6: (1.435492576s)
I1101 23:09:28.633737 127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
I1101 23:09:28.633801 127145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
I1101 23:09:28.633883 127145 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6: (1.435647522s)
I1101 23:09:28.633895 127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.6
I1101 23:09:28.633934 127145 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7: (1.43573031s)
I1101 23:09:28.633961 127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7
I1101 23:09:28.633997 127145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
I1101 23:09:28.634036 127145 ssh_runner.go:235] Completed: which crictl: (1.435871833s)
I1101 23:09:28.634053 127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1101 23:09:28.634098 127145 ssh_runner.go:235] Completed: which crictl: (1.436023391s)
I1101 23:09:28.634122 127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.6
I1101 23:09:28.778449 127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
I1101 23:09:28.778478 127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.6
I1101 23:09:28.778546 127145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
I1101 23:09:28.778569 127145 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
I1101 23:09:28.778584 127145 containerd.go:233] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
I1101 23:09:28.778593 127145 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
I1101 23:09:28.778618 127145 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
I1101 23:09:28.778652 127145 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
I1101 23:09:28.779903 127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.6
I1101 23:09:28.781996 127145 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
I1101 23:09:36.182104 127145 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0: (7.403463536s)
I1101 23:09:36.182144 127145 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 from cache
I1101 23:09:36.182176 127145 containerd.go:233] Loading image: /var/lib/minikube/images/coredns_v1.8.6
I1101 23:09:36.182237 127145 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
I1101 23:09:38.315093 127145 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6: (2.132819455s)
I1101 23:09:38.315128 127145 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache
I1101 23:09:38.315167 127145 containerd.go:233] Loading image: /var/lib/minikube/images/pause_3.7
I1101 23:09:38.315245 127145 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7
I1101 23:09:38.532314 127145 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 from cache
I1101 23:09:38.532357 127145 containerd.go:233] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I1101 23:09:38.532411 127145 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
I1101 23:09:39.739922 127145 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5: (1.207479048s)
I1101 23:09:39.739955 127145 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I1101 23:09:39.740004 127145 cache_images.go:92] LoadImages completed in 14.201748543s
W1101 23:09:39.740191 127145 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.6: no such file or directory
X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.6: no such file or directory
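Every v1.24.6 cached image except kube-apiserver was transferred and imported above; the load step stats each cache file before importing it, and the missing kube-apiserver_v1.24.6 tarball surfaces as the "no such file or directory" warning here. A minimal sketch of that stat-before-import check, with the path copied from the log; the function itself is illustrative:

package main

import (
	"fmt"
	"os"
)

// loadCachedImage mirrors the existence check in the log: a cached image
// tarball must exist on disk before it can be transferred and imported
// with `ctr -n=k8s.io images import`.
func loadCachedImage(path string) error {
	if _, err := os.Stat(path); err != nil {
		return fmt.Errorf("loading cached images: %w", err)
	}
	return nil // transfer and import would follow here
}

func main() {
	fmt.Println(loadCachedImage(
		"/home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.6"))
}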
I1101 23:09:39.740259 127145 ssh_runner.go:195] Run: sudo crictl info
I1101 23:09:39.816714 127145 cni.go:95] Creating CNI manager for ""
I1101 23:09:39.816751 127145 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I1101 23:09:39.816770 127145 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1101 23:09:39.816787 127145 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-230809 NodeName:test-preload-230809 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
I1101 23:09:39.816973 127145 kubeadm.go:161] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.67.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "test-preload-230809"
kubeletExtraArgs:
node-ip: 192.168.67.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.6
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1101 23:09:39.817109 127145 kubeadm.go:962] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-230809 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.24.6 ClusterName:test-preload-230809 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I1101 23:09:39.817179 127145 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.6
I1101 23:09:39.826621 127145 binaries.go:44] Found k8s binaries, skipping transfer
I1101 23:09:39.826677 127145 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1101 23:09:39.835648 127145 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (512 bytes)
I1101 23:09:39.916772 127145 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1101 23:09:39.932259 127145 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
I1101 23:09:39.947304 127145 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I1101 23:09:39.950835 127145 certs.go:54] Setting up /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809 for IP: 192.168.67.2
I1101 23:09:39.950959 127145 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15232-6112/.minikube/ca.key
I1101 23:09:39.951010 127145 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15232-6112/.minikube/proxy-client-ca.key
I1101 23:09:39.951103 127145 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/client.key
I1101 23:09:39.951220 127145 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/apiserver.key.c7fa3a9e
I1101 23:09:39.951278 127145 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/proxy-client.key
I1101 23:09:39.951418 127145 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/12840.pem (1338 bytes)
W1101 23:09:39.951461 127145 certs.go:384] ignoring /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/12840_empty.pem, impossibly tiny 0 bytes
I1101 23:09:39.951476 127145 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca-key.pem (1679 bytes)
I1101 23:09:39.951510 127145 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem (1078 bytes)
I1101 23:09:39.951551 127145 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/cert.pem (1123 bytes)
I1101 23:09:39.951584 127145 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/key.pem (1675 bytes)
I1101 23:09:39.951640 127145 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/128402.pem (1708 bytes)
I1101 23:09:39.952459 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1101 23:09:40.018330 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1101 23:09:40.038985 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1101 23:09:40.059337 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1101 23:09:40.127519 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1101 23:09:40.147768 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1101 23:09:40.216763 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1101 23:09:40.238171 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1101 23:09:40.265559 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/certs/12840.pem --> /usr/share/ca-certificates/12840.pem (1338 bytes)
I1101 23:09:40.332847 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/128402.pem --> /usr/share/ca-certificates/128402.pem (1708 bytes)
I1101 23:09:40.354317 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1101 23:09:40.414264 127145 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1101 23:09:40.430591 127145 ssh_runner.go:195] Run: openssl version
I1101 23:09:40.436602 127145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12840.pem && ln -fs /usr/share/ca-certificates/12840.pem /etc/ssl/certs/12840.pem"
I1101 23:09:40.445840 127145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12840.pem
I1101 23:09:40.449377 127145 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov 1 22:50 /usr/share/ca-certificates/12840.pem
I1101 23:09:40.449430 127145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12840.pem
I1101 23:09:40.456569 127145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12840.pem /etc/ssl/certs/51391683.0"
I1101 23:09:40.464390 127145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128402.pem && ln -fs /usr/share/ca-certificates/128402.pem /etc/ssl/certs/128402.pem"
I1101 23:09:40.514612 127145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128402.pem
I1101 23:09:40.518320 127145 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov 1 22:50 /usr/share/ca-certificates/128402.pem
I1101 23:09:40.518385 127145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128402.pem
I1101 23:09:40.524764 127145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128402.pem /etc/ssl/certs/3ec20f2e.0"
I1101 23:09:40.533275 127145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1101 23:09:40.542165 127145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1101 23:09:40.545871 127145 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov 1 22:46 /usr/share/ca-certificates/minikubeCA.pem
I1101 23:09:40.545917 127145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1101 23:09:40.550867 127145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
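
The openssl x509 -hash calls above compute each certificate's subject hash, and the ln -fs commands plant the <hash>.0 symlinks that OpenSSL's lookup code expects in /etc/ssl/certs (the same layout c_rehash produces), so TLS clients on the node will trust the minikube CA and the user's extra certs. The sequence, condensed into a Go sketch that shells out the same way (path illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert mirrors the logged sequence: ask openssl for the PEM's
// subject hash, then symlink /etc/ssl/certs/<hash>.0 back at the file.
func installCACert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any stale link first
	return os.Symlink(pem, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
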
I1101 23:09:40.558550 127145 kubeadm.go:396] StartCluster: {Name:test-preload-230809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-230809 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1101 23:09:40.558652 127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1101 23:09:40.558703 127145 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1101 23:09:40.637065 127145 cri.go:87] found id: "e6d03d027c4d1a3003df82cf9d4d3ecff8142ccbfb55e98924ae237bb4f1228c"
I1101 23:09:40.637096 127145 cri.go:87] found id: "514280d446c28719c4288d8933d306bc638eb138421419ac3e8c984443017720"
I1101 23:09:40.637108 127145 cri.go:87] found id: "afef2fbc253ad82a2dfd9afacb362da4a20ca08f1f9d6008377794501540f11a"
I1101 23:09:40.637121 127145 cri.go:87] found id: "dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8"
I1101 23:09:40.637131 127145 cri.go:87] found id: ""
I1101 23:09:40.637166 127145 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I1101 23:09:40.735629 127145 cri.go:114] JSON = [{"ociVersion":"1.0.2-dev","id":"12f63aa1ca7d1ffe1d6116a2508352b1aa495e5ddea1b61f00d22cbd3da01cb5","pid":2624,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12f63aa1ca7d1ffe1d6116a2508352b1aa495e5ddea1b61f00d22cbd3da01cb5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12f63aa1ca7d1ffe1d6116a2508352b1aa495e5ddea1b61f00d22cbd3da01cb5/rootfs","created":"2022-11-01T23:08:58.356227997Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"f7bafd00df35e5156ebcd0619ba12cc59780327713c7664056c4c615561e2ac1","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"25ce3a72b2a91bc7814fdb008d3d585014ccb8aa2efc778dbff383dc9a8389a1","pid":2147,"st
atus":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/25ce3a72b2a91bc7814fdb008d3d585014ccb8aa2efc778dbff383dc9a8389a1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/25ce3a72b2a91bc7814fdb008d3d585014ccb8aa2efc778dbff383dc9a8389a1/rootfs","created":"2022-11-01T23:08:50.712751348Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"25ce3a72b2a91bc7814fdb008d3d585014ccb8aa2efc778dbff383dc9a8389a1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-55wll_18a63bc3-b29d-45a5-98a8-3f37cfef3c7b","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-55wll","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4be478d3f5ddfd8992e97b9386a7e5af4530f4d5a7613818be81fef579a8b424","pid":1508,"status":
"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4be478d3f5ddfd8992e97b9386a7e5af4530f4d5a7613818be81fef579a8b424","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4be478d3f5ddfd8992e97b9386a7e5af4530f4d5a7613818be81fef579a8b424/rootfs","created":"2022-11-01T23:08:30.466593305Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"4be478d3f5ddfd8992e97b9386a7e5af4530f4d5a7613818be81fef579a8b424","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-230809_37b967577315f9064699b525aec41d0d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"57dd7937ba40dcb1fddb3b376ab9f017ca259ddc50941b8f29df795af24dfe62","pid":2189,"status"
:"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57dd7937ba40dcb1fddb3b376ab9f017ca259ddc50941b8f29df795af24dfe62","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57dd7937ba40dcb1fddb3b376ab9f017ca259ddc50941b8f29df795af24dfe62/rootfs","created":"2022-11-01T23:08:50.775829242Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"57dd7937ba40dcb1fddb3b376ab9f017ca259ddc50941b8f29df795af24dfe62","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-mprfx_c323cc25-2fa6-4edf-b36c-03da66892a50","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-mprfx","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6d7033082e21d9b2838c0b0308241f7d2ab87fb855475429560ba43a20e96468","pid":1631,"status":"running","b
undle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d7033082e21d9b2838c0b0308241f7d2ab87fb855475429560ba43a20e96468","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d7033082e21d9b2838c0b0308241f7d2ab87fb855475429560ba43a20e96468/rootfs","created":"2022-11-01T23:08:30.715212813Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-scheduler:v1.24.4","io.kubernetes.cri.sandbox-id":"cd132afd164bb8c8913a8d2c33b026c75aefaf7ea8c8bf68b250d22bb988d9ad","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7ee8c6a3f397fe26e378812cf2775f1f109687544f2146de5fc5439dd7178994","pid":2246,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ee8c6a3f397fe26e378812cf2775f1f109687544f2146de5fc5439dd7178994","rootfs":"/run/containerd/io.containerd.run
time.v2.task/k8s.io/7ee8c6a3f397fe26e378812cf2775f1f109687544f2146de5fc5439dd7178994/rootfs","created":"2022-11-01T23:08:50.930366595Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-proxy:v1.24.4","io.kubernetes.cri.sandbox-id":"57dd7937ba40dcb1fddb3b376ab9f017ca259ddc50941b8f29df795af24dfe62","io.kubernetes.cri.sandbox-name":"kube-proxy-mprfx","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"84ae15daeb231c8b3480cb3004ff56b35ddb88d5a87537f294e2b4d4988f4931","pid":3276,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/84ae15daeb231c8b3480cb3004ff56b35ddb88d5a87537f294e2b4d4988f4931","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/84ae15daeb231c8b3480cb3004ff56b35ddb88d5a87537f294e2b4d4988f4931/rootfs","created":"2022-11-01T23:09:28.020513803Z","annotations":{"io.kubernetes.cri.container-type":"sandbox",
"io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"84ae15daeb231c8b3480cb3004ff56b35ddb88d5a87537f294e2b4d4988f4931","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-230809_bfce36eaaffbf2f7db1c9f4256edcaf8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8da6f9cb4aac1125e616bcdbca433f8589dcb2383fc29268e12b959fde822f45","pid":2566,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8da6f9cb4aac1125e616bcdbca433f8589dcb2383fc29268e12b959fde822f45","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8da6f9cb4aac1125e616bcdbca433f8589dcb2383fc29268e12b959fde822f45/rootfs","created":"2022-11-01T23:08:58.223128026Z","annotations":{"io.kubernetes.cri.conta
iner-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"8da6f9cb4aac1125e616bcdbca433f8589dcb2383fc29268e12b959fde822f45","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-r4qft_93ea1e43-1509-4751-a91c-ee8a9f43f870","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-r4qft","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"969e90316f417fb98e838e2d813889c10fd1cd3db31eb96466d472d9aa7f4ad1","pid":3285,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/969e90316f417fb98e838e2d813889c10fd1cd3db31eb96466d472d9aa7f4ad1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/969e90316f417fb98e838e2d813889c10fd1cd3db31eb96466d472d9aa7f4ad1/rootfs","created":"2022-11-01T23:09:28.02269692Z","annotations":{"io.kubernet
es.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"969e90316f417fb98e838e2d813889c10fd1cd3db31eb96466d472d9aa7f4ad1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-230809_9ccdbc12c48dbd243a9d0335dcf93bfa","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9d5aacbd5fc4113b2b3755390c8f08ad8567a97f923f72afc8223995c3d28463","pid":3536,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d5aacbd5fc4113b2b3755390c8f08ad8567a97f923f72afc8223995c3d28463","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d5aacbd5fc4113b2b3755390c8f08ad8567a97f923f72afc8223995c3d28463/rootfs","created":"2022-11-01T23:09:29.
630532491Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"9d5aacbd5fc4113b2b3755390c8f08ad8567a97f923f72afc8223995c3d28463","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-230809_440b295b0419a8945c07a1ed44f1a55e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9ded41f184a166bebee6a50d64b0b367809e660f89c0b1ce2a62444c6e4b41be","pid":2426,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ded41f184a166bebee6a50d64b0b367809e660f89c0b1ce2a62444c6e4b41be","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ded41f184a166bebee6a50d64b0b367809e660f89c0b1ce2a62444c6e4b41be/rootfs","created":
"2022-11-01T23:08:54.212636774Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20221004-44d545d1","io.kubernetes.cri.sandbox-id":"25ce3a72b2a91bc7814fdb008d3d585014ccb8aa2efc778dbff383dc9a8389a1","io.kubernetes.cri.sandbox-name":"kindnet-55wll","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bdc61da28213743fdcac2ea67d153f508512cf73caf961823e84e509f9491bb8","pid":1503,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdc61da28213743fdcac2ea67d153f508512cf73caf961823e84e509f9491bb8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdc61da28213743fdcac2ea67d153f508512cf73caf961823e84e509f9491bb8/rootfs","created":"2022-11-01T23:08:30.4665045Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","
io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"bdc61da28213743fdcac2ea67d153f508512cf73caf961823e84e509f9491bb8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-230809_440b295b0419a8945c07a1ed44f1a55e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c191c8e76a1ed8619eb5b740fbc3256c5071e53940197f92775664452c7d0a05","pid":3584,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c191c8e76a1ed8619eb5b740fbc3256c5071e53940197f92775664452c7d0a05","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c191c8e76a1ed8619eb5b740fbc3256c5071e53940197f92775664452c7d0a05/rootfs","created":"2022-11-01T23:09:29.729675697Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.san
dbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"c191c8e76a1ed8619eb5b740fbc3256c5071e53940197f92775664452c7d0a05","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-r4qft_93ea1e43-1509-4751-a91c-ee8a9f43f870","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-r4qft","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cd132afd164bb8c8913a8d2c33b026c75aefaf7ea8c8bf68b250d22bb988d9ad","pid":1507,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd132afd164bb8c8913a8d2c33b026c75aefaf7ea8c8bf68b250d22bb988d9ad","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd132afd164bb8c8913a8d2c33b026c75aefaf7ea8c8bf68b250d22bb988d9ad/rootfs","created":"2022-11-01T23:08:30.46654145Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubern
etes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"cd132afd164bb8c8913a8d2c33b026c75aefaf7ea8c8bf68b250d22bb988d9ad","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-230809_bfce36eaaffbf2f7db1c9f4256edcaf8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cd172ea470fc40ad874a284744d5a1a13d57e995841c396c38486a6f33c6f8d6","pid":2623,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd172ea470fc40ad874a284744d5a1a13d57e995841c396c38486a6f33c6f8d6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd172ea470fc40ad874a284744d5a1a13d57e995841c396c38486a6f33c6f8d6/rootfs","created":"2022-11-01T23:08:58.356220401Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"c
ontainer","io.kubernetes.cri.image-name":"k8s.gcr.io/coredns/coredns:v1.8.6","io.kubernetes.cri.sandbox-id":"8da6f9cb4aac1125e616bcdbca433f8589dcb2383fc29268e12b959fde822f45","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-r4qft","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"da4bf7bca87139a71ef6f0a31575ff28a6859f2aed2ec16dcfa66747379cf74a","pid":1630,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/da4bf7bca87139a71ef6f0a31575ff28a6859f2aed2ec16dcfa66747379cf74a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/da4bf7bca87139a71ef6f0a31575ff28a6859f2aed2ec16dcfa66747379cf74a/rootfs","created":"2022-11-01T23:08:30.715566758Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-apiserver:v1.24.4","io.kubernetes.cri.sandbox-id":"bdc61da28213743fdcac2ea67d153f508512cf73caf961823e84e509f9491bb8","io.k
ubernetes.cri.sandbox-name":"kube-apiserver-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dab9fa14795a59ae797dda31ce5ba435a42d6527260c2ca0eecb6fce88c92b16","pid":1633,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dab9fa14795a59ae797dda31ce5ba435a42d6527260c2ca0eecb6fce88c92b16","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dab9fa14795a59ae797dda31ce5ba435a42d6527260c2ca0eecb6fce88c92b16/rootfs","created":"2022-11-01T23:08:30.71207489Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-controller-manager:v1.24.4","io.kubernetes.cri.sandbox-id":"e1a311b6963f69909afd3a608b2ed073814f7bea914310231e1b05f415d6c21f","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersi
on":"1.0.2-dev","id":"dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8","pid":3660,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8/rootfs","created":"2022-11-01T23:09:31.863802538Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-proxy:v1.24.4","io.kubernetes.cri.sandbox-id":"f70b5d39758a4a5950abf9349ee41f7416ec61771493266fb81f2ee5959d9cce","io.kubernetes.cri.sandbox-name":"kube-proxy-mprfx","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dc88b2919fcdf18151bb12713a9a522567cac03cbc7678ae46060b28b236b0a7","pid":3466,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc88b2919fcdf18
151bb12713a9a522567cac03cbc7678ae46060b28b236b0a7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc88b2919fcdf18151bb12713a9a522567cac03cbc7678ae46060b28b236b0a7/rootfs","created":"2022-11-01T23:09:29.524514538Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"dc88b2919fcdf18151bb12713a9a522567cac03cbc7678ae46060b28b236b0a7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-230809_37b967577315f9064699b525aec41d0d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e1a311b6963f69909afd3a608b2ed073814f7bea914310231e1b05f415d6c21f","pid":1504,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1a311b6963f69
909afd3a608b2ed073814f7bea914310231e1b05f415d6c21f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1a311b6963f69909afd3a608b2ed073814f7bea914310231e1b05f415d6c21f/rootfs","created":"2022-11-01T23:08:30.466601473Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"e1a311b6963f69909afd3a608b2ed073814f7bea914310231e1b05f415d6c21f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-230809_9ccdbc12c48dbd243a9d0335dcf93bfa","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e573179385efbfc5589bf55166de007377f175e59fa595f7337fe85674e6e993","pid":1632,"status":"running","bundle":"/run/containerd/io.container
d.runtime.v2.task/k8s.io/e573179385efbfc5589bf55166de007377f175e59fa595f7337fe85674e6e993","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e573179385efbfc5589bf55166de007377f175e59fa595f7337fe85674e6e993/rootfs","created":"2022-11-01T23:08:30.715174165Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/etcd:3.5.3-0","io.kubernetes.cri.sandbox-id":"4be478d3f5ddfd8992e97b9386a7e5af4530f4d5a7613818be81fef579a8b424","io.kubernetes.cri.sandbox-name":"etcd-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ec92c9656bd03e2e09047e3835dd70599b2ad5c20ff57a524265b0003fa3f0aa","pid":3538,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec92c9656bd03e2e09047e3835dd70599b2ad5c20ff57a524265b0003fa3f0aa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec92c9656bd03e2e09047e3835dd70599b2ad5c20ff57a
524265b0003fa3f0aa/rootfs","created":"2022-11-01T23:09:29.63434432Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ec92c9656bd03e2e09047e3835dd70599b2ad5c20ff57a524265b0003fa3f0aa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-55wll_18a63bc3-b29d-45a5-98a8-3f37cfef3c7b","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-55wll","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f4a2c60e58c4abb795848e5266efe8789701759ceb857cf86b460b949272bba5","pid":3546,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4a2c60e58c4abb795848e5266efe8789701759ceb857cf86b460b949272bba5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4a2c60e58c4abb795848e5266efe8789701759ceb857cf86b460
b949272bba5/rootfs","created":"2022-11-01T23:09:29.633496847Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f4a2c60e58c4abb795848e5266efe8789701759ceb857cf86b460b949272bba5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_2eb4b78f-b029-431c-a5b6-34253c21c6ae","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f70b5d39758a4a5950abf9349ee41f7416ec61771493266fb81f2ee5959d9cce","pid":3283,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f70b5d39758a4a5950abf9349ee41f7416ec61771493266fb81f2ee5959d9cce","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f70b5d39758a4a5950abf9349ee41f7416ec61771493266fb81f2ee5959d
9cce/rootfs","created":"2022-11-01T23:09:28.022341914Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f70b5d39758a4a5950abf9349ee41f7416ec61771493266fb81f2ee5959d9cce","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-mprfx_c323cc25-2fa6-4edf-b36c-03da66892a50","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-mprfx","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f7bafd00df35e5156ebcd0619ba12cc59780327713c7664056c4c615561e2ac1","pid":2565,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7bafd00df35e5156ebcd0619ba12cc59780327713c7664056c4c615561e2ac1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7bafd00df35e5156ebcd0619ba12cc59780327713c7664056c4c615561e2ac1/rootfs",
"created":"2022-11-01T23:08:58.221992861Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f7bafd00df35e5156ebcd0619ba12cc59780327713c7664056c4c615561e2ac1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_2eb4b78f-b029-431c-a5b6-34253c21c6ae","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"}]
I1101 23:09:40.736083 127145 cri.go:124] list returned 25 containers
I1101 23:09:40.736101 127145 cri.go:127] container: {ID:12f63aa1ca7d1ffe1d6116a2508352b1aa495e5ddea1b61f00d22cbd3da01cb5 Status:running}
I1101 23:09:40.736119 127145 cri.go:129] skipping 12f63aa1ca7d1ffe1d6116a2508352b1aa495e5ddea1b61f00d22cbd3da01cb5 - not in ps
I1101 23:09:40.736130 127145 cri.go:127] container: {ID:25ce3a72b2a91bc7814fdb008d3d585014ccb8aa2efc778dbff383dc9a8389a1 Status:running}
I1101 23:09:40.736144 127145 cri.go:129] skipping 25ce3a72b2a91bc7814fdb008d3d585014ccb8aa2efc778dbff383dc9a8389a1 - not in ps
I1101 23:09:40.736156 127145 cri.go:127] container: {ID:4be478d3f5ddfd8992e97b9386a7e5af4530f4d5a7613818be81fef579a8b424 Status:running}
I1101 23:09:40.736169 127145 cri.go:129] skipping 4be478d3f5ddfd8992e97b9386a7e5af4530f4d5a7613818be81fef579a8b424 - not in ps
I1101 23:09:40.736180 127145 cri.go:127] container: {ID:57dd7937ba40dcb1fddb3b376ab9f017ca259ddc50941b8f29df795af24dfe62 Status:running}
I1101 23:09:40.736192 127145 cri.go:129] skipping 57dd7937ba40dcb1fddb3b376ab9f017ca259ddc50941b8f29df795af24dfe62 - not in ps
I1101 23:09:40.736204 127145 cri.go:127] container: {ID:6d7033082e21d9b2838c0b0308241f7d2ab87fb855475429560ba43a20e96468 Status:running}
I1101 23:09:40.736221 127145 cri.go:129] skipping 6d7033082e21d9b2838c0b0308241f7d2ab87fb855475429560ba43a20e96468 - not in ps
I1101 23:09:40.736232 127145 cri.go:127] container: {ID:7ee8c6a3f397fe26e378812cf2775f1f109687544f2146de5fc5439dd7178994 Status:running}
I1101 23:09:40.736240 127145 cri.go:129] skipping 7ee8c6a3f397fe26e378812cf2775f1f109687544f2146de5fc5439dd7178994 - not in ps
I1101 23:09:40.736246 127145 cri.go:127] container: {ID:84ae15daeb231c8b3480cb3004ff56b35ddb88d5a87537f294e2b4d4988f4931 Status:running}
I1101 23:09:40.736255 127145 cri.go:129] skipping 84ae15daeb231c8b3480cb3004ff56b35ddb88d5a87537f294e2b4d4988f4931 - not in ps
I1101 23:09:40.736266 127145 cri.go:127] container: {ID:8da6f9cb4aac1125e616bcdbca433f8589dcb2383fc29268e12b959fde822f45 Status:running}
I1101 23:09:40.736278 127145 cri.go:129] skipping 8da6f9cb4aac1125e616bcdbca433f8589dcb2383fc29268e12b959fde822f45 - not in ps
I1101 23:09:40.736289 127145 cri.go:127] container: {ID:969e90316f417fb98e838e2d813889c10fd1cd3db31eb96466d472d9aa7f4ad1 Status:running}
I1101 23:09:40.736300 127145 cri.go:129] skipping 969e90316f417fb98e838e2d813889c10fd1cd3db31eb96466d472d9aa7f4ad1 - not in ps
I1101 23:09:40.736305 127145 cri.go:127] container: {ID:9d5aacbd5fc4113b2b3755390c8f08ad8567a97f923f72afc8223995c3d28463 Status:running}
I1101 23:09:40.736313 127145 cri.go:129] skipping 9d5aacbd5fc4113b2b3755390c8f08ad8567a97f923f72afc8223995c3d28463 - not in ps
I1101 23:09:40.736320 127145 cri.go:127] container: {ID:9ded41f184a166bebee6a50d64b0b367809e660f89c0b1ce2a62444c6e4b41be Status:running}
I1101 23:09:40.736333 127145 cri.go:129] skipping 9ded41f184a166bebee6a50d64b0b367809e660f89c0b1ce2a62444c6e4b41be - not in ps
I1101 23:09:40.736343 127145 cri.go:127] container: {ID:bdc61da28213743fdcac2ea67d153f508512cf73caf961823e84e509f9491bb8 Status:running}
I1101 23:09:40.736355 127145 cri.go:129] skipping bdc61da28213743fdcac2ea67d153f508512cf73caf961823e84e509f9491bb8 - not in ps
I1101 23:09:40.736366 127145 cri.go:127] container: {ID:c191c8e76a1ed8619eb5b740fbc3256c5071e53940197f92775664452c7d0a05 Status:running}
I1101 23:09:40.736378 127145 cri.go:129] skipping c191c8e76a1ed8619eb5b740fbc3256c5071e53940197f92775664452c7d0a05 - not in ps
I1101 23:09:40.736388 127145 cri.go:127] container: {ID:cd132afd164bb8c8913a8d2c33b026c75aefaf7ea8c8bf68b250d22bb988d9ad Status:running}
I1101 23:09:40.736397 127145 cri.go:129] skipping cd132afd164bb8c8913a8d2c33b026c75aefaf7ea8c8bf68b250d22bb988d9ad - not in ps
I1101 23:09:40.736411 127145 cri.go:127] container: {ID:cd172ea470fc40ad874a284744d5a1a13d57e995841c396c38486a6f33c6f8d6 Status:running}
I1101 23:09:40.736429 127145 cri.go:129] skipping cd172ea470fc40ad874a284744d5a1a13d57e995841c396c38486a6f33c6f8d6 - not in ps
I1101 23:09:40.736440 127145 cri.go:127] container: {ID:da4bf7bca87139a71ef6f0a31575ff28a6859f2aed2ec16dcfa66747379cf74a Status:running}
I1101 23:09:40.736458 127145 cri.go:129] skipping da4bf7bca87139a71ef6f0a31575ff28a6859f2aed2ec16dcfa66747379cf74a - not in ps
I1101 23:09:40.736470 127145 cri.go:127] container: {ID:dab9fa14795a59ae797dda31ce5ba435a42d6527260c2ca0eecb6fce88c92b16 Status:running}
I1101 23:09:40.736483 127145 cri.go:129] skipping dab9fa14795a59ae797dda31ce5ba435a42d6527260c2ca0eecb6fce88c92b16 - not in ps
I1101 23:09:40.736493 127145 cri.go:127] container: {ID:dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8 Status:running}
I1101 23:09:40.736502 127145 cri.go:133] skipping {dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8 running}: state = "running", want "paused"
I1101 23:09:40.736517 127145 cri.go:127] container: {ID:dc88b2919fcdf18151bb12713a9a522567cac03cbc7678ae46060b28b236b0a7 Status:running}
I1101 23:09:40.736530 127145 cri.go:129] skipping dc88b2919fcdf18151bb12713a9a522567cac03cbc7678ae46060b28b236b0a7 - not in ps
I1101 23:09:40.736541 127145 cri.go:127] container: {ID:e1a311b6963f69909afd3a608b2ed073814f7bea914310231e1b05f415d6c21f Status:running}
I1101 23:09:40.736553 127145 cri.go:129] skipping e1a311b6963f69909afd3a608b2ed073814f7bea914310231e1b05f415d6c21f - not in ps
I1101 23:09:40.736564 127145 cri.go:127] container: {ID:e573179385efbfc5589bf55166de007377f175e59fa595f7337fe85674e6e993 Status:running}
I1101 23:09:40.736576 127145 cri.go:129] skipping e573179385efbfc5589bf55166de007377f175e59fa595f7337fe85674e6e993 - not in ps
I1101 23:09:40.736586 127145 cri.go:127] container: {ID:ec92c9656bd03e2e09047e3835dd70599b2ad5c20ff57a524265b0003fa3f0aa Status:running}
I1101 23:09:40.736594 127145 cri.go:129] skipping ec92c9656bd03e2e09047e3835dd70599b2ad5c20ff57a524265b0003fa3f0aa - not in ps
I1101 23:09:40.736603 127145 cri.go:127] container: {ID:f4a2c60e58c4abb795848e5266efe8789701759ceb857cf86b460b949272bba5 Status:running}
I1101 23:09:40.736615 127145 cri.go:129] skipping f4a2c60e58c4abb795848e5266efe8789701759ceb857cf86b460b949272bba5 - not in ps
I1101 23:09:40.736625 127145 cri.go:127] container: {ID:f70b5d39758a4a5950abf9349ee41f7416ec61771493266fb81f2ee5959d9cce Status:running}
I1101 23:09:40.736636 127145 cri.go:129] skipping f70b5d39758a4a5950abf9349ee41f7416ec61771493266fb81f2ee5959d9cce - not in ps
I1101 23:09:40.736643 127145 cri.go:127] container: {ID:f7bafd00df35e5156ebcd0619ba12cc59780327713c7664056c4c615561e2ac1 Status:running}
I1101 23:09:40.736658 127145 cri.go:129] skipping f7bafd00df35e5156ebcd0619ba12cc59780327713c7664056c4c615561e2ac1 - not in ps
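
What the skip messages above amount to: cri.go cross-references two sources. crictl ps (with the kube-system label filter) supplies the candidate container IDs, and runc list -f json supplies every task with its state; a container survives only if it appears in both lists and its state matches the requested one. Since this pass asked for {State:paused} and everything is running, every entry is skipped. A compact sketch of that filter, with illustrative types:

package main

import (
	"encoding/json"
	"fmt"
)

// runcTask carries just the fields the filter needs from `runc list -f json`.
type runcTask struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// filterContainers keeps tasks that crictl also reported and whose
// status matches the requested state, mirroring cri.go's skip logic.
func filterContainers(raw []byte, inPS map[string]bool, want string) ([]string, error) {
	var tasks []runcTask
	if err := json.Unmarshal(raw, &tasks); err != nil {
		return nil, err
	}
	var keep []string
	for _, t := range tasks {
		if !inPS[t.ID] {
			continue // "skipping <id> - not in ps"
		}
		if t.Status != want {
			continue // `state = "running", want "paused"`
		}
		keep = append(keep, t.ID)
	}
	return keep, nil
}

func main() {
	raw := []byte(`[{"id":"abc","status":"running"},{"id":"def","status":"paused"}]`)
	ids, _ := filterContainers(raw, map[string]bool{"abc": true, "def": true}, "paused")
	fmt.Println(ids) // [def]
}
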
I1101 23:09:40.736704 127145 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1101 23:09:40.745646 127145 kubeadm.go:411] found existing configuration files, will attempt cluster restart
I1101 23:09:40.745673 127145 kubeadm.go:627] restartCluster start
I1101 23:09:40.745722 127145 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1101 23:09:40.753726 127145 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1101 23:09:40.754368 127145 kubeconfig.go:92] found "test-preload-230809" server: "https://192.168.67.2:8443"
I1101 23:09:40.755237 127145 kapi.go:59] client config for test-preload-230809: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/client.crt", KeyFile:"/home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/client.key", CAFile:"/home/jenkins/minikube-integration/15232-6112/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1786820), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
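
The kapi.go dump above is an ordinary client-go rest.Config aimed at the profile's client certificate, key, and cluster CA. Reconstructing that kind of client by hand takes only a few lines; a sketch assuming k8s.io/client-go is available:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Mirrors the TLSClientConfig fields visible in the log line above.
	cfg := &rest.Config{
		Host: "https://192.168.67.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/client.key",
			CAFile:   "/home/jenkins/minikube-integration/15232-6112/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println(err)
		return
	}
	_ = clientset // ready for calls such as clientset.CoreV1().Pods("kube-system").List(...)
}
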
I1101 23:09:40.755875 127145 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1101 23:09:40.763523 127145 kubeadm.go:594] needs reconfigure: configs differ:
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml 2022-11-01 23:08:26.955661256 +0000
+++ /var/tmp/minikube/kubeadm.yaml.new 2022-11-01 23:09:39.941360162 +0000
@@ -38,7 +38,7 @@
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
-kubernetesVersion: v1.24.4
+kubernetesVersion: v1.24.6
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
-- /stdout --
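
The restart-versus-reinit decision hinges on that diff: diff -u exits 0 when the rendered kubeadm.yaml is unchanged and 1 when it differs, and here only the kubernetesVersion bump differs, so minikube opts for an in-place reconfigure. A sketch of reading that exit code from Go (paths as in the log):

package main

import (
	"fmt"
	"os/exec"
)

// configsDiffer mirrors the check above: diff exits 0 when the files
// match, 1 when they differ, and anything else signals a real error.
func configsDiffer(oldPath, newPath string) (bool, error) {
	err := exec.Command("diff", "-u", oldPath, newPath).Run()
	if err == nil {
		return false, nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, nil
	}
	return false, err
}

func main() {
	differ, err := configsDiffer("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(differ, err)
}
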
I1101 23:09:40.763543 127145 kubeadm.go:1114] stopping kube-system containers ...
I1101 23:09:40.763556 127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I1101 23:09:40.763603 127145 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1101 23:09:40.843646 127145 cri.go:87] found id: "e6d03d027c4d1a3003df82cf9d4d3ecff8142ccbfb55e98924ae237bb4f1228c"
I1101 23:09:40.843681 127145 cri.go:87] found id: "514280d446c28719c4288d8933d306bc638eb138421419ac3e8c984443017720"
I1101 23:09:40.843693 127145 cri.go:87] found id: "afef2fbc253ad82a2dfd9afacb362da4a20ca08f1f9d6008377794501540f11a"
I1101 23:09:40.843703 127145 cri.go:87] found id: "dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8"
I1101 23:09:40.843711 127145 cri.go:87] found id: ""
I1101 23:09:40.843719 127145 cri.go:232] Stopping containers: [e6d03d027c4d1a3003df82cf9d4d3ecff8142ccbfb55e98924ae237bb4f1228c 514280d446c28719c4288d8933d306bc638eb138421419ac3e8c984443017720 afef2fbc253ad82a2dfd9afacb362da4a20ca08f1f9d6008377794501540f11a dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8]
I1101 23:09:40.843770 127145 ssh_runner.go:195] Run: which crictl
I1101 23:09:40.847856 127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop e6d03d027c4d1a3003df82cf9d4d3ecff8142ccbfb55e98924ae237bb4f1228c 514280d446c28719c4288d8933d306bc638eb138421419ac3e8c984443017720 afef2fbc253ad82a2dfd9afacb362da4a20ca08f1f9d6008377794501540f11a dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8
I1101 23:09:41.335259 127145 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I1101 23:09:41.402860 127145 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1101 23:09:41.410490 127145 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5639 Nov 1 23:08 /etc/kubernetes/admin.conf
-rw------- 1 root root 5656 Nov 1 23:08 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2015 Nov 1 23:08 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5600 Nov 1 23:08 /etc/kubernetes/scheduler.conf
I1101 23:09:41.410554 127145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1101 23:09:41.417229 127145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1101 23:09:41.423830 127145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1101 23:09:41.430364 127145 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I1101 23:09:41.430410 127145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1101 23:09:41.436788 127145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1101 23:09:41.442864 127145 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I1101 23:09:41.442915 127145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1101 23:09:41.448988 127145 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1101 23:09:41.455288 127145 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I1101 23:09:41.455307 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I1101 23:09:41.753172 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I1101 23:09:42.645331 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I1101 23:09:43.006957 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I1101 23:09:43.058116 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
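
Note that the restart path never runs a full kubeadm init; it replays the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the updated kubeadm.yaml, regenerating only what the version bump invalidates. A sketch of that loop; the env PATH prefix matches the logged commands, and the helper itself is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// runPhases replays the init phases in the order logged above, using
// the versioned binaries directory minikube lays down on the node.
func runPhases(version, cfg string) error {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init phase %s --config %s`,
			version, p, cfg)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %q: %v\n%s", p, err, out)
		}
	}
	return nil
}

func main() {
	if err := runPhases("v1.24.6", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Println(err)
	}
}
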
I1101 23:09:43.137338 127145 api_server.go:51] waiting for apiserver process to appear ...
I1101 23:09:43.137438 127145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 23:09:43.218088 127145 api_server.go:71] duration metric: took 80.740751ms to wait for apiserver process to appear ...
I1101 23:09:43.218119 127145 api_server.go:87] waiting for apiserver healthz status ...
I1101 23:09:43.218133 127145 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I1101 23:09:43.223783 127145 api_server.go:278] https://192.168.67.2:8443/healthz returned 200:
ok
I1101 23:09:43.231489 127145 api_server.go:140] control plane version: v1.24.4
W1101 23:09:43.231532 127145 api_server.go:120] api server version match failed: controlPlane = "v1.24.4", expected: "v1.24.6"
I1101 23:09:43.733092 127145 api_server.go:140] control plane version: v1.24.4
W1101 23:09:43.733125 127145 api_server.go:120] api server version match failed: controlPlane = "v1.24.4", expected: "v1.24.6"
I1101 23:09:44.233705 127145 api_server.go:140] control plane version: v1.24.4
W1101 23:09:44.233731 127145 api_server.go:120] api server version match failed: controlPlane = "v1.24.4", expected: "v1.24.6"
I1101 23:09:44.733150 127145 api_server.go:140] control plane version: v1.24.4
W1101 23:09:44.733179 127145 api_server.go:120] api server version match failed: controlPlane = "v1.24.4", expected: "v1.24.6"
I1101 23:09:45.233717 127145 api_server.go:140] control plane version: v1.24.4
W1101 23:09:45.233749 127145 api_server.go:120] api server version match failed: controlPlane = "v1.24.4", expected: "v1.24.6"
W1101 23:09:45.732040 127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1101 23:09:46.233010 127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1101 23:09:46.732501 127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1101 23:09:47.232636 127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1101 23:09:47.732455 127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1101 23:09:48.232934 127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1101 23:09:48.732964 127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1101 23:09:49.232994 127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
I1101 23:09:52.022667 127145 api_server.go:140] control plane version: v1.24.6
I1101 23:09:52.022755 127145 api_server.go:130] duration metric: took 8.804626822s to wait for apiserver health ...
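
The alternation above is the expected shape of a static-pod apiserver swap: the old v1.24.4 binary keeps answering /version until its pod is torn down, the port then refuses connections while the v1.24.6 pod boots, and the poll succeeds once the new version is reported. A polling sketch; InsecureSkipVerify is used only to keep the example short, real code should load the cluster CA as in the rest.Config shown earlier:

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// waitForVersion polls /version until the control plane reports want,
// treating transport errors (connection refused mid-restart) as retryable.
func waitForVersion(base, want string, timeout time.Duration) error {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(base + "/version")
		if err == nil {
			var v struct {
				GitVersion string `json:"gitVersion"`
			}
			json.NewDecoder(resp.Body).Decode(&v)
			resp.Body.Close()
			if v.GitVersion == want {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("apiserver never reported %s", want)
}

func main() {
	fmt.Println(waitForVersion("https://192.168.67.2:8443", "v1.24.6", 2*time.Minute))
}
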
I1101 23:09:52.022776 127145 cni.go:95] Creating CNI manager for ""
I1101 23:09:52.022793 127145 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I1101 23:09:52.025189 127145 out.go:177] * Configuring CNI (Container Networking Interface) ...
I1101 23:09:52.026860 127145 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I1101 23:09:52.033655 127145 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.6/kubectl ...
I1101 23:09:52.033680 127145 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I1101 23:09:52.223817 127145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
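
The kindnet recommendation a few lines up reflects the rule the log itself states: a docker-driver cluster whose runtime is not docker needs a real CNI, so containerd gets kindnet, which is then applied as a plain manifest via kubectl apply. A sketch of that selection, illustrative only and not minikube's full decision table:

package main

import "fmt"

// chooseCNI sketches the logged recommendation: with the docker driver,
// a non-docker runtime (containerd, cri-o) is pointed at kindnet.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime != "docker" {
		return "kindnet"
	}
	return "bridge"
}

func main() {
	fmt.Println(chooseCNI("docker", "containerd")) // kindnet, as in the log
}
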
I1101 23:09:52.990696 127145 system_pods.go:43] waiting for kube-system pods to appear ...
I1101 23:09:52.997505 127145 system_pods.go:59] 8 kube-system pods found
I1101 23:09:52.997541 127145 system_pods.go:61] "coredns-6d4b75cb6d-r4qft" [93ea1e43-1509-4751-a91c-ee8a9f43f870] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1101 23:09:52.997551 127145 system_pods.go:61] "etcd-test-preload-230809" [af6823c1-4191-4b7b-b864-c8d4dc5b60b7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1101 23:09:52.997561 127145 system_pods.go:61] "kindnet-55wll" [18a63bc3-b29d-45a5-98a8-3f37cfef3c7b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I1101 23:09:52.997568 127145 system_pods.go:61] "kube-apiserver-test-preload-230809" [7c4baec2-c5b0-4a19-b41f-c54723a6cb9d] Pending
I1101 23:09:52.997578 127145 system_pods.go:61] "kube-controller-manager-test-preload-230809" [61a6d202-4552-4719-bfd5-7e9295cc25b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1101 23:09:52.997598 127145 system_pods.go:61] "kube-proxy-mprfx" [c323cc25-2fa6-4edf-b36c-03da66892a50] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1101 23:09:52.997611 127145 system_pods.go:61] "kube-scheduler-test-preload-230809" [ae2815cc-6736-4e49-b3c8-8abeaeeea1bd] Pending
I1101 23:09:52.997623 127145 system_pods.go:61] "storage-provisioner" [2eb4b78f-b029-431c-a5b6-34253c21c6ae] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1101 23:09:52.997635 127145 system_pods.go:74] duration metric: took 6.918381ms to wait for pod list to return data ...
I1101 23:09:52.997648 127145 node_conditions.go:102] verifying NodePressure condition ...
I1101 23:09:52.999970 127145 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I1101 23:09:53.000003 127145 node_conditions.go:123] node cpu capacity is 8
I1101 23:09:53.000015 127145 node_conditions.go:105] duration metric: took 2.358425ms to run NodePressure ...
I1101 23:09:53.000039 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I1101 23:09:53.234562 127145 kubeadm.go:763] waiting for restarted kubelet to initialise ...
I1101 23:09:53.237990 127145 kubeadm.go:778] kubelet initialised
I1101 23:09:53.238014 127145 kubeadm.go:779] duration metric: took 3.422089ms waiting for restarted kubelet to initialise ...
I1101 23:09:53.238022 127145 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
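
"Ready" in this wait is the pod's Ready condition, not merely phase Running, which is why the pods listed earlier can be Running yet ContainersNotReady and still hold the test up. A sketch of the predicate being polled on every ~2.5s tick below, using client-go (clientset construction as in the earlier rest.Config sketch):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPodReady reports whether the pod's Ready condition is True, the
// same predicate pod_ready.go keeps re-evaluating below.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// check fetches one kube-system pod and applies the predicate.
func check(cs *kubernetes.Clientset, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	return isPodReady(pod), nil
}

func main() {
	fmt.Println(`poll check(cs, "etcd-test-preload-230809") until it returns true`)
}
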
I1101 23:09:53.242529 127145 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-r4qft" in "kube-system" namespace to be "Ready" ...
I1101 23:09:55.254763 127145 pod_ready.go:102] pod "coredns-6d4b75cb6d-r4qft" in "kube-system" namespace has status "Ready":"False"
I1101 23:09:57.753901 127145 pod_ready.go:102] pod "coredns-6d4b75cb6d-r4qft" in "kube-system" namespace has status "Ready":"False"
I1101 23:09:59.754592 127145 pod_ready.go:92] pod "coredns-6d4b75cb6d-r4qft" in "kube-system" namespace has status "Ready":"True"
I1101 23:09:59.754626 127145 pod_ready.go:81] duration metric: took 6.512068179s waiting for pod "coredns-6d4b75cb6d-r4qft" in "kube-system" namespace to be "Ready" ...
I1101 23:09:59.754639 127145 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-230809" in "kube-system" namespace to be "Ready" ...
I1101 23:10:01.766834 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:04.264410 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:06.764726 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:09.264989 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:11.265205 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:13.763952 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:15.764164 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:17.764732 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:19.764997 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:22.264415 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:24.764449 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:27.264094 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:29.264748 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:31.764914 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:34.264280 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:36.264981 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:38.765185 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:41.265088 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:43.764636 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:46.265617 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:48.765111 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:51.264670 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:53.264916 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:55.264961 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:57.265052 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:59.764621 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:02.264841 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:04.264932 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:06.764687 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:09.265413 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:11.764819 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:13.765227 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:16.264738 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:18.265154 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:20.764475 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:22.765142 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:25.264490 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:27.265182 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:29.764395 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:31.764559 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:33.765136 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:36.264759 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:38.265094 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:40.764500 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:43.264843 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:45.765686 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:48.264476 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:50.764617 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:52.764701 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:54.765115 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:56.765316 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:59.264346 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:01.264372 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:03.264546 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:05.264956 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:07.764171 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:09.764397 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:11.765095 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:14.264701 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:16.265440 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:18.764276 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:20.764938 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:23.265330 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:25.764449 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:27.764895 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:30.265410 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:32.767373 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:35.265081 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:37.765063 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:40.265350 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:42.765270 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:45.265267 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:47.765107 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:50.265576 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:52.766477 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:55.264930 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:57.765153 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:00.264148 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:02.264609 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:04.265195 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:06.764397 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:08.765157 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:11.264073 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:13.264819 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:15.763483 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:17.763881 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:19.765072 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:21.765183 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:24.265085 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:26.764936 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:29.264520 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:31.265339 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:33.764859 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:36.265232 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:38.764507 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:40.764906 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:42.764962 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:44.765506 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:47.264257 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:49.265001 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:51.765200 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:54.264162 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:56.264864 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:58.764509 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:59.759267 127145 pod_ready.go:81] duration metric: took 4m0.004604004s waiting for pod "etcd-test-preload-230809" in "kube-system" namespace to be "Ready" ...
E1101 23:13:59.759292 127145 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "etcd-test-preload-230809" in "kube-system" namespace to be "Ready" (will not retry!)
I1101 23:13:59.759322 127145 pod_ready.go:38] duration metric: took 4m6.521288423s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1101 23:13:59.759354 127145 kubeadm.go:631] restartCluster took 4m19.013673069s
W1101 23:13:59.759521 127145 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
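The 4m0s loop above is minikube polling the pod's Ready condition. The same wait can be reproduced by hand against the profile's context (a sketch, assuming kubectl is on PATH and the profile's kubeconfig entry, conventionally named after the profile, is intact):

    kubectl --context test-preload-230809 -n kube-system \
      wait --for=condition=Ready pod/etcd-test-preload-230809 --timeout=4m0s

kubectl wait exits non-zero on timeout, mirroring the waitPodCondition failure recorded here.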
I1101 23:13:59.759560 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1101 23:14:01.430467 127145 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.670884606s)
I1101 23:14:01.430528 127145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1101 23:14:01.440216 127145 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1101 23:14:01.447136 127145 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I1101 23:14:01.447183 127145 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1101 23:14:01.453660 127145 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1101 23:14:01.453703 127145 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1101 23:14:01.491674 127145 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
I1101 23:14:01.491746 127145 kubeadm.go:317] [preflight] Running pre-flight checks
I1101 23:14:01.518815 127145 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
I1101 23:14:01.518891  127145 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
I1101 23:14:01.518924  127145 kubeadm.go:317] OS: Linux
I1101 23:14:01.519001  127145 kubeadm.go:317] CGROUPS_CPU: enabled
I1101 23:14:01.519091  127145 kubeadm.go:317] CGROUPS_CPUACCT: enabled
I1101 23:14:01.519162  127145 kubeadm.go:317] CGROUPS_CPUSET: enabled
I1101 23:14:01.519232  127145 kubeadm.go:317] CGROUPS_DEVICES: enabled
I1101 23:14:01.519307  127145 kubeadm.go:317] CGROUPS_FREEZER: enabled
I1101 23:14:01.519381  127145 kubeadm.go:317] CGROUPS_MEMORY: enabled
I1101 23:14:01.519458  127145 kubeadm.go:317] CGROUPS_PIDS: enabled
I1101 23:14:01.519533  127145 kubeadm.go:317] CGROUPS_HUGETLB: enabled
I1101 23:14:01.519591  127145 kubeadm.go:317] CGROUPS_BLKIO: enabled
I1101 23:14:01.591526 127145 kubeadm.go:317] W1101 23:14:01.486750 6857 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
I1101 23:14:01.591829 127145 kubeadm.go:317] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
I1101 23:14:01.591936 127145 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1101 23:14:01.592005 127145 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
I1101 23:14:01.592050 127145 kubeadm.go:317] [ERROR Port-2379]: Port 2379 is in use
I1101 23:14:01.592096 127145 kubeadm.go:317] [ERROR Port-2380]: Port 2380 is in use
I1101 23:14:01.592196 127145 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
I1101 23:14:01.592269 127145 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
W1101 23:14:01.592495 127145 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1021-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W1101 23:14:01.486750 6857 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
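The two fatal preflight errors mean etcd's client and peer ports are still bound inside the node container, which suggests the old etcd process survived the `kubeadm reset` above. Which process holds them can be checked directly (a sketch; it assumes `ss` is present in the kicbase node image):

    docker exec test-preload-230809 ss -ltnp | grep -E ':(2379|2380)'
    # or via minikube's own ssh wrapper:
    out/minikube-linux-amd64 ssh -p test-preload-230809 -- sudo ss -ltnp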
I1101 23:14:01.592536 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1101 23:14:01.906961 127145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1101 23:14:01.916443 127145 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I1101 23:14:01.916504 127145 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1101 23:14:01.923130 127145 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1101 23:14:01.923166 127145 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1101 23:14:01.960923 127145 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
I1101 23:14:01.960981 127145 kubeadm.go:317] [preflight] Running pre-flight checks
I1101 23:14:01.987846 127145 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
I1101 23:14:01.987918  127145 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
I1101 23:14:01.987961  127145 kubeadm.go:317] OS: Linux
I1101 23:14:01.988021  127145 kubeadm.go:317] CGROUPS_CPU: enabled
I1101 23:14:01.988074  127145 kubeadm.go:317] CGROUPS_CPUACCT: enabled
I1101 23:14:01.988115  127145 kubeadm.go:317] CGROUPS_CPUSET: enabled
I1101 23:14:01.988186  127145 kubeadm.go:317] CGROUPS_DEVICES: enabled
I1101 23:14:01.988241  127145 kubeadm.go:317] CGROUPS_FREEZER: enabled
I1101 23:14:01.988304  127145 kubeadm.go:317] CGROUPS_MEMORY: enabled
I1101 23:14:01.988371  127145 kubeadm.go:317] CGROUPS_PIDS: enabled
I1101 23:14:01.988430  127145 kubeadm.go:317] CGROUPS_HUGETLB: enabled
I1101 23:14:01.988521  127145 kubeadm.go:317] CGROUPS_BLKIO: enabled
I1101 23:14:02.056387 127145 kubeadm.go:317] W1101 23:14:01.956215 7126 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
I1101 23:14:02.056585 127145 kubeadm.go:317] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
I1101 23:14:02.056677 127145 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1101 23:14:02.056739 127145 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
I1101 23:14:02.056775 127145 kubeadm.go:317] [ERROR Port-2379]: Port 2379 is in use
I1101 23:14:02.056811 127145 kubeadm.go:317] [ERROR Port-2380]: Port 2380 is in use
I1101 23:14:02.056904 127145 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
I1101 23:14:02.057006 127145 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
I1101 23:14:02.057085 127145 kubeadm.go:398] StartCluster complete in 4m21.498557806s
I1101 23:14:02.057126 127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1101 23:14:02.057181 127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1101 23:14:02.079779 127145 cri.go:87] found id: ""
I1101 23:14:02.079803 127145 logs.go:274] 0 containers: []
W1101 23:14:02.079811 127145 logs.go:276] No container was found matching "kube-apiserver"
I1101 23:14:02.079820 127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1101 23:14:02.079867 127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1101 23:14:02.102132 127145 cri.go:87] found id: ""
I1101 23:14:02.103963 127145 logs.go:274] 0 containers: []
W1101 23:14:02.103974 127145 logs.go:276] No container was found matching "etcd"
I1101 23:14:02.103987 127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1101 23:14:02.104037 127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1101 23:14:02.127250 127145 cri.go:87] found id: ""
I1101 23:14:02.127271 127145 logs.go:274] 0 containers: []
W1101 23:14:02.127278 127145 logs.go:276] No container was found matching "coredns"
I1101 23:14:02.127282 127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1101 23:14:02.127329 127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1101 23:14:02.149764 127145 cri.go:87] found id: ""
I1101 23:14:02.149785 127145 logs.go:274] 0 containers: []
W1101 23:14:02.149792 127145 logs.go:276] No container was found matching "kube-scheduler"
I1101 23:14:02.149799 127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1101 23:14:02.149851 127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1101 23:14:02.172459 127145 cri.go:87] found id: ""
I1101 23:14:02.172482 127145 logs.go:274] 0 containers: []
W1101 23:14:02.172488 127145 logs.go:276] No container was found matching "kube-proxy"
I1101 23:14:02.172493 127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1101 23:14:02.172532 127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1101 23:14:02.194215 127145 cri.go:87] found id: ""
I1101 23:14:02.194240 127145 logs.go:274] 0 containers: []
W1101 23:14:02.194246 127145 logs.go:276] No container was found matching "kubernetes-dashboard"
I1101 23:14:02.194252 127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1101 23:14:02.194295 127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1101 23:14:02.215924 127145 cri.go:87] found id: ""
I1101 23:14:02.215945 127145 logs.go:274] 0 containers: []
W1101 23:14:02.215951 127145 logs.go:276] No container was found matching "storage-provisioner"
I1101 23:14:02.215961 127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1101 23:14:02.216007 127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1101 23:14:02.237525 127145 cri.go:87] found id: ""
I1101 23:14:02.237548 127145 logs.go:274] 0 containers: []
W1101 23:14:02.237556 127145 logs.go:276] No container was found matching "kube-controller-manager"
I1101 23:14:02.237568 127145 logs.go:123] Gathering logs for kubelet ...
I1101 23:14:02.237581 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1101 23:14:02.300252 127145 logs.go:138] Found kubelet problem: Nov 01 23:09:52 test-preload-230809 kubelet[4572]: W1101 23:09:52.121441 4572 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
W1101 23:14:02.300464 127145 logs.go:138] Found kubelet problem: Nov 01 23:09:52 test-preload-230809 kubelet[4572]: E1101 23:09:52.121486 4572 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
W1101 23:14:02.300712 127145 logs.go:138] Found kubelet problem: Nov 01 23:09:52 test-preload-230809 kubelet[4572]: W1101 23:09:52.134778 4572 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
W1101 23:14:02.300934 127145 logs.go:138] Found kubelet problem: Nov 01 23:09:52 test-preload-230809 kubelet[4572]: E1101 23:09:52.134833 4572 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
W1101 23:14:02.301104 127145 logs.go:138] Found kubelet problem: Nov 01 23:09:52 test-preload-230809 kubelet[4572]: W1101 23:09:52.135478 4572 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
W1101 23:14:02.301295 127145 logs.go:138] Found kubelet problem: Nov 01 23:09:52 test-preload-230809 kubelet[4572]: E1101 23:09:52.135507 4572 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
W1101 23:14:02.302724 127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.043911 4572 projected.go:192] Error preparing data for projected volume kube-api-access-mxxnh for pod kube-system/kindnet-55wll: [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
W1101 23:14:02.303262  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.044015    4572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18a63bc3-b29d-45a5-98a8-3f37cfef3c7b-kube-api-access-mxxnh podName:18a63bc3-b29d-45a5-98a8-3f37cfef3c7b nodeName:}" failed. No retries permitted until 2022-11-01 23:09:55.043985609 +0000 UTC m=+12.036634856 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-mxxnh" (UniqueName: "kubernetes.io/projected/18a63bc3-b29d-45a5-98a8-3f37cfef3c7b-kube-api-access-mxxnh") pod "kindnet-55wll" (UID: "18a63bc3-b29d-45a5-98a8-3f37cfef3c7b") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
W1101 23:14:02.303497 127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.044035 4572 projected.go:192] Error preparing data for projected volume kube-api-access-k9mj5 for pod kube-system/kube-proxy-mprfx: [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
W1101 23:14:02.303931  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.044128    4572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c323cc25-2fa6-4edf-b36c-03da66892a50-kube-api-access-k9mj5 podName:c323cc25-2fa6-4edf-b36c-03da66892a50 nodeName:}" failed. No retries permitted until 2022-11-01 23:09:55.04409823 +0000 UTC m=+12.036747482 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-k9mj5" (UniqueName: "kubernetes.io/projected/c323cc25-2fa6-4edf-b36c-03da66892a50-kube-api-access-k9mj5") pod "kube-proxy-mprfx" (UID: "c323cc25-2fa6-4edf-b36c-03da66892a50") : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
W1101 23:14:02.304244 127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.122285 4572 projected.go:192] Error preparing data for projected volume kube-api-access-wfqx2 for pod kube-system/storage-provisioner: [failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
W1101 23:14:02.304666  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.122380    4572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2eb4b78f-b029-431c-a5b6-34253c21c6ae-kube-api-access-wfqx2 podName:2eb4b78f-b029-431c-a5b6-34253c21c6ae nodeName:}" failed. No retries permitted until 2022-11-01 23:09:55.122350449 +0000 UTC m=+12.114999680 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wfqx2" (UniqueName: "kubernetes.io/projected/2eb4b78f-b029-431c-a5b6-34253c21c6ae-kube-api-access-wfqx2") pod "storage-provisioner" (UID: "2eb4b78f-b029-431c-a5b6-34253c21c6ae") : [failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
W1101 23:14:02.305088 127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.136572 4572 projected.go:192] Error preparing data for projected volume kube-api-access-2k56t for pod kube-system/coredns-6d4b75cb6d-r4qft: [failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
W1101 23:14:02.305507  127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.136676    4572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/93ea1e43-1509-4751-a91c-ee8a9f43f870-kube-api-access-2k56t podName:93ea1e43-1509-4751-a91c-ee8a9f43f870 nodeName:}" failed. No retries permitted until 2022-11-01 23:09:54.136638953 +0000 UTC m=+11.129288201 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2k56t" (UniqueName: "kubernetes.io/projected/93ea1e43-1509-4751-a91c-ee8a9f43f870-kube-api-access-2k56t") pod "coredns-6d4b75cb6d-r4qft" (UID: "93ea1e43-1509-4751-a91c-ee8a9f43f870") : [failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
I1101 23:14:02.328158 127145 logs.go:123] Gathering logs for dmesg ...
I1101 23:14:02.328187 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1101 23:14:02.342140 127145 logs.go:123] Gathering logs for describe nodes ...
I1101 23:14:02.342171 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1101 23:14:02.477646 127145 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
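The describe-nodes gather cannot succeed at this point: the reset tore down the control plane, so nothing is listening on the API server port. The endpoint can be probed directly from the node (a sketch, assuming curl is available in the node image):

    out/minikube-linux-amd64 ssh -p test-preload-230809 -- curl -ksS https://localhost:8443/healthz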
I1101 23:14:02.477672 127145 logs.go:123] Gathering logs for containerd ...
I1101 23:14:02.477684 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1101 23:14:02.532567 127145 logs.go:123] Gathering logs for container status ...
I1101 23:14:02.532606 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W1101 23:14:02.557929 127145 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1021-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W1101 23:14:01.956215 7126 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W1101 23:14:02.557965 127145 out.go:239] *
W1101 23:14:02.558080 127145 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1021-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W1101 23:14:01.956215 7126 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W1101 23:14:02.558101 127145 out.go:239] *
W1101 23:14:02.558873 127145 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1101 23:14:02.561381 127145 out.go:177] X Problems detected in kubelet:
I1101 23:14:02.562697 127145 out.go:177] Nov 01 23:09:52 test-preload-230809 kubelet[4572]: W1101 23:09:52.121441 4572 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
I1101 23:14:02.564125 127145 out.go:177] Nov 01 23:09:52 test-preload-230809 kubelet[4572]: E1101 23:09:52.121486 4572 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
I1101 23:14:02.565464 127145 out.go:177] Nov 01 23:09:52 test-preload-230809 kubelet[4572]: W1101 23:09:52.134778 4572 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
I1101 23:14:02.568183 127145 out.go:177]
W1101 23:14:02.569498 127145 out.go:239] X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1021-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W1101 23:14:01.956215 7126 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W1101 23:14:02.569611 127145 out.go:239] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
W1101 23:14:02.569659 127145 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/5484
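Note that the `lsof -p<port>` form in the suggestion actually takes a PID, not a port; to find the listener by port, the usual invocations are the following (run on whichever host holds the conflict, assuming `lsof` or `ss` is installed):

    sudo lsof -i :2379 -sTCP:LISTEN
    sudo ss -ltnp 'sport = :2379'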
I1101 23:14:02.571762 127145 out.go:177]
** /stderr **
preload_test.go:69: out/minikube-linux-amd64 start -p test-preload-230809 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.6 failed: exit status 81
panic.go:522: *** TestPreload FAILED at 2022-11-01 23:14:02.608867734 +0000 UTC m=+1751.745324176
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect test-preload-230809
helpers_test.go:235: (dbg) docker inspect test-preload-230809:
-- stdout --
[
{
"Id": "1b57f8fa7ffe3fa2fb6b495b0ae4fae337a81e9c8a685f4b1b889dba1bef8a8d",
"Created": "2022-11-01T23:08:11.051243831Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 123958,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-11-01T23:08:11.72901206Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
"ResolvConfPath": "/var/lib/docker/containers/1b57f8fa7ffe3fa2fb6b495b0ae4fae337a81e9c8a685f4b1b889dba1bef8a8d/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/1b57f8fa7ffe3fa2fb6b495b0ae4fae337a81e9c8a685f4b1b889dba1bef8a8d/hostname",
"HostsPath": "/var/lib/docker/containers/1b57f8fa7ffe3fa2fb6b495b0ae4fae337a81e9c8a685f4b1b889dba1bef8a8d/hosts",
"LogPath": "/var/lib/docker/containers/1b57f8fa7ffe3fa2fb6b495b0ae4fae337a81e9c8a685f4b1b889dba1bef8a8d/1b57f8fa7ffe3fa2fb6b495b0ae4fae337a81e9c8a685f4b1b889dba1bef8a8d-json.log",
"Name": "/test-preload-230809",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"test-preload-230809:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "test-preload-230809",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/709173a0301dc6c7f2d3648daeebfba94871f1297e5d6dc74beb24a9558aace6-init/diff:/var/lib/docker/overlay2/3304d2e292dd827b741fa7e7dfa0dd06c735a2abf2639025717eb96733168a33/diff:/var/lib/docker/overlay2/f66a2ec830111a507a160d2f7f58d1ab0df8159096f23d5da74ca81116f032a4/diff:/var/lib/docker/overlay2/58562370bf5535a09b5f3ac667ae66ace0239a84b1724c693027cd984380e69d/diff:/var/lib/docker/overlay2/ad70e4fabb7d3b3f908814730456a6f69256cb5bf3f6281cf2e1de2d9ad6e620/diff:/var/lib/docker/overlay2/372e614731843da3a6a8586e11682dd7031ded66b212170eab90ed3974b91656/diff:/var/lib/docker/overlay2/0d5e9529a6b310e7de135cb901fad0589f42c74f315a8d227b3f1058a0635d3a/diff:/var/lib/docker/overlay2/68e9f113391c7a1cb7cf63712d04a796653c1b7efd904081fd8696e3142066cb/diff:/var/lib/docker/overlay2/25d5a308de1516fe45d18cc8d3b35ae4e3de5999ad6bffc678475b1fa74ce54c/diff:/var/lib/docker/overlay2/4fbedef0e02e22b00c09b167edef3a01d1baaa6ae2581ce1816acceb7b82904f/diff:/var/lib/docker/overlay2/237634
e28f08af84128abf2ca5885d71bf5f916d63c6088eb178b0729931f43f/diff:/var/lib/docker/overlay2/c1e44e9be7cdbbc0eecc5b798955e90ab62ff8e89d859ab692d424b63f8db9a1/diff:/var/lib/docker/overlay2/945c70a7d8c420004bb39705628a454a575ae067a91da51362818da5f64779bc/diff:/var/lib/docker/overlay2/ed05d73c801ea52b22e058a7fa685c4412453d8e5f0af711d6c43dc75ea9f082/diff:/var/lib/docker/overlay2/4f5b59c087860f39c4b24105ac4677a11a5167aec2093628c48e263d18b25f68/diff:/var/lib/docker/overlay2/5535048bf0d8af7ed100e4121cd2d5d8b776a0155a6edccc3bea22e753d8597b/diff:/var/lib/docker/overlay2/51c67944173d540bb52c33e409e2cfb8d381dc5a649d02e5599384faf4caa6ff/diff:/var/lib/docker/overlay2/5a530f1cc647ab6a7e5fbe252ffbfada764bc01fee20f5f70ad2ebe08b60c7c5/diff:/var/lib/docker/overlay2/d4472d58828ae545a5beec970f632730af916c03aea959ec3ec7d64a0579b1ea/diff:/var/lib/docker/overlay2/6b823f45daca0146f21cbfbe06e22b48fd5bf7fcf086765dde5c36cc5ae90aed/diff:/var/lib/docker/overlay2/54b88f4723cfc7221b7f0789d171797ed1328bd24d62508bfa456753f3e5c2bc/diff:/var/lib/d
ocker/overlay2/44599d073f725ff40c4736e9287865ef0372f691d010db33ba7bf69574f74aca/diff:/var/lib/docker/overlay2/68defae06f1c119684bbec2cd0b360da76b8ab455d9a617b0b16ea22bd3617c5/diff:/var/lib/docker/overlay2/2dd86bf6ab6202500623423a11736ce7c2c96ebe5d83bb039f44f0d4981510b4/diff:/var/lib/docker/overlay2/335010880e7bbb7689d4210cb09578047fa8d34b6ded18dcc4d3d5a6cc4287fb/diff:/var/lib/docker/overlay2/d73ca7e5b5a047dfc79343e02709bae69f2414aaed6f2830edbd022af4e1e145/diff:/var/lib/docker/overlay2/dae580a357bf83dff3b3b546fb9cda97e6511f710c236784c68ce84657fb0337/diff:/var/lib/docker/overlay2/1842e3044746991dda288e11a2bee8a8857d749595d769968b661a0994c25215/diff:/var/lib/docker/overlay2/3fba19b5de3fbb9f62126949163b914e6dd8efdb65c12afd6e6d56214581b8a6/diff:/var/lib/docker/overlay2/6ec508232bae92f0262e74463db095e79b446d6658a903f74d6d9275dae17d55/diff:/var/lib/docker/overlay2/653b5d92bafd148a58b3febd568fb54d9ba1f3b109cac8e277d5177a216868c1/diff:/var/lib/docker/overlay2/5fb2dc662190229810bebc6d79e918be90b416edb8ee1e20e951e803195
3d813/diff:/var/lib/docker/overlay2/6484c79c5b005c0d8eef871cad9010368b5332e697cb3a01cc7cc94bfed33376/diff:/var/lib/docker/overlay2/81e5b96e2d4c2697e1c6962beb6e71da710754f42e32a941f732c4efab850973/diff:/var/lib/docker/overlay2/85036ccfe63574469e3678df6445e614574f07f77c334997fac7f3ee217f5c54/diff:/var/lib/docker/overlay2/7ff8315528872300329fdbd17f11d0ea04ab7c7778244a12bc621ae84f12cf77/diff:/var/lib/docker/overlay2/c32e188bd4ec64d8f716b7885ce228c89a3c4f2777d3e33ed448911d38ceba55/diff:/var/lib/docker/overlay2/142e8c88931b6205839c329cc5ab1f40b06e30f547860d743f6d571c95a75b91/diff:/var/lib/docker/overlay2/21f148a35621027811131428e59ec3709b661b2a56e8ebfee2a95b3cdfb407e7/diff:/var/lib/docker/overlay2/9111530a9968c33f38dab8aebccd5d93acbd8d331124b7d12a0da63f86ae5768/diff:/var/lib/docker/overlay2/59aee9dd537a039e02b73dce312bf35f6cd3d34146c96208a1461e4c82a284ca/diff:/var/lib/docker/overlay2/3e4cb9f6fecb0597fc001ef0ad000a46fd7410c70475a6e8d6fb98e6d5c4f42a/diff:/var/lib/docker/overlay2/90181e6f161e52f087dda33985e81570a08027
27ab8282224c85a24bea25782e/diff",
"MergedDir": "/var/lib/docker/overlay2/709173a0301dc6c7f2d3648daeebfba94871f1297e5d6dc74beb24a9558aace6/merged",
"UpperDir": "/var/lib/docker/overlay2/709173a0301dc6c7f2d3648daeebfba94871f1297e5d6dc74beb24a9558aace6/diff",
"WorkDir": "/var/lib/docker/overlay2/709173a0301dc6c7f2d3648daeebfba94871f1297e5d6dc74beb24a9558aace6/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "test-preload-230809",
"Source": "/var/lib/docker/volumes/test-preload-230809/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "test-preload-230809",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "test-preload-230809",
"name.minikube.sigs.k8s.io": "test-preload-230809",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "f41d9d697caf40359c40c070db997896587378c62f9b32141291ebfef9d888f7",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49277"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49276"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49273"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49275"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49274"
}
]
},
"SandboxKey": "/var/run/docker/netns/f41d9d697caf",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"test-preload-230809": {
"IPAMConfig": {
"IPv4Address": "192.168.67.2"
},
"Links": null,
"Aliases": [
"1b57f8fa7ffe",
"test-preload-230809"
],
"NetworkID": "ef9d1cae9ccd2ec6eccff63562d3f31087cc5f69489a45cf0405ab8b12bd43b5",
"EndpointID": "cfc33f91e4454c4b1ad1c7fe93f0b9346d14231539f2daad0e814e7356820704",
"Gateway": "192.168.67.1",
"IPAddress": "192.168.67.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:43:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
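Only a handful of fields in the dump above matter for this post-mortem (container state, published ports, network address). They can be pulled directly with --format templates of the same kind minikube itself runs later in this log; a sketch:

# Hedged sketch: narrow docker inspect to the post-mortem-relevant fields.
docker inspect test-preload-230809 --format '{{.State.Status}}'
docker inspect test-preload-230809 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
docker inspect test-preload-230809 --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'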
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-230809 -n test-preload-230809
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-230809 -n test-preload-230809: exit status 2 (335.778728ms)
-- stdout --
Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p test-preload-230809 logs -n 25
helpers_test.go:252: TestPreload logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| ssh | multinode-225952 ssh -n | multinode-225952 | jenkins | v1.27.1 | 01 Nov 22 23:02 UTC | 01 Nov 22 23:02 UTC |
| | multinode-225952-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-225952 cp multinode-225952-m03:/home/docker/cp-test.txt | multinode-225952 | jenkins | v1.27.1 | 01 Nov 22 23:02 UTC | 01 Nov 22 23:02 UTC |
| | multinode-225952:/home/docker/cp-test_multinode-225952-m03_multinode-225952.txt | | | | | |
| ssh | multinode-225952 ssh -n | multinode-225952 | jenkins | v1.27.1 | 01 Nov 22 23:02 UTC | 01 Nov 22 23:02 UTC |
| | multinode-225952-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-225952 ssh -n multinode-225952 sudo cat | multinode-225952 | jenkins | v1.27.1 | 01 Nov 22 23:02 UTC | 01 Nov 22 23:02 UTC |
| | /home/docker/cp-test_multinode-225952-m03_multinode-225952.txt | | | | | |
| cp | multinode-225952 cp multinode-225952-m03:/home/docker/cp-test.txt | multinode-225952 | jenkins | v1.27.1 | 01 Nov 22 23:02 UTC | 01 Nov 22 23:02 UTC |
| | multinode-225952-m02:/home/docker/cp-test_multinode-225952-m03_multinode-225952-m02.txt | | | | | |
| ssh | multinode-225952 ssh -n | multinode-225952 | jenkins | v1.27.1 | 01 Nov 22 23:02 UTC | 01 Nov 22 23:02 UTC |
| | multinode-225952-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-225952 ssh -n multinode-225952-m02 sudo cat | multinode-225952 | jenkins | v1.27.1 | 01 Nov 22 23:02 UTC | 01 Nov 22 23:02 UTC |
| | /home/docker/cp-test_multinode-225952-m03_multinode-225952-m02.txt | | | | | |
| node | multinode-225952 node stop m03 | multinode-225952 | jenkins | v1.27.1 | 01 Nov 22 23:02 UTC | 01 Nov 22 23:02 UTC |
| node | multinode-225952 node start | multinode-225952 | jenkins | v1.27.1 | 01 Nov 22 23:02 UTC | 01 Nov 22 23:02 UTC |
| | m03 --alsologtostderr | | | | | |
| node | list -p multinode-225952 | multinode-225952 | jenkins | v1.27.1 | 01 Nov 22 23:02 UTC | |
| stop | -p multinode-225952 | multinode-225952 | jenkins | v1.27.1 | 01 Nov 22 23:02 UTC | 01 Nov 22 23:03 UTC |
| start | -p multinode-225952 | multinode-225952 | jenkins | v1.27.1 | 01 Nov 22 23:03 UTC | 01 Nov 22 23:05 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| node | list -p multinode-225952 | multinode-225952 | jenkins | v1.27.1 | 01 Nov 22 23:05 UTC | |
| node | multinode-225952 node delete | multinode-225952 | jenkins | v1.27.1 | 01 Nov 22 23:05 UTC | 01 Nov 22 23:05 UTC |
| | m03 | | | | | |
| stop | multinode-225952 stop | multinode-225952 | jenkins | v1.27.1 | 01 Nov 22 23:05 UTC | 01 Nov 22 23:06 UTC |
| start | -p multinode-225952 | multinode-225952 | jenkins | v1.27.1 | 01 Nov 22 23:06 UTC | 01 Nov 22 23:07 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| node | list -p multinode-225952 | multinode-225952 | jenkins | v1.27.1 | 01 Nov 22 23:07 UTC | |
| start | -p multinode-225952-m02 | multinode-225952-m02 | jenkins | v1.27.1 | 01 Nov 22 23:07 UTC | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p multinode-225952-m03 | multinode-225952-m03 | jenkins | v1.27.1 | 01 Nov 22 23:07 UTC | 01 Nov 22 23:08 UTC |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| node | add -p multinode-225952 | multinode-225952 | jenkins | v1.27.1 | 01 Nov 22 23:08 UTC | |
| delete | -p multinode-225952-m03 | multinode-225952-m03 | jenkins | v1.27.1 | 01 Nov 22 23:08 UTC | 01 Nov 22 23:08 UTC |
| delete | -p multinode-225952 | multinode-225952 | jenkins | v1.27.1 | 01 Nov 22 23:08 UTC | 01 Nov 22 23:08 UTC |
| start | -p test-preload-230809 | test-preload-230809 | jenkins | v1.27.1 | 01 Nov 22 23:08 UTC | 01 Nov 22 23:09 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.4 | | | | | |
| ssh | -p test-preload-230809 | test-preload-230809 | jenkins | v1.27.1 | 01 Nov 22 23:09 UTC | 01 Nov 22 23:09 UTC |
| | -- sudo crictl pull | | | | | |
| | gcr.io/k8s-minikube/busybox | | | | | |
| start | -p test-preload-230809 | test-preload-230809 | jenkins | v1.27.1 | 01 Nov 22 23:09 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --wait=true --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.6 | | | | | |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/11/01 23:09:02
Running on machine: ubuntu-20-agent-10
Binary: Built with gc go1.19.2 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1101 23:09:02.101256 127145 out.go:296] Setting OutFile to fd 1 ...
I1101 23:09:02.101369 127145 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1101 23:09:02.101380 127145 out.go:309] Setting ErrFile to fd 2...
I1101 23:09:02.101385 127145 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1101 23:09:02.101473 127145 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15232-6112/.minikube/bin
I1101 23:09:02.101987 127145 out.go:303] Setting JSON to false
I1101 23:09:02.102936 127145 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3088,"bootTime":1667341054,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1101 23:09:02.102996 127145 start.go:126] virtualization: kvm guest
I1101 23:09:02.105803 127145 out.go:177] * [test-preload-230809] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
I1101 23:09:02.107347 127145 notify.go:220] Checking for updates...
I1101 23:09:02.108879 127145 out.go:177] - MINIKUBE_LOCATION=15232
I1101 23:09:02.110538 127145 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1101 23:09:02.112123 127145 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15232-6112/kubeconfig
I1101 23:09:02.113662 127145 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15232-6112/.minikube
I1101 23:09:02.115184 127145 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I1101 23:09:02.116881 127145 config.go:180] Loaded profile config "test-preload-230809": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I1101 23:09:02.118764 127145 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
I1101 23:09:02.120144 127145 driver.go:365] Setting default libvirt URI to qemu:///system
I1101 23:09:02.148923 127145 docker.go:137] docker version: linux-20.10.21
I1101 23:09:02.149004 127145 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1101 23:09:02.241848 127145 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:39 SystemTime:2022-11-01 23:09:02.16794253 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1101 23:09:02.241979 127145 docker.go:254] overlay module found
I1101 23:09:02.245118 127145 out.go:177] * Using the docker driver based on existing profile
I1101 23:09:02.246572 127145 start.go:282] selected driver: docker
I1101 23:09:02.246590 127145 start.go:808] validating driver "docker" against &{Name:test-preload-230809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-230809 Namespace:default APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1101 23:09:02.246667 127145 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1101 23:09:02.247466 127145 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1101 23:09:02.338554 127145 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:39 SystemTime:2022-11-01 23:09:02.266470239 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1101 23:09:02.338791 127145 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1101 23:09:02.338813 127145 cni.go:95] Creating CNI manager for ""
I1101 23:09:02.338820 127145 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I1101 23:09:02.338831 127145 start_flags.go:317] config:
{Name:test-preload-230809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-230809 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1101 23:09:02.341335 127145 out.go:177] * Starting control plane node test-preload-230809 in cluster test-preload-230809
I1101 23:09:02.342819 127145 cache.go:120] Beginning downloading kic base image for docker with containerd
I1101 23:09:02.344289 127145 out.go:177] * Pulling base image ...
I1101 23:09:02.345773 127145 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I1101 23:09:02.345854 127145 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
I1101 23:09:02.367470 127145 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
I1101 23:09:02.367494 127145 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
I1101 23:09:02.456956 127145 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
I1101 23:09:02.456979 127145 cache.go:57] Caching tarball of preloaded images
I1101 23:09:02.457299 127145 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I1101 23:09:02.459387 127145 out.go:177] * Downloading Kubernetes v1.24.6 preload ...
I1101 23:09:02.460985 127145 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I1101 23:09:02.574127 127145 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4?checksum=md5:0de094b674a9198bc47721c3b23603d5 -> /home/jenkins/minikube-integration/15232-6112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
I1101 23:09:07.458996 127145 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I1101 23:09:07.459100 127145 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15232-6112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
I1101 23:09:08.389256 127145 cache.go:60] Finished verifying existence of preloaded tar for v1.24.6 on containerd
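The download at 23:09:02 carries the expected digest in its checksum= query parameter, and minikube re-verifies it after saving, as seen above. The same check can be reproduced by hand; a sketch assuming the URL and md5 exactly as logged:

# Hedged sketch: fetch and verify the v1.24.6 containerd preload tarball manually.
curl -sLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
echo "0de094b674a9198bc47721c3b23603d5  preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4" | md5sum -c -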
I1101 23:09:08.389384 127145 profile.go:148] Saving config to /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/config.json ...
I1101 23:09:08.389578 127145 cache.go:208] Successfully downloaded all kic artifacts
I1101 23:09:08.389617 127145 start.go:364] acquiring machines lock for test-preload-230809: {Name:mke051021b2965b04875f4fe9250ee1fc48098e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1101 23:09:08.389726 127145 start.go:368] acquired machines lock for "test-preload-230809" in 76.094µs
I1101 23:09:08.389751 127145 start.go:96] Skipping create...Using existing machine configuration
I1101 23:09:08.389762 127145 fix.go:55] fixHost starting:
I1101 23:09:08.390003 127145 cli_runner.go:164] Run: docker container inspect test-preload-230809 --format={{.State.Status}}
I1101 23:09:08.411982 127145 fix.go:103] recreateIfNeeded on test-preload-230809: state=Running err=<nil>
W1101 23:09:08.412027 127145 fix.go:129] unexpected machine state, will restart: <nil>
I1101 23:09:08.414797 127145 out.go:177] * Updating the running docker "test-preload-230809" container ...
I1101 23:09:08.416264 127145 machine.go:88] provisioning docker machine ...
I1101 23:09:08.416295 127145 ubuntu.go:169] provisioning hostname "test-preload-230809"
I1101 23:09:08.416338 127145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-230809
I1101 23:09:08.439734 127145 main.go:134] libmachine: Using SSH client type: native
I1101 23:09:08.440024 127145 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49277 <nil> <nil>}
I1101 23:09:08.440069 127145 main.go:134] libmachine: About to run SSH command:
sudo hostname test-preload-230809 && echo "test-preload-230809" | sudo tee /etc/hostname
I1101 23:09:08.562938 127145 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-230809
I1101 23:09:08.563010 127145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-230809
I1101 23:09:08.585385 127145 main.go:134] libmachine: Using SSH client type: native
I1101 23:09:08.585561 127145 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49277 <nil> <nil>}
I1101 23:09:08.585590 127145 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\stest-preload-230809' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-230809/g' /etc/hosts;
else
echo '127.0.1.1 test-preload-230809' | sudo tee -a /etc/hosts;
fi
fi
I1101 23:09:08.698901 127145 main.go:134] libmachine: SSH cmd err, output: <nil>:
I1101 23:09:08.698934 127145 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15232-6112/.minikube CaCertPath:/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15232-6112/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15232-6112/.minikube}
I1101 23:09:08.698966 127145 ubuntu.go:177] setting up certificates
I1101 23:09:08.698978 127145 provision.go:83] configureAuth start
I1101 23:09:08.699037 127145 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-230809
I1101 23:09:08.721518 127145 provision.go:138] copyHostCerts
I1101 23:09:08.721585 127145 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-6112/.minikube/ca.pem, removing ...
I1101 23:09:08.721599 127145 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-6112/.minikube/ca.pem
I1101 23:09:08.721689 127145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15232-6112/.minikube/ca.pem (1078 bytes)
I1101 23:09:08.721805 127145 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-6112/.minikube/cert.pem, removing ...
I1101 23:09:08.721820 127145 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-6112/.minikube/cert.pem
I1101 23:09:08.721860 127145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15232-6112/.minikube/cert.pem (1123 bytes)
I1101 23:09:08.721933 127145 exec_runner.go:144] found /home/jenkins/minikube-integration/15232-6112/.minikube/key.pem, removing ...
I1101 23:09:08.721947 127145 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15232-6112/.minikube/key.pem
I1101 23:09:08.721984 127145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15232-6112/.minikube/key.pem (1675 bytes)
I1101 23:09:08.722065 127145 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15232-6112/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca-key.pem org=jenkins.test-preload-230809 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-230809]
I1101 23:09:09.342668 127145 provision.go:172] copyRemoteCerts
I1101 23:09:09.342737 127145 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1101 23:09:09.342788 127145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-230809
I1101 23:09:09.365869 127145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/test-preload-230809/id_rsa Username:docker}
I1101 23:09:09.450803 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1101 23:09:09.467332 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I1101 23:09:09.484069 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1101 23:09:09.500288 127145 provision.go:86] duration metric: configureAuth took 801.291693ms
I1101 23:09:09.500314 127145 ubuntu.go:193] setting minikube options for container-runtime
I1101 23:09:09.500489 127145 config.go:180] Loaded profile config "test-preload-230809": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.6
I1101 23:09:09.500504 127145 machine.go:91] provisioned docker machine in 1.084227489s
I1101 23:09:09.500512 127145 start.go:300] post-start starting for "test-preload-230809" (driver="docker")
I1101 23:09:09.500518 127145 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1101 23:09:09.500574 127145 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1101 23:09:09.500612 127145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-230809
I1101 23:09:09.523524 127145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/test-preload-230809/id_rsa Username:docker}
I1101 23:09:09.606420 127145 ssh_runner.go:195] Run: cat /etc/os-release
I1101 23:09:09.608955 127145 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1101 23:09:09.608997 127145 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1101 23:09:09.609008 127145 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1101 23:09:09.609014 127145 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I1101 23:09:09.609026 127145 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-6112/.minikube/addons for local assets ...
I1101 23:09:09.609074 127145 filesync.go:126] Scanning /home/jenkins/minikube-integration/15232-6112/.minikube/files for local assets ...
I1101 23:09:09.609141 127145 filesync.go:149] local asset: /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/128402.pem -> 128402.pem in /etc/ssl/certs
I1101 23:09:09.609211 127145 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1101 23:09:09.615422 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/128402.pem --> /etc/ssl/certs/128402.pem (1708 bytes)
I1101 23:09:09.632348 127145 start.go:303] post-start completed in 131.826095ms
I1101 23:09:09.632431 127145 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1101 23:09:09.632484 127145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-230809
I1101 23:09:09.655572 127145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/test-preload-230809/id_rsa Username:docker}
I1101 23:09:09.739833 127145 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1101 23:09:09.743685 127145 fix.go:57] fixHost completed within 1.353918347s
I1101 23:09:09.743711 127145 start.go:83] releasing machines lock for "test-preload-230809", held for 1.353965858s
I1101 23:09:09.743793 127145 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-230809
I1101 23:09:09.766548 127145 ssh_runner.go:195] Run: systemctl --version
I1101 23:09:09.766597 127145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-230809
I1101 23:09:09.766663 127145 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I1101 23:09:09.766716 127145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-230809
I1101 23:09:09.792264 127145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/test-preload-230809/id_rsa Username:docker}
I1101 23:09:09.792322 127145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49277 SSHKeyPath:/home/jenkins/minikube-integration/15232-6112/.minikube/machines/test-preload-230809/id_rsa Username:docker}
I1101 23:09:09.888741 127145 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1101 23:09:09.898412 127145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1101 23:09:09.907129 127145 docker.go:189] disabling docker service ...
I1101 23:09:09.907178 127145 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1101 23:09:09.916127 127145 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1101 23:09:09.924535 127145 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1101 23:09:10.021637 127145 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1101 23:09:10.121893 127145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1101 23:09:10.130949 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1101 23:09:10.143348 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
I1101 23:09:10.150803 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
I1101 23:09:10.158084 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
I1101 23:09:10.165427 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
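The four sed commands above patch /etc/containerd/config.toml in place; the values they leave behind can be spot-checked from the host. A sketch, with the expected values taken from the sed expressions themselves (the surrounding TOML sections depend on the image's default config):

# Hedged sketch: confirm the containerd settings minikube just rewrote.
docker exec test-preload-230809 sudo grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml
# Expected per the sed edits: sandbox_image = "k8s.gcr.io/pause:3.7",
# restrict_oom_score_adj = false, SystemdCgroup = false, conf_dir = "/etc/cni/net.mk"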
I1101 23:09:10.172620 127145 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1101 23:09:10.178500 127145 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1101 23:09:10.184228 127145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 23:09:10.274591 127145 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1101 23:09:10.352393 127145 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
I1101 23:09:10.352463 127145 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1101 23:09:10.357122 127145 start.go:472] Will wait 60s for crictl version
I1101 23:09:10.357191 127145 ssh_runner.go:195] Run: sudo crictl version
I1101 23:09:10.392488 127145 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
stdout:
stderr:
time="2022-11-01T23:09:10Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
I1101 23:09:21.439528 127145 ssh_runner.go:195] Run: sudo crictl version
I1101 23:09:21.462449 127145 start.go:481] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.6.9
RuntimeApiVersion: v1alpha2
I1101 23:09:21.462510 127145 ssh_runner.go:195] Run: containerd --version
I1101 23:09:21.484971 127145 ssh_runner.go:195] Run: containerd --version
I1101 23:09:21.509013 127145 out.go:177] * Preparing Kubernetes v1.24.6 on containerd 1.6.9 ...
I1101 23:09:21.510580 127145 cli_runner.go:164] Run: docker network inspect test-preload-230809 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1101 23:09:21.532621 127145 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
I1101 23:09:21.536061 127145 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
I1101 23:09:21.536135 127145 ssh_runner.go:195] Run: sudo crictl images --output json
I1101 23:09:21.558771 127145 containerd.go:549] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.6". assuming images are not preloaded.
I1101 23:09:21.558833 127145 ssh_runner.go:195] Run: which lz4
I1101 23:09:21.561739 127145 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I1101 23:09:21.564671 127145 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/preloaded.tar.lz4': No such file or directory
I1101 23:09:21.564695 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458739102 bytes)
I1101 23:09:22.512481 127145 containerd.go:496] Took 0.950765 seconds to copy over tarball
I1101 23:09:22.512539 127145 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I1101 23:09:25.309553 127145 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.796992099s)
I1101 23:09:25.309668 127145 containerd.go:503] Took 2.797150 seconds to extract the tarball
I1101 23:09:25.309687 127145 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1101 23:09:25.324395 127145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 23:09:25.422371 127145 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1101 23:09:25.510170 127145 ssh_runner.go:195] Run: sudo crictl images --output json
I1101 23:09:25.538232 127145 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.24.6 k8s.gcr.io/kube-controller-manager:v1.24.6 k8s.gcr.io/kube-scheduler:v1.24.6 k8s.gcr.io/kube-proxy:v1.24.6 k8s.gcr.io/pause:3.7 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
I1101 23:09:25.538307 127145 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I1101 23:09:25.538343 127145 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.24.6
I1101 23:09:25.538380 127145 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.24.6
I1101 23:09:25.538401 127145 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.24.6
I1101 23:09:25.538410 127145 image.go:134] retrieving image: k8s.gcr.io/pause:3.7
I1101 23:09:25.538365 127145 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
I1101 23:09:25.538347 127145 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.3-0
I1101 23:09:25.538380 127145 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.24.6
I1101 23:09:25.539377 127145 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I1101 23:09:25.539486 127145 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.24.6: Error: No such image: k8s.gcr.io/kube-apiserver:v1.24.6
I1101 23:09:25.539520 127145 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.24.6: Error: No such image: k8s.gcr.io/kube-scheduler:v1.24.6
I1101 23:09:25.539552 127145 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.24.6: Error: No such image: k8s.gcr.io/kube-proxy:v1.24.6
I1101 23:09:25.539747 127145 image.go:177] daemon lookup for k8s.gcr.io/pause:3.7: Error: No such image: k8s.gcr.io/pause:3.7
I1101 23:09:25.540025 127145 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.5.3-0: Error: No such image: k8s.gcr.io/etcd:3.5.3-0
I1101 23:09:25.540223 127145 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.24.6: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.24.6
I1101 23:09:25.540448 127145 image.go:177] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.6: Error: No such image: k8s.gcr.io/coredns/coredns:v1.8.6
I1101 23:09:25.987285 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.7"
I1101 23:09:25.999857 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
I1101 23:09:26.002925 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.6"
I1101 23:09:26.009305 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.6"
I1101 23:09:26.050246 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.3-0"
I1101 23:09:26.065466 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.6"
I1101 23:09:26.075511 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.6"
I1101 23:09:26.363138 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
I1101 23:09:26.825611 127145 cache_images.go:116] "k8s.gcr.io/pause:3.7" needs transfer: "k8s.gcr.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
I1101 23:09:26.825704 127145 cri.go:216] Removing image: k8s.gcr.io/pause:3.7
I1101 23:09:26.825763 127145 ssh_runner.go:195] Run: which crictl
I1101 23:09:26.922091 127145 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
I1101 23:09:26.922201 127145 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
I1101 23:09:26.922266 127145 ssh_runner.go:195] Run: which crictl
I1101 23:09:26.935023 127145 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.24.6" needs transfer: "k8s.gcr.io/kube-apiserver:v1.24.6" does not exist at hash "860f263331c9513ddab44d4d8a9a4a7304313b3aa0776decc1d7fc6acdd69ab0" in container runtime
I1101 23:09:26.935049 127145 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.24.6" needs transfer: "k8s.gcr.io/kube-proxy:v1.24.6" does not exist at hash "0bb39497ab33bb5f8aaff88ced53a5fcd360fcf5da647609619d4f5c8f1483d2" in container runtime
I1101 23:09:26.935073 127145 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.24.6
I1101 23:09:26.935157 127145 ssh_runner.go:195] Run: which crictl
I1101 23:09:26.935073 127145 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.24.6
I1101 23:09:26.935237 127145 ssh_runner.go:195] Run: which crictl
I1101 23:09:27.033281 127145 cache_images.go:116] "k8s.gcr.io/etcd:3.5.3-0" needs transfer: "k8s.gcr.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
I1101 23:09:27.033386 127145 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.3-0
I1101 23:09:27.033448 127145 ssh_runner.go:195] Run: which crictl
I1101 23:09:27.118607 127145 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.6": (1.053106276s)
I1101 23:09:27.197931 127145 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.24.6" needs transfer: "k8s.gcr.io/kube-scheduler:v1.24.6" does not exist at hash "c786c777a4e1c21907e77042428837645fa382d3bd14925cf78f0d163d6d332e" in container runtime
I1101 23:09:27.118727 127145 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.6": (1.043182812s)
I1101 23:09:27.145553 127145 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
I1101 23:09:27.198012 127145 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.24.6
I1101 23:09:27.198041 127145 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I1101 23:09:27.198067 127145 ssh_runner.go:195] Run: which crictl
I1101 23:09:27.198114 127145 ssh_runner.go:195] Run: which crictl
I1101 23:09:27.145664 127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7
I1101 23:09:27.145702 127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
I1101 23:09:27.145736 127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6
I1101 23:09:27.145736 127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.6
I1101 23:09:27.145776 127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0
I1101 23:09:27.197981 127145 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.24.6" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.24.6" does not exist at hash "c6c20157a42337ecb7675be59e1dc34bc5a91288c7eeac1e30ec97767a9055eb" in container runtime
I1101 23:09:27.198282 127145 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.24.6
I1101 23:09:27.198319 127145 ssh_runner.go:195] Run: which crictl
I1101 23:09:28.633346 127145 ssh_runner.go:235] Completed: which crictl: (1.435002706s)
I1101 23:09:28.633407 127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.6
I1101 23:09:28.633499 127145 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.6: (1.435244347s)
I1101 23:09:28.633520 127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.6
I1101 23:09:28.633558 127145 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0: (1.435295917s)
I1101 23:09:28.633570 127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0
I1101 23:09:28.633630 127145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
I1101 23:09:28.633718 127145 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6: (1.435492576s)
I1101 23:09:28.633737 127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
I1101 23:09:28.633801 127145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
I1101 23:09:28.633883 127145 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.6: (1.435647522s)
I1101 23:09:28.633895 127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.6
I1101 23:09:28.633934 127145 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7: (1.43573031s)
I1101 23:09:28.633961 127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7
I1101 23:09:28.633997 127145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
I1101 23:09:28.634036 127145 ssh_runner.go:235] Completed: which crictl: (1.435871833s)
I1101 23:09:28.634053 127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1101 23:09:28.634098 127145 ssh_runner.go:235] Completed: which crictl: (1.436023391s)
I1101 23:09:28.634122 127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.6
I1101 23:09:28.778449 127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
I1101 23:09:28.778478 127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.6
I1101 23:09:28.778546 127145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
I1101 23:09:28.778569 127145 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
I1101 23:09:28.778584 127145 containerd.go:233] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
I1101 23:09:28.778593 127145 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
I1101 23:09:28.778618 127145 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
I1101 23:09:28.778652 127145 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
I1101 23:09:28.779903 127145 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.6
I1101 23:09:28.781996 127145 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
I1101 23:09:36.182104 127145 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0: (7.403463536s)
I1101 23:09:36.182144 127145 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 from cache
I1101 23:09:36.182176 127145 containerd.go:233] Loading image: /var/lib/minikube/images/coredns_v1.8.6
I1101 23:09:36.182237 127145 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
I1101 23:09:38.315093 127145 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6: (2.132819455s)
I1101 23:09:38.315128 127145 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache
I1101 23:09:38.315167 127145 containerd.go:233] Loading image: /var/lib/minikube/images/pause_3.7
I1101 23:09:38.315245 127145 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7
I1101 23:09:38.532314 127145 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 from cache
I1101 23:09:38.532357 127145 containerd.go:233] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I1101 23:09:38.532411 127145 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
I1101 23:09:39.739922 127145 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5: (1.207479048s)
I1101 23:09:39.739955 127145 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I1101 23:09:39.740004 127145 cache_images.go:92] LoadImages completed in 14.201748543s
W1101 23:09:39.740191 127145 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/15232-6112/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.6: no such file or directory
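Every image above goes through the same three steps: probe containerd's k8s.io namespace for the tag, clear any stale entry with crictl, then import the cached tarball (the copy is skipped when the file is already on the node, as the "copy: skipping ... (exists)" lines show). The cached kube-apiserver tarball is missing on the host, which is why LoadImages completes but the warning above still fires. A minimal local sketch of the per-image flow, with illustrative paths and the ssh hop elided:

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// imagePresent mirrors `sudo ctr -n=k8s.io images check | grep <ref>`.
func imagePresent(ref string) bool {
    out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "check").Output()
    return err == nil && strings.Contains(string(out), ref)
}

// loadCached mirrors the crictl rmi + ctr import pair traced above.
func loadCached(ref, tarball string) error {
    if imagePresent(ref) {
        return nil // already in the runtime, nothing to transfer
    }
    // Best-effort removal of a stale tag before the import.
    exec.Command("sudo", "crictl", "rmi", ref).Run()
    return exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tarball).Run()
}

func main() {
    fmt.Println(loadCached("k8s.gcr.io/etcd:3.5.3-0", "/var/lib/minikube/images/etcd_3.5.3-0"))
}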
I1101 23:09:39.740259 127145 ssh_runner.go:195] Run: sudo crictl info
I1101 23:09:39.816714 127145 cni.go:95] Creating CNI manager for ""
I1101 23:09:39.816751 127145 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I1101 23:09:39.816770 127145 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1101 23:09:39.816787 127145 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-230809 NodeName:test-preload-230809 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
I1101 23:09:39.816973 127145 kubeadm.go:161] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.67.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "test-preload-230809"
  kubeletExtraArgs:
    node-ip: 192.168.67.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.6
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1101 23:09:39.817109 127145 kubeadm.go:962] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-230809 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.24.6 ClusterName:test-preload-230809 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I1101 23:09:39.817179 127145 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.6
I1101 23:09:39.826621 127145 binaries.go:44] Found k8s binaries, skipping transfer
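The skip decision hinges on a single remote `ls`: if the versioned binaries directory lists cleanly, kubelet and kubeadm are already in place and nothing is transferred. Roughly (local exec standing in for the ssh runner, path illustrative):

package main

import (
    "fmt"
    "os/exec"
)

// binariesPresent mirrors `sudo ls /var/lib/minikube/binaries/<version>`:
// a zero exit status means the k8s binaries are already on the node.
func binariesPresent(version string) bool {
    return exec.Command("sudo", "ls", "/var/lib/minikube/binaries/"+version).Run() == nil
}

func main() {
    fmt.Println(binariesPresent("v1.24.6"))
}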
I1101 23:09:39.826677 127145 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1101 23:09:39.835648 127145 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (512 bytes)
I1101 23:09:39.916772 127145 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1101 23:09:39.932259 127145 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
I1101 23:09:39.947304 127145 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I1101 23:09:39.950835 127145 certs.go:54] Setting up /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809 for IP: 192.168.67.2
I1101 23:09:39.950959 127145 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15232-6112/.minikube/ca.key
I1101 23:09:39.951010 127145 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15232-6112/.minikube/proxy-client-ca.key
I1101 23:09:39.951103 127145 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/client.key
I1101 23:09:39.951220 127145 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/apiserver.key.c7fa3a9e
I1101 23:09:39.951278 127145 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/proxy-client.key
I1101 23:09:39.951418 127145 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/12840.pem (1338 bytes)
W1101 23:09:39.951461 127145 certs.go:384] ignoring /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/12840_empty.pem, impossibly tiny 0 bytes
I1101 23:09:39.951476 127145 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca-key.pem (1679 bytes)
I1101 23:09:39.951510 127145 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/ca.pem (1078 bytes)
I1101 23:09:39.951551 127145 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/cert.pem (1123 bytes)
I1101 23:09:39.951584 127145 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/certs/home/jenkins/minikube-integration/15232-6112/.minikube/certs/key.pem (1675 bytes)
I1101 23:09:39.951640 127145 certs.go:388] found cert: /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/128402.pem (1708 bytes)
I1101 23:09:39.952459 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1101 23:09:40.018330 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1101 23:09:40.038985 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1101 23:09:40.059337 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1101 23:09:40.127519 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1101 23:09:40.147768 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1101 23:09:40.216763 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1101 23:09:40.238171 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1101 23:09:40.265559 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/certs/12840.pem --> /usr/share/ca-certificates/12840.pem (1338 bytes)
I1101 23:09:40.332847 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/files/etc/ssl/certs/128402.pem --> /usr/share/ca-certificates/128402.pem (1708 bytes)
I1101 23:09:40.354317 127145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15232-6112/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1101 23:09:40.414264 127145 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1101 23:09:40.430591 127145 ssh_runner.go:195] Run: openssl version
I1101 23:09:40.436602 127145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12840.pem && ln -fs /usr/share/ca-certificates/12840.pem /etc/ssl/certs/12840.pem"
I1101 23:09:40.445840 127145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12840.pem
I1101 23:09:40.449377 127145 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov 1 22:50 /usr/share/ca-certificates/12840.pem
I1101 23:09:40.449430 127145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12840.pem
I1101 23:09:40.456569 127145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12840.pem /etc/ssl/certs/51391683.0"
I1101 23:09:40.464390 127145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128402.pem && ln -fs /usr/share/ca-certificates/128402.pem /etc/ssl/certs/128402.pem"
I1101 23:09:40.514612 127145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128402.pem
I1101 23:09:40.518320 127145 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov 1 22:50 /usr/share/ca-certificates/128402.pem
I1101 23:09:40.518385 127145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128402.pem
I1101 23:09:40.524764 127145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128402.pem /etc/ssl/certs/3ec20f2e.0"
I1101 23:09:40.533275 127145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1101 23:09:40.542165 127145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1101 23:09:40.545871 127145 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov 1 22:46 /usr/share/ca-certificates/minikubeCA.pem
I1101 23:09:40.545917 127145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1101 23:09:40.550867 127145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
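Each CA lands twice: under a readable name in /usr/share/ca-certificates, and as a hash-named symlink in /etc/ssl/certs, because OpenSSL resolves trust by subject hash (b5213941.0 above is `openssl x509 -hash` of minikubeCA.pem). A sketch of that convention, with illustrative paths:

package main

import (
    "fmt"
    "os"
    "os/exec"
    "strings"
)

// installCert exposes pem under /etc/ssl/certs/<subject-hash>.0, the layout
// OpenSSL's hashed-directory lookup expects.
func installCert(pem string) (string, error) {
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    if err != nil {
        return "", err
    }
    link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    os.Remove(link) // ln -fs semantics: replace any stale link first
    return link, os.Symlink(pem, link)
}

func main() {
    fmt.Println(installCert("/usr/share/ca-certificates/minikubeCA.pem"))
}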
I1101 23:09:40.558550 127145 kubeadm.go:396] StartCluster: {Name:test-preload-230809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-230809 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1101 23:09:40.558652 127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1101 23:09:40.558703 127145 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1101 23:09:40.637065 127145 cri.go:87] found id: "e6d03d027c4d1a3003df82cf9d4d3ecff8142ccbfb55e98924ae237bb4f1228c"
I1101 23:09:40.637096 127145 cri.go:87] found id: "514280d446c28719c4288d8933d306bc638eb138421419ac3e8c984443017720"
I1101 23:09:40.637108 127145 cri.go:87] found id: "afef2fbc253ad82a2dfd9afacb362da4a20ca08f1f9d6008377794501540f11a"
I1101 23:09:40.637121 127145 cri.go:87] found id: "dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8"
I1101 23:09:40.637131 127145 cri.go:87] found id: ""
I1101 23:09:40.637166 127145 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I1101 23:09:40.735629 127145 cri.go:114] JSON = [{"ociVersion":"1.0.2-dev","id":"12f63aa1ca7d1ffe1d6116a2508352b1aa495e5ddea1b61f00d22cbd3da01cb5","pid":2624,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12f63aa1ca7d1ffe1d6116a2508352b1aa495e5ddea1b61f00d22cbd3da01cb5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12f63aa1ca7d1ffe1d6116a2508352b1aa495e5ddea1b61f00d22cbd3da01cb5/rootfs","created":"2022-11-01T23:08:58.356227997Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"f7bafd00df35e5156ebcd0619ba12cc59780327713c7664056c4c615561e2ac1","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"25ce3a72b2a91bc7814fdb008d3d585014ccb8aa2efc778dbff383dc9a8389a1","pid":2147,"st
atus":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/25ce3a72b2a91bc7814fdb008d3d585014ccb8aa2efc778dbff383dc9a8389a1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/25ce3a72b2a91bc7814fdb008d3d585014ccb8aa2efc778dbff383dc9a8389a1/rootfs","created":"2022-11-01T23:08:50.712751348Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"25ce3a72b2a91bc7814fdb008d3d585014ccb8aa2efc778dbff383dc9a8389a1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-55wll_18a63bc3-b29d-45a5-98a8-3f37cfef3c7b","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-55wll","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4be478d3f5ddfd8992e97b9386a7e5af4530f4d5a7613818be81fef579a8b424","pid":1508,"status":
"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4be478d3f5ddfd8992e97b9386a7e5af4530f4d5a7613818be81fef579a8b424","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4be478d3f5ddfd8992e97b9386a7e5af4530f4d5a7613818be81fef579a8b424/rootfs","created":"2022-11-01T23:08:30.466593305Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"4be478d3f5ddfd8992e97b9386a7e5af4530f4d5a7613818be81fef579a8b424","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-230809_37b967577315f9064699b525aec41d0d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"57dd7937ba40dcb1fddb3b376ab9f017ca259ddc50941b8f29df795af24dfe62","pid":2189,"status"
:"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57dd7937ba40dcb1fddb3b376ab9f017ca259ddc50941b8f29df795af24dfe62","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57dd7937ba40dcb1fddb3b376ab9f017ca259ddc50941b8f29df795af24dfe62/rootfs","created":"2022-11-01T23:08:50.775829242Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"57dd7937ba40dcb1fddb3b376ab9f017ca259ddc50941b8f29df795af24dfe62","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-mprfx_c323cc25-2fa6-4edf-b36c-03da66892a50","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-mprfx","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6d7033082e21d9b2838c0b0308241f7d2ab87fb855475429560ba43a20e96468","pid":1631,"status":"running","b
undle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d7033082e21d9b2838c0b0308241f7d2ab87fb855475429560ba43a20e96468","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d7033082e21d9b2838c0b0308241f7d2ab87fb855475429560ba43a20e96468/rootfs","created":"2022-11-01T23:08:30.715212813Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-scheduler:v1.24.4","io.kubernetes.cri.sandbox-id":"cd132afd164bb8c8913a8d2c33b026c75aefaf7ea8c8bf68b250d22bb988d9ad","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7ee8c6a3f397fe26e378812cf2775f1f109687544f2146de5fc5439dd7178994","pid":2246,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ee8c6a3f397fe26e378812cf2775f1f109687544f2146de5fc5439dd7178994","rootfs":"/run/containerd/io.containerd.run
time.v2.task/k8s.io/7ee8c6a3f397fe26e378812cf2775f1f109687544f2146de5fc5439dd7178994/rootfs","created":"2022-11-01T23:08:50.930366595Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-proxy:v1.24.4","io.kubernetes.cri.sandbox-id":"57dd7937ba40dcb1fddb3b376ab9f017ca259ddc50941b8f29df795af24dfe62","io.kubernetes.cri.sandbox-name":"kube-proxy-mprfx","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"84ae15daeb231c8b3480cb3004ff56b35ddb88d5a87537f294e2b4d4988f4931","pid":3276,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/84ae15daeb231c8b3480cb3004ff56b35ddb88d5a87537f294e2b4d4988f4931","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/84ae15daeb231c8b3480cb3004ff56b35ddb88d5a87537f294e2b4d4988f4931/rootfs","created":"2022-11-01T23:09:28.020513803Z","annotations":{"io.kubernetes.cri.container-type":"sandbox",
"io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"84ae15daeb231c8b3480cb3004ff56b35ddb88d5a87537f294e2b4d4988f4931","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-230809_bfce36eaaffbf2f7db1c9f4256edcaf8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8da6f9cb4aac1125e616bcdbca433f8589dcb2383fc29268e12b959fde822f45","pid":2566,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8da6f9cb4aac1125e616bcdbca433f8589dcb2383fc29268e12b959fde822f45","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8da6f9cb4aac1125e616bcdbca433f8589dcb2383fc29268e12b959fde822f45/rootfs","created":"2022-11-01T23:08:58.223128026Z","annotations":{"io.kubernetes.cri.conta
iner-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"8da6f9cb4aac1125e616bcdbca433f8589dcb2383fc29268e12b959fde822f45","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-r4qft_93ea1e43-1509-4751-a91c-ee8a9f43f870","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-r4qft","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"969e90316f417fb98e838e2d813889c10fd1cd3db31eb96466d472d9aa7f4ad1","pid":3285,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/969e90316f417fb98e838e2d813889c10fd1cd3db31eb96466d472d9aa7f4ad1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/969e90316f417fb98e838e2d813889c10fd1cd3db31eb96466d472d9aa7f4ad1/rootfs","created":"2022-11-01T23:09:28.02269692Z","annotations":{"io.kubernet
es.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"969e90316f417fb98e838e2d813889c10fd1cd3db31eb96466d472d9aa7f4ad1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-230809_9ccdbc12c48dbd243a9d0335dcf93bfa","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9d5aacbd5fc4113b2b3755390c8f08ad8567a97f923f72afc8223995c3d28463","pid":3536,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d5aacbd5fc4113b2b3755390c8f08ad8567a97f923f72afc8223995c3d28463","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d5aacbd5fc4113b2b3755390c8f08ad8567a97f923f72afc8223995c3d28463/rootfs","created":"2022-11-01T23:09:29.
630532491Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"9d5aacbd5fc4113b2b3755390c8f08ad8567a97f923f72afc8223995c3d28463","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-230809_440b295b0419a8945c07a1ed44f1a55e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9ded41f184a166bebee6a50d64b0b367809e660f89c0b1ce2a62444c6e4b41be","pid":2426,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ded41f184a166bebee6a50d64b0b367809e660f89c0b1ce2a62444c6e4b41be","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ded41f184a166bebee6a50d64b0b367809e660f89c0b1ce2a62444c6e4b41be/rootfs","created":
"2022-11-01T23:08:54.212636774Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20221004-44d545d1","io.kubernetes.cri.sandbox-id":"25ce3a72b2a91bc7814fdb008d3d585014ccb8aa2efc778dbff383dc9a8389a1","io.kubernetes.cri.sandbox-name":"kindnet-55wll","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bdc61da28213743fdcac2ea67d153f508512cf73caf961823e84e509f9491bb8","pid":1503,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdc61da28213743fdcac2ea67d153f508512cf73caf961823e84e509f9491bb8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdc61da28213743fdcac2ea67d153f508512cf73caf961823e84e509f9491bb8/rootfs","created":"2022-11-01T23:08:30.4665045Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","
io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"bdc61da28213743fdcac2ea67d153f508512cf73caf961823e84e509f9491bb8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-230809_440b295b0419a8945c07a1ed44f1a55e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c191c8e76a1ed8619eb5b740fbc3256c5071e53940197f92775664452c7d0a05","pid":3584,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c191c8e76a1ed8619eb5b740fbc3256c5071e53940197f92775664452c7d0a05","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c191c8e76a1ed8619eb5b740fbc3256c5071e53940197f92775664452c7d0a05/rootfs","created":"2022-11-01T23:09:29.729675697Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.san
dbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"c191c8e76a1ed8619eb5b740fbc3256c5071e53940197f92775664452c7d0a05","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-r4qft_93ea1e43-1509-4751-a91c-ee8a9f43f870","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-r4qft","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cd132afd164bb8c8913a8d2c33b026c75aefaf7ea8c8bf68b250d22bb988d9ad","pid":1507,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd132afd164bb8c8913a8d2c33b026c75aefaf7ea8c8bf68b250d22bb988d9ad","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd132afd164bb8c8913a8d2c33b026c75aefaf7ea8c8bf68b250d22bb988d9ad/rootfs","created":"2022-11-01T23:08:30.46654145Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubern
etes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"cd132afd164bb8c8913a8d2c33b026c75aefaf7ea8c8bf68b250d22bb988d9ad","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-230809_bfce36eaaffbf2f7db1c9f4256edcaf8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cd172ea470fc40ad874a284744d5a1a13d57e995841c396c38486a6f33c6f8d6","pid":2623,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd172ea470fc40ad874a284744d5a1a13d57e995841c396c38486a6f33c6f8d6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd172ea470fc40ad874a284744d5a1a13d57e995841c396c38486a6f33c6f8d6/rootfs","created":"2022-11-01T23:08:58.356220401Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"c
ontainer","io.kubernetes.cri.image-name":"k8s.gcr.io/coredns/coredns:v1.8.6","io.kubernetes.cri.sandbox-id":"8da6f9cb4aac1125e616bcdbca433f8589dcb2383fc29268e12b959fde822f45","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-r4qft","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"da4bf7bca87139a71ef6f0a31575ff28a6859f2aed2ec16dcfa66747379cf74a","pid":1630,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/da4bf7bca87139a71ef6f0a31575ff28a6859f2aed2ec16dcfa66747379cf74a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/da4bf7bca87139a71ef6f0a31575ff28a6859f2aed2ec16dcfa66747379cf74a/rootfs","created":"2022-11-01T23:08:30.715566758Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-apiserver:v1.24.4","io.kubernetes.cri.sandbox-id":"bdc61da28213743fdcac2ea67d153f508512cf73caf961823e84e509f9491bb8","io.k
ubernetes.cri.sandbox-name":"kube-apiserver-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dab9fa14795a59ae797dda31ce5ba435a42d6527260c2ca0eecb6fce88c92b16","pid":1633,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dab9fa14795a59ae797dda31ce5ba435a42d6527260c2ca0eecb6fce88c92b16","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dab9fa14795a59ae797dda31ce5ba435a42d6527260c2ca0eecb6fce88c92b16/rootfs","created":"2022-11-01T23:08:30.71207489Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-controller-manager:v1.24.4","io.kubernetes.cri.sandbox-id":"e1a311b6963f69909afd3a608b2ed073814f7bea914310231e1b05f415d6c21f","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersi
on":"1.0.2-dev","id":"dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8","pid":3660,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8/rootfs","created":"2022-11-01T23:09:31.863802538Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/kube-proxy:v1.24.4","io.kubernetes.cri.sandbox-id":"f70b5d39758a4a5950abf9349ee41f7416ec61771493266fb81f2ee5959d9cce","io.kubernetes.cri.sandbox-name":"kube-proxy-mprfx","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dc88b2919fcdf18151bb12713a9a522567cac03cbc7678ae46060b28b236b0a7","pid":3466,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc88b2919fcdf18
151bb12713a9a522567cac03cbc7678ae46060b28b236b0a7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc88b2919fcdf18151bb12713a9a522567cac03cbc7678ae46060b28b236b0a7/rootfs","created":"2022-11-01T23:09:29.524514538Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"dc88b2919fcdf18151bb12713a9a522567cac03cbc7678ae46060b28b236b0a7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-230809_37b967577315f9064699b525aec41d0d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e1a311b6963f69909afd3a608b2ed073814f7bea914310231e1b05f415d6c21f","pid":1504,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1a311b6963f69
909afd3a608b2ed073814f7bea914310231e1b05f415d6c21f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1a311b6963f69909afd3a608b2ed073814f7bea914310231e1b05f415d6c21f/rootfs","created":"2022-11-01T23:08:30.466601473Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"e1a311b6963f69909afd3a608b2ed073814f7bea914310231e1b05f415d6c21f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-230809_9ccdbc12c48dbd243a9d0335dcf93bfa","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e573179385efbfc5589bf55166de007377f175e59fa595f7337fe85674e6e993","pid":1632,"status":"running","bundle":"/run/containerd/io.container
d.runtime.v2.task/k8s.io/e573179385efbfc5589bf55166de007377f175e59fa595f7337fe85674e6e993","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e573179385efbfc5589bf55166de007377f175e59fa595f7337fe85674e6e993/rootfs","created":"2022-11-01T23:08:30.715174165Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/etcd:3.5.3-0","io.kubernetes.cri.sandbox-id":"4be478d3f5ddfd8992e97b9386a7e5af4530f4d5a7613818be81fef579a8b424","io.kubernetes.cri.sandbox-name":"etcd-test-preload-230809","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ec92c9656bd03e2e09047e3835dd70599b2ad5c20ff57a524265b0003fa3f0aa","pid":3538,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec92c9656bd03e2e09047e3835dd70599b2ad5c20ff57a524265b0003fa3f0aa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec92c9656bd03e2e09047e3835dd70599b2ad5c20ff57a
524265b0003fa3f0aa/rootfs","created":"2022-11-01T23:09:29.63434432Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"ec92c9656bd03e2e09047e3835dd70599b2ad5c20ff57a524265b0003fa3f0aa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-55wll_18a63bc3-b29d-45a5-98a8-3f37cfef3c7b","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-55wll","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f4a2c60e58c4abb795848e5266efe8789701759ceb857cf86b460b949272bba5","pid":3546,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4a2c60e58c4abb795848e5266efe8789701759ceb857cf86b460b949272bba5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4a2c60e58c4abb795848e5266efe8789701759ceb857cf86b460
b949272bba5/rootfs","created":"2022-11-01T23:09:29.633496847Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f4a2c60e58c4abb795848e5266efe8789701759ceb857cf86b460b949272bba5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_2eb4b78f-b029-431c-a5b6-34253c21c6ae","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f70b5d39758a4a5950abf9349ee41f7416ec61771493266fb81f2ee5959d9cce","pid":3283,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f70b5d39758a4a5950abf9349ee41f7416ec61771493266fb81f2ee5959d9cce","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f70b5d39758a4a5950abf9349ee41f7416ec61771493266fb81f2ee5959d
9cce/rootfs","created":"2022-11-01T23:09:28.022341914Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f70b5d39758a4a5950abf9349ee41f7416ec61771493266fb81f2ee5959d9cce","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-mprfx_c323cc25-2fa6-4edf-b36c-03da66892a50","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-mprfx","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f7bafd00df35e5156ebcd0619ba12cc59780327713c7664056c4c615561e2ac1","pid":2565,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7bafd00df35e5156ebcd0619ba12cc59780327713c7664056c4c615561e2ac1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7bafd00df35e5156ebcd0619ba12cc59780327713c7664056c4c615561e2ac1/rootfs",
"created":"2022-11-01T23:08:58.221992861Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f7bafd00df35e5156ebcd0619ba12cc59780327713c7664056c4c615561e2ac1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_2eb4b78f-b029-431c-a5b6-34253c21c6ae","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"}]
I1101 23:09:40.736083 127145 cri.go:124] list returned 25 containers
I1101 23:09:40.736101 127145 cri.go:127] container: {ID:12f63aa1ca7d1ffe1d6116a2508352b1aa495e5ddea1b61f00d22cbd3da01cb5 Status:running}
I1101 23:09:40.736119 127145 cri.go:129] skipping 12f63aa1ca7d1ffe1d6116a2508352b1aa495e5ddea1b61f00d22cbd3da01cb5 - not in ps
I1101 23:09:40.736130 127145 cri.go:127] container: {ID:25ce3a72b2a91bc7814fdb008d3d585014ccb8aa2efc778dbff383dc9a8389a1 Status:running}
I1101 23:09:40.736144 127145 cri.go:129] skipping 25ce3a72b2a91bc7814fdb008d3d585014ccb8aa2efc778dbff383dc9a8389a1 - not in ps
I1101 23:09:40.736156 127145 cri.go:127] container: {ID:4be478d3f5ddfd8992e97b9386a7e5af4530f4d5a7613818be81fef579a8b424 Status:running}
I1101 23:09:40.736169 127145 cri.go:129] skipping 4be478d3f5ddfd8992e97b9386a7e5af4530f4d5a7613818be81fef579a8b424 - not in ps
I1101 23:09:40.736180 127145 cri.go:127] container: {ID:57dd7937ba40dcb1fddb3b376ab9f017ca259ddc50941b8f29df795af24dfe62 Status:running}
I1101 23:09:40.736192 127145 cri.go:129] skipping 57dd7937ba40dcb1fddb3b376ab9f017ca259ddc50941b8f29df795af24dfe62 - not in ps
I1101 23:09:40.736204 127145 cri.go:127] container: {ID:6d7033082e21d9b2838c0b0308241f7d2ab87fb855475429560ba43a20e96468 Status:running}
I1101 23:09:40.736221 127145 cri.go:129] skipping 6d7033082e21d9b2838c0b0308241f7d2ab87fb855475429560ba43a20e96468 - not in ps
I1101 23:09:40.736232 127145 cri.go:127] container: {ID:7ee8c6a3f397fe26e378812cf2775f1f109687544f2146de5fc5439dd7178994 Status:running}
I1101 23:09:40.736240 127145 cri.go:129] skipping 7ee8c6a3f397fe26e378812cf2775f1f109687544f2146de5fc5439dd7178994 - not in ps
I1101 23:09:40.736246 127145 cri.go:127] container: {ID:84ae15daeb231c8b3480cb3004ff56b35ddb88d5a87537f294e2b4d4988f4931 Status:running}
I1101 23:09:40.736255 127145 cri.go:129] skipping 84ae15daeb231c8b3480cb3004ff56b35ddb88d5a87537f294e2b4d4988f4931 - not in ps
I1101 23:09:40.736266 127145 cri.go:127] container: {ID:8da6f9cb4aac1125e616bcdbca433f8589dcb2383fc29268e12b959fde822f45 Status:running}
I1101 23:09:40.736278 127145 cri.go:129] skipping 8da6f9cb4aac1125e616bcdbca433f8589dcb2383fc29268e12b959fde822f45 - not in ps
I1101 23:09:40.736289 127145 cri.go:127] container: {ID:969e90316f417fb98e838e2d813889c10fd1cd3db31eb96466d472d9aa7f4ad1 Status:running}
I1101 23:09:40.736300 127145 cri.go:129] skipping 969e90316f417fb98e838e2d813889c10fd1cd3db31eb96466d472d9aa7f4ad1 - not in ps
I1101 23:09:40.736305 127145 cri.go:127] container: {ID:9d5aacbd5fc4113b2b3755390c8f08ad8567a97f923f72afc8223995c3d28463 Status:running}
I1101 23:09:40.736313 127145 cri.go:129] skipping 9d5aacbd5fc4113b2b3755390c8f08ad8567a97f923f72afc8223995c3d28463 - not in ps
I1101 23:09:40.736320 127145 cri.go:127] container: {ID:9ded41f184a166bebee6a50d64b0b367809e660f89c0b1ce2a62444c6e4b41be Status:running}
I1101 23:09:40.736333 127145 cri.go:129] skipping 9ded41f184a166bebee6a50d64b0b367809e660f89c0b1ce2a62444c6e4b41be - not in ps
I1101 23:09:40.736343 127145 cri.go:127] container: {ID:bdc61da28213743fdcac2ea67d153f508512cf73caf961823e84e509f9491bb8 Status:running}
I1101 23:09:40.736355 127145 cri.go:129] skipping bdc61da28213743fdcac2ea67d153f508512cf73caf961823e84e509f9491bb8 - not in ps
I1101 23:09:40.736366 127145 cri.go:127] container: {ID:c191c8e76a1ed8619eb5b740fbc3256c5071e53940197f92775664452c7d0a05 Status:running}
I1101 23:09:40.736378 127145 cri.go:129] skipping c191c8e76a1ed8619eb5b740fbc3256c5071e53940197f92775664452c7d0a05 - not in ps
I1101 23:09:40.736388 127145 cri.go:127] container: {ID:cd132afd164bb8c8913a8d2c33b026c75aefaf7ea8c8bf68b250d22bb988d9ad Status:running}
I1101 23:09:40.736397 127145 cri.go:129] skipping cd132afd164bb8c8913a8d2c33b026c75aefaf7ea8c8bf68b250d22bb988d9ad - not in ps
I1101 23:09:40.736411 127145 cri.go:127] container: {ID:cd172ea470fc40ad874a284744d5a1a13d57e995841c396c38486a6f33c6f8d6 Status:running}
I1101 23:09:40.736429 127145 cri.go:129] skipping cd172ea470fc40ad874a284744d5a1a13d57e995841c396c38486a6f33c6f8d6 - not in ps
I1101 23:09:40.736440 127145 cri.go:127] container: {ID:da4bf7bca87139a71ef6f0a31575ff28a6859f2aed2ec16dcfa66747379cf74a Status:running}
I1101 23:09:40.736458 127145 cri.go:129] skipping da4bf7bca87139a71ef6f0a31575ff28a6859f2aed2ec16dcfa66747379cf74a - not in ps
I1101 23:09:40.736470 127145 cri.go:127] container: {ID:dab9fa14795a59ae797dda31ce5ba435a42d6527260c2ca0eecb6fce88c92b16 Status:running}
I1101 23:09:40.736483 127145 cri.go:129] skipping dab9fa14795a59ae797dda31ce5ba435a42d6527260c2ca0eecb6fce88c92b16 - not in ps
I1101 23:09:40.736493 127145 cri.go:127] container: {ID:dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8 Status:running}
I1101 23:09:40.736502 127145 cri.go:133] skipping {dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8 running}: state = "running", want "paused"
I1101 23:09:40.736517 127145 cri.go:127] container: {ID:dc88b2919fcdf18151bb12713a9a522567cac03cbc7678ae46060b28b236b0a7 Status:running}
I1101 23:09:40.736530 127145 cri.go:129] skipping dc88b2919fcdf18151bb12713a9a522567cac03cbc7678ae46060b28b236b0a7 - not in ps
I1101 23:09:40.736541 127145 cri.go:127] container: {ID:e1a311b6963f69909afd3a608b2ed073814f7bea914310231e1b05f415d6c21f Status:running}
I1101 23:09:40.736553 127145 cri.go:129] skipping e1a311b6963f69909afd3a608b2ed073814f7bea914310231e1b05f415d6c21f - not in ps
I1101 23:09:40.736564 127145 cri.go:127] container: {ID:e573179385efbfc5589bf55166de007377f175e59fa595f7337fe85674e6e993 Status:running}
I1101 23:09:40.736576 127145 cri.go:129] skipping e573179385efbfc5589bf55166de007377f175e59fa595f7337fe85674e6e993 - not in ps
I1101 23:09:40.736586 127145 cri.go:127] container: {ID:ec92c9656bd03e2e09047e3835dd70599b2ad5c20ff57a524265b0003fa3f0aa Status:running}
I1101 23:09:40.736594 127145 cri.go:129] skipping ec92c9656bd03e2e09047e3835dd70599b2ad5c20ff57a524265b0003fa3f0aa - not in ps
I1101 23:09:40.736603 127145 cri.go:127] container: {ID:f4a2c60e58c4abb795848e5266efe8789701759ceb857cf86b460b949272bba5 Status:running}
I1101 23:09:40.736615 127145 cri.go:129] skipping f4a2c60e58c4abb795848e5266efe8789701759ceb857cf86b460b949272bba5 - not in ps
I1101 23:09:40.736625 127145 cri.go:127] container: {ID:f70b5d39758a4a5950abf9349ee41f7416ec61771493266fb81f2ee5959d9cce Status:running}
I1101 23:09:40.736636 127145 cri.go:129] skipping f70b5d39758a4a5950abf9349ee41f7416ec61771493266fb81f2ee5959d9cce - not in ps
I1101 23:09:40.736643 127145 cri.go:127] container: {ID:f7bafd00df35e5156ebcd0619ba12cc59780327713c7664056c4c615561e2ac1 Status:running}
I1101 23:09:40.736658 127145 cri.go:129] skipping f7bafd00df35e5156ebcd0619ba12cc59780327713c7664056c4c615561e2ac1 - not in ps
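The skip lines above are a two-stage filter: runc lists every task under the k8s.io root, but an ID survives only if it also appeared in the earlier crictl listing and its runc state matches the requested one ({State:paused} here, so every running container is dropped). In outline, with simplified stand-in types:

package main

import "fmt"

// runcContainer is a simplified stand-in for the JSON entries above.
type runcContainer struct {
    ID     string
    Status string
}

// filter keeps IDs that crictl also reported and whose state matches want.
func filter(fromPS map[string]bool, all []runcContainer, want string) []string {
    var ids []string
    for _, c := range all {
        if !fromPS[c.ID] {
            continue // "skipping <id> - not in ps"
        }
        if c.Status != want {
            continue // state = "running", want "paused"
        }
        ids = append(ids, c.ID)
    }
    return ids
}

func main() {
    ps := map[string]bool{"dc0b884d": true}
    all := []runcContainer{{"dc0b884d", "running"}, {"12f63aa1", "running"}}
    fmt.Println(filter(ps, all, "paused")) // prints [] - both are skipped
}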
I1101 23:09:40.736704 127145 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1101 23:09:40.745646 127145 kubeadm.go:411] found existing configuration files, will attempt cluster restart
I1101 23:09:40.745673 127145 kubeadm.go:627] restartCluster start
I1101 23:09:40.745722 127145 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1101 23:09:40.753726 127145 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1101 23:09:40.754368 127145 kubeconfig.go:92] found "test-preload-230809" server: "https://192.168.67.2:8443"
I1101 23:09:40.755237 127145 kapi.go:59] client config for test-preload-230809: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/client.crt", KeyFile:"/home/jenkins/minikube-integration/15232-6112/.minikube/profiles/test-preload-230809/client.key", CAFile:"/home/jenkins/minikube-integration/15232-6112/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1786820), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1101 23:09:40.755875 127145 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1101 23:09:40.763523 127145 kubeadm.go:594] needs reconfigure: configs differ:
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml 2022-11-01 23:08:26.955661256 +0000
+++ /var/tmp/minikube/kubeadm.yaml.new 2022-11-01 23:09:39.941360162 +0000
@@ -38,7 +38,7 @@
     dataDir: /var/lib/minikube/etcd
     extraArgs:
       proxy-refresh-interval: "70000"
-kubernetesVersion: v1.24.4
+kubernetesVersion: v1.24.6
 networking:
   dnsDomain: cluster.local
   podSubnet: "10.244.0.0/16"
-- /stdout --
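The reconfigure decision rests on diff's exit status: 0 means the freshly rendered kubeadm.yaml.new matches what the cluster was started with, 1 means they differ (here only kubernetesVersion changed) and the control plane must be regenerated. Roughly:

package main

import (
    "fmt"
    "os/exec"
)

// configsDiffer runs `diff -u old new`: exit 0 = identical, exit 1 = differ
// (the unified diff is what gets logged), anything else = a real error.
func configsDiffer(oldPath, newPath string) (bool, string, error) {
    out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
    if err == nil {
        return false, "", nil
    }
    if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
        return true, string(out), nil
    }
    return false, "", err
}

func main() {
    differ, d, err := configsDiffer("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    fmt.Println(differ, err)
    fmt.Print(d)
}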
I1101 23:09:40.763543 127145 kubeadm.go:1114] stopping kube-system containers ...
I1101 23:09:40.763556 127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I1101 23:09:40.763603 127145 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1101 23:09:40.843646 127145 cri.go:87] found id: "e6d03d027c4d1a3003df82cf9d4d3ecff8142ccbfb55e98924ae237bb4f1228c"
I1101 23:09:40.843681 127145 cri.go:87] found id: "514280d446c28719c4288d8933d306bc638eb138421419ac3e8c984443017720"
I1101 23:09:40.843693 127145 cri.go:87] found id: "afef2fbc253ad82a2dfd9afacb362da4a20ca08f1f9d6008377794501540f11a"
I1101 23:09:40.843703 127145 cri.go:87] found id: "dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8"
I1101 23:09:40.843711 127145 cri.go:87] found id: ""
I1101 23:09:40.843719 127145 cri.go:232] Stopping containers: [e6d03d027c4d1a3003df82cf9d4d3ecff8142ccbfb55e98924ae237bb4f1228c 514280d446c28719c4288d8933d306bc638eb138421419ac3e8c984443017720 afef2fbc253ad82a2dfd9afacb362da4a20ca08f1f9d6008377794501540f11a dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8]
I1101 23:09:40.843770 127145 ssh_runner.go:195] Run: which crictl
I1101 23:09:40.847856 127145 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop e6d03d027c4d1a3003df82cf9d4d3ecff8142ccbfb55e98924ae237bb4f1228c 514280d446c28719c4288d8933d306bc638eb138421419ac3e8c984443017720 afef2fbc253ad82a2dfd9afacb362da4a20ca08f1f9d6008377794501540f11a dc0b884d20890cca1819bc9d3812f17cf4474794a5e277bd959127330b7ec3b8
I1101 23:09:41.335259 127145 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I1101 23:09:41.402860 127145 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1101 23:09:41.410490 127145 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5639 Nov 1 23:08 /etc/kubernetes/admin.conf
-rw------- 1 root root 5656 Nov 1 23:08 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2015 Nov 1 23:08 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5600 Nov 1 23:08 /etc/kubernetes/scheduler.conf
I1101 23:09:41.410554 127145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1101 23:09:41.417229 127145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1101 23:09:41.423830 127145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1101 23:09:41.430364 127145 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I1101 23:09:41.430410 127145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1101 23:09:41.436788 127145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1101 23:09:41.442864 127145 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I1101 23:09:41.442915 127145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
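The grep/rm sequence above is a stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already mentions the expected control-plane endpoint, and a file where grep exits 1 is removed so the kubeadm kubeconfig phase below regenerates it. The same logic as a sketch:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // endpoint already present, keep the file
		}
		fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
		os.Remove(f)
	}
}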
I1101 23:09:41.448988 127145 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1101 23:09:41.455288 127145 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I1101 23:09:41.455307 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I1101 23:09:41.753172 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I1101 23:09:42.645331 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I1101 23:09:43.006957 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I1101 23:09:43.058116 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
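Rather than a full kubeadm init, the restart path replays the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the updated config, as the five invocations above show. A simplified driver for those phases, with the binary and config paths copied from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const cfg = "/var/tmp/minikube/kubeadm.yaml"
	const kubeadm = "/var/lib/minikube/binaries/v1.24.6/kubeadm"
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		cmd := exec.Command("sudo", append([]string{kubeadm}, args...)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %s failed: %v\n", args[2], err)
			os.Exit(1)
		}
	}
}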
I1101 23:09:43.137338 127145 api_server.go:51] waiting for apiserver process to appear ...
I1101 23:09:43.137438 127145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 23:09:43.218088 127145 api_server.go:71] duration metric: took 80.740751ms to wait for apiserver process to appear ...
I1101 23:09:43.218119 127145 api_server.go:87] waiting for apiserver healthz status ...
I1101 23:09:43.218133 127145 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I1101 23:09:43.223783 127145 api_server.go:278] https://192.168.67.2:8443/healthz returned 200:
ok
I1101 23:09:43.231489 127145 api_server.go:140] control plane version: v1.24.4
W1101 23:09:43.231532 127145 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
I1101 23:09:43.733092 127145 api_server.go:140] control plane version: v1.24.4
W1101 23:09:43.733125 127145 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
I1101 23:09:44.233705 127145 api_server.go:140] control plane version: v1.24.4
W1101 23:09:44.233731 127145 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
I1101 23:09:44.733150 127145 api_server.go:140] control plane version: v1.24.4
W1101 23:09:44.733179 127145 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
I1101 23:09:45.233717 127145 api_server.go:140] control plane version: v1.24.4
W1101 23:09:45.233749 127145 api_server.go:120] api server version match failed: controlPane = "v1.24.4", expected: "v1.24.6"
W1101 23:09:45.732040 127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1101 23:09:46.233010 127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1101 23:09:46.732501 127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1101 23:09:47.232636 127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1101 23:09:47.732455 127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1101 23:09:48.232934 127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1101 23:09:48.732964 127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
W1101 23:09:49.232994 127145 api_server.go:120] api server version match failed: server version: Get "https://192.168.67.2:8443/version": dial tcp 192.168.67.2:8443: connect: connection refused
I1101 23:09:52.022667 127145 api_server.go:140] control plane version: v1.24.6
I1101 23:09:52.022755 127145 api_server.go:130] duration metric: took 8.804626822s to wait for apiserver health ...
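The health wait above has two stages: /healthz must return 200, then /version is polled roughly every 500ms until the reported gitVersion matches the target v1.24.6, with connection-refused errors treated as transient while the apiserver pod is replaced. A rough sketch of the version poll; the real client trusts the cluster CA, so InsecureSkipVerify here exists only to keep the example self-contained:

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	const expected = "v1.24.6"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.67.2:8443/version")
		if err != nil {
			// e.g. "connect: connection refused" while the apiserver restarts
			fmt.Println("api server version match failed:", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		var v struct {
			GitVersion string `json:"gitVersion"`
		}
		json.NewDecoder(resp.Body).Decode(&v)
		resp.Body.Close()
		if v.GitVersion == expected {
			fmt.Println("control plane version:", v.GitVersion)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}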
I1101 23:09:52.022776 127145 cni.go:95] Creating CNI manager for ""
I1101 23:09:52.022793 127145 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I1101 23:09:52.025189 127145 out.go:177] * Configuring CNI (Container Networking Interface) ...
I1101 23:09:52.026860 127145 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I1101 23:09:52.033655 127145 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.6/kubectl ...
I1101 23:09:52.033680 127145 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I1101 23:09:52.223817 127145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
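Configuring CNI amounts to copying a rendered kindnet manifest to /var/tmp/minikube/cni.yaml and applying it with the version-matched kubectl, as above. A local sketch with the manifest body elided:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// The real manifest is rendered from minikube's kindnet template; elided here.
	manifest := []byte("# kindnet manifest elided\n")
	if err := os.WriteFile("/var/tmp/minikube/cni.yaml", manifest, 0o644); err != nil {
		panic(err)
	}
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.24.6/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}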
I1101 23:09:52.990696 127145 system_pods.go:43] waiting for kube-system pods to appear ...
I1101 23:09:52.997505 127145 system_pods.go:59] 8 kube-system pods found
I1101 23:09:52.997541 127145 system_pods.go:61] "coredns-6d4b75cb6d-r4qft" [93ea1e43-1509-4751-a91c-ee8a9f43f870] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1101 23:09:52.997551 127145 system_pods.go:61] "etcd-test-preload-230809" [af6823c1-4191-4b7b-b864-c8d4dc5b60b7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1101 23:09:52.997561 127145 system_pods.go:61] "kindnet-55wll" [18a63bc3-b29d-45a5-98a8-3f37cfef3c7b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I1101 23:09:52.997568 127145 system_pods.go:61] "kube-apiserver-test-preload-230809" [7c4baec2-c5b0-4a19-b41f-c54723a6cb9d] Pending
I1101 23:09:52.997578 127145 system_pods.go:61] "kube-controller-manager-test-preload-230809" [61a6d202-4552-4719-bfd5-7e9295cc25b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1101 23:09:52.997598 127145 system_pods.go:61] "kube-proxy-mprfx" [c323cc25-2fa6-4edf-b36c-03da66892a50] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1101 23:09:52.997611 127145 system_pods.go:61] "kube-scheduler-test-preload-230809" [ae2815cc-6736-4e49-b3c8-8abeaeeea1bd] Pending
I1101 23:09:52.997623 127145 system_pods.go:61] "storage-provisioner" [2eb4b78f-b029-431c-a5b6-34253c21c6ae] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1101 23:09:52.997635 127145 system_pods.go:74] duration metric: took 6.918381ms to wait for pod list to return data ...
I1101 23:09:52.997648 127145 node_conditions.go:102] verifying NodePressure condition ...
I1101 23:09:52.999970 127145 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I1101 23:09:53.000003 127145 node_conditions.go:123] node cpu capacity is 8
I1101 23:09:53.000015 127145 node_conditions.go:105] duration metric: took 2.358425ms to run NodePressure ...
I1101 23:09:53.000039 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I1101 23:09:53.234562 127145 kubeadm.go:763] waiting for restarted kubelet to initialise ...
I1101 23:09:53.237990 127145 kubeadm.go:778] kubelet initialised
I1101 23:09:53.238014 127145 kubeadm.go:779] duration metric: took 3.422089ms waiting for restarted kubelet to initialise ...
I1101 23:09:53.238022 127145 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1101 23:09:53.242529 127145 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-r4qft" in "kube-system" namespace to be "Ready" ...
I1101 23:09:55.254763 127145 pod_ready.go:102] pod "coredns-6d4b75cb6d-r4qft" in "kube-system" namespace has status "Ready":"False"
I1101 23:09:57.753901 127145 pod_ready.go:102] pod "coredns-6d4b75cb6d-r4qft" in "kube-system" namespace has status "Ready":"False"
I1101 23:09:59.754592 127145 pod_ready.go:92] pod "coredns-6d4b75cb6d-r4qft" in "kube-system" namespace has status "Ready":"True"
I1101 23:09:59.754626 127145 pod_ready.go:81] duration metric: took 6.512068179s waiting for pod "coredns-6d4b75cb6d-r4qft" in "kube-system" namespace to be "Ready" ...
I1101 23:09:59.754639 127145 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-230809" in "kube-system" namespace to be "Ready" ...
I1101 23:10:01.766834 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:04.264410 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:06.764726 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:09.264989 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:11.265205 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:13.763952 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:15.764164 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:17.764732 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:19.764997 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:22.264415 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:24.764449 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:27.264094 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:29.264748 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:31.764914 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:34.264280 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:36.264981 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:38.765185 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:41.265088 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:43.764636 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:46.265617 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:48.765111 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:51.264670 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:53.264916 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:55.264961 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:57.265052 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:10:59.764621 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:02.264841 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:04.264932 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:06.764687 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:09.265413 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:11.764819 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:13.765227 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:16.264738 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:18.265154 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:20.764475 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:22.765142 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:25.264490 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:27.265182 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:29.764395 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:31.764559 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:33.765136 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:36.264759 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:38.265094 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:40.764500 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:43.264843 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:45.765686 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:48.264476 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:50.764617 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:52.764701 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:54.765115 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:56.765316 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:11:59.264346 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:01.264372 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:03.264546 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:05.264956 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:07.764171 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:09.764397 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:11.765095 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:14.264701 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:16.265440 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:18.764276 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:20.764938 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:23.265330 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:25.764449 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:27.764895 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:30.265410 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:32.767373 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:35.265081 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:37.765063 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:40.265350 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:42.765270 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:45.265267 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:47.765107 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:50.265576 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:52.766477 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:55.264930 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:12:57.765153 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:00.264148 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:02.264609 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:04.265195 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:06.764397 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:08.765157 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:11.264073 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:13.264819 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:15.763483 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:17.763881 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:19.765072 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:21.765183 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:24.265085 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:26.764936 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:29.264520 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:31.265339 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:33.764859 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:36.265232 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:38.764507 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:40.764906 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:42.764962 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:44.765506 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:47.264257 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:49.265001 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:51.765200 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:54.264162 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:56.264864 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:58.764509 127145 pod_ready.go:102] pod "etcd-test-preload-230809" in "kube-system" namespace has status "Ready":"False"
I1101 23:13:59.759267 127145 pod_ready.go:81] duration metric: took 4m0.004604004s waiting for pod "etcd-test-preload-230809" in "kube-system" namespace to be "Ready" ...
E1101 23:13:59.759292 127145 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "etcd-test-preload-230809" in "kube-system" namespace to be "Ready" (will not retry!)
I1101 23:13:59.759322 127145 pod_ready.go:38] duration metric: took 4m6.521288423s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1101 23:13:59.759354 127145 kubeadm.go:631] restartCluster took 4m19.013673069s
W1101 23:13:59.759521 127145 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
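Each pod_ready line above is one tick of a poll on the pod's Ready condition, and etcd-test-preload-230809 exhausted its 4m0s budget without ever turning Ready, which is why the cluster is now reset instead of restarted. One way to reproduce that poll with kubectl's jsonpath output (paths from the log; minikube itself queries the API directly):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podReady(pod string) (bool, error) {
	out, err := exec.Command(
		"sudo", "/var/lib/minikube/binaries/v1.24.6/kubectl",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-n", "kube-system", "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`,
	).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		if ready, err := podReady("etcd-test-preload-230809"); err == nil && ready {
			fmt.Println(`pod has status "Ready":"True"`)
			return
		}
		fmt.Println(`pod has status "Ready":"False"`)
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}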
I1101 23:13:59.759560 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1101 23:14:01.430467 127145 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.670884606s)
I1101 23:14:01.430528 127145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1101 23:14:01.440216 127145 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1101 23:14:01.447136 127145 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I1101 23:14:01.447183 127145 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1101 23:14:01.453660 127145 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1101 23:14:01.453703 127145 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1101 23:14:01.491674 127145 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
I1101 23:14:01.491746 127145 kubeadm.go:317] [preflight] Running pre-flight checks
I1101 23:14:01.518815 127145 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
I1101 23:14:01.518891  127145 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
I1101 23:14:01.518924  127145 kubeadm.go:317] OS: Linux
I1101 23:14:01.519001  127145 kubeadm.go:317] CGROUPS_CPU: enabled
I1101 23:14:01.519091  127145 kubeadm.go:317] CGROUPS_CPUACCT: enabled
I1101 23:14:01.519162  127145 kubeadm.go:317] CGROUPS_CPUSET: enabled
I1101 23:14:01.519232  127145 kubeadm.go:317] CGROUPS_DEVICES: enabled
I1101 23:14:01.519307  127145 kubeadm.go:317] CGROUPS_FREEZER: enabled
I1101 23:14:01.519381  127145 kubeadm.go:317] CGROUPS_MEMORY: enabled
I1101 23:14:01.519458  127145 kubeadm.go:317] CGROUPS_PIDS: enabled
I1101 23:14:01.519533  127145 kubeadm.go:317] CGROUPS_HUGETLB: enabled
I1101 23:14:01.519591  127145 kubeadm.go:317] CGROUPS_BLKIO: enabled
I1101 23:14:01.591526 127145 kubeadm.go:317] W1101 23:14:01.486750 6857 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
I1101 23:14:01.591829 127145 kubeadm.go:317] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
I1101 23:14:01.591936 127145 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1101 23:14:01.592005 127145 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
I1101 23:14:01.592050 127145 kubeadm.go:317] [ERROR Port-2379]: Port 2379 is in use
I1101 23:14:01.592096 127145 kubeadm.go:317] [ERROR Port-2380]: Port 2380 is in use
I1101 23:14:01.592196 127145 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
I1101 23:14:01.592269 127145 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
W1101 23:14:01.592495 127145 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1021-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W1101 23:14:01.486750 6857 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
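The two fatal preflight errors are etcd's client port (2379) and peer port (2380) still being bound, because the etcd container from the aborted restart survived the reset. kubeadm's port preflight amounts to attempting to bind the port; a minimal probe in the same spirit:

package main

import (
	"fmt"
	"net"
)

func portInUse(port int) bool {
	// If we can bind the port ourselves, nothing else holds it.
	ln, err := net.Listen("tcp", fmt.Sprintf(":%d", port))
	if err != nil {
		return true
	}
	ln.Close()
	return false
}

func main() {
	for _, p := range []int{2379, 2380} {
		if portInUse(p) {
			fmt.Printf("[ERROR Port-%d]: Port %d is in use\n", p, p)
		}
	}
}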
I1101 23:14:01.592536 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1101 23:14:01.906961 127145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1101 23:14:01.916443 127145 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I1101 23:14:01.916504 127145 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1101 23:14:01.923130 127145 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1101 23:14:01.923166 127145 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1101 23:14:01.960923 127145 kubeadm.go:317] [init] Using Kubernetes version: v1.24.6
I1101 23:14:01.960981 127145 kubeadm.go:317] [preflight] Running pre-flight checks
I1101 23:14:01.987846 127145 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
I1101 23:14:01.987918  127145 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
I1101 23:14:01.987961  127145 kubeadm.go:317] OS: Linux
I1101 23:14:01.988021  127145 kubeadm.go:317] CGROUPS_CPU: enabled
I1101 23:14:01.988074  127145 kubeadm.go:317] CGROUPS_CPUACCT: enabled
I1101 23:14:01.988115  127145 kubeadm.go:317] CGROUPS_CPUSET: enabled
I1101 23:14:01.988186  127145 kubeadm.go:317] CGROUPS_DEVICES: enabled
I1101 23:14:01.988241  127145 kubeadm.go:317] CGROUPS_FREEZER: enabled
I1101 23:14:01.988304  127145 kubeadm.go:317] CGROUPS_MEMORY: enabled
I1101 23:14:01.988371  127145 kubeadm.go:317] CGROUPS_PIDS: enabled
I1101 23:14:01.988430  127145 kubeadm.go:317] CGROUPS_HUGETLB: enabled
I1101 23:14:01.988521  127145 kubeadm.go:317] CGROUPS_BLKIO: enabled
I1101 23:14:02.056387 127145 kubeadm.go:317] W1101 23:14:01.956215 7126 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
I1101 23:14:02.056585 127145 kubeadm.go:317] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
I1101 23:14:02.056677 127145 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1101 23:14:02.056739 127145 kubeadm.go:317] error execution phase preflight: [preflight] Some fatal errors occurred:
I1101 23:14:02.056775 127145 kubeadm.go:317] [ERROR Port-2379]: Port 2379 is in use
I1101 23:14:02.056811 127145 kubeadm.go:317] [ERROR Port-2380]: Port 2380 is in use
I1101 23:14:02.056904 127145 kubeadm.go:317] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
I1101 23:14:02.057006 127145 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
I1101 23:14:02.057085 127145 kubeadm.go:398] StartCluster complete in 4m21.498557806s
I1101 23:14:02.057126 127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1101 23:14:02.057181 127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1101 23:14:02.079779 127145 cri.go:87] found id: ""
I1101 23:14:02.079803 127145 logs.go:274] 0 containers: []
W1101 23:14:02.079811 127145 logs.go:276] No container was found matching "kube-apiserver"
I1101 23:14:02.079820 127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1101 23:14:02.079867 127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1101 23:14:02.102132 127145 cri.go:87] found id: ""
I1101 23:14:02.103963 127145 logs.go:274] 0 containers: []
W1101 23:14:02.103974 127145 logs.go:276] No container was found matching "etcd"
I1101 23:14:02.103987 127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1101 23:14:02.104037 127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1101 23:14:02.127250 127145 cri.go:87] found id: ""
I1101 23:14:02.127271 127145 logs.go:274] 0 containers: []
W1101 23:14:02.127278 127145 logs.go:276] No container was found matching "coredns"
I1101 23:14:02.127282 127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1101 23:14:02.127329 127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1101 23:14:02.149764 127145 cri.go:87] found id: ""
I1101 23:14:02.149785 127145 logs.go:274] 0 containers: []
W1101 23:14:02.149792 127145 logs.go:276] No container was found matching "kube-scheduler"
I1101 23:14:02.149799 127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1101 23:14:02.149851 127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1101 23:14:02.172459 127145 cri.go:87] found id: ""
I1101 23:14:02.172482 127145 logs.go:274] 0 containers: []
W1101 23:14:02.172488 127145 logs.go:276] No container was found matching "kube-proxy"
I1101 23:14:02.172493 127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1101 23:14:02.172532 127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1101 23:14:02.194215 127145 cri.go:87] found id: ""
I1101 23:14:02.194240 127145 logs.go:274] 0 containers: []
W1101 23:14:02.194246 127145 logs.go:276] No container was found matching "kubernetes-dashboard"
I1101 23:14:02.194252 127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1101 23:14:02.194295 127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1101 23:14:02.215924 127145 cri.go:87] found id: ""
I1101 23:14:02.215945 127145 logs.go:274] 0 containers: []
W1101 23:14:02.215951 127145 logs.go:276] No container was found matching "storage-provisioner"
I1101 23:14:02.215961 127145 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1101 23:14:02.216007 127145 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1101 23:14:02.237525 127145 cri.go:87] found id: ""
I1101 23:14:02.237548 127145 logs.go:274] 0 containers: []
W1101 23:14:02.237556 127145 logs.go:276] No container was found matching "kube-controller-manager"
I1101 23:14:02.237568 127145 logs.go:123] Gathering logs for kubelet ...
I1101 23:14:02.237581 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1101 23:14:02.300252 127145 logs.go:138] Found kubelet problem: Nov 01 23:09:52 test-preload-230809 kubelet[4572]: W1101 23:09:52.121441 4572 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
W1101 23:14:02.300464 127145 logs.go:138] Found kubelet problem: Nov 01 23:09:52 test-preload-230809 kubelet[4572]: E1101 23:09:52.121486 4572 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
W1101 23:14:02.300712 127145 logs.go:138] Found kubelet problem: Nov 01 23:09:52 test-preload-230809 kubelet[4572]: W1101 23:09:52.134778 4572 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
W1101 23:14:02.300934 127145 logs.go:138] Found kubelet problem: Nov 01 23:09:52 test-preload-230809 kubelet[4572]: E1101 23:09:52.134833 4572 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
W1101 23:14:02.301104 127145 logs.go:138] Found kubelet problem: Nov 01 23:09:52 test-preload-230809 kubelet[4572]: W1101 23:09:52.135478 4572 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
W1101 23:14:02.301295 127145 logs.go:138] Found kubelet problem: Nov 01 23:09:52 test-preload-230809 kubelet[4572]: E1101 23:09:52.135507 4572 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
W1101 23:14:02.302724 127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.043911 4572 projected.go:192] Error preparing data for projected volume kube-api-access-mxxnh for pod kube-system/kindnet-55wll: [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
W1101 23:14:02.303262 127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.044015 4572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18a63bc3-b29d-45a5-98a8-3f37cfef3c7b-kube-api-access-mxxnh podName:18a63bc3-b29d-45a5-98a8-3f37cfef3c7b nodeName:}" failed. No retries permitted until 2022-11-01 23:09:55.043985609 +0000 UTC m=+12.036634856 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-mxxnh" (UniqueName: "kubernetes.io/projected/18a63bc3-b29d-45a5-98a8-3f37cfef3c7b-kube-api-access-mxxnh") pod "kindnet-55wll" (UID: "18a63bc3-b29d-45a5-98a8-3f37cfef3c7b") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
W1101 23:14:02.303497 127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.044035 4572 projected.go:192] Error preparing data for projected volume kube-api-access-k9mj5 for pod kube-system/kube-proxy-mprfx: [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
W1101 23:14:02.303931 127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.044128 4572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c323cc25-2fa6-4edf-b36c-03da66892a50-kube-api-access-k9mj5 podName:c323cc25-2fa6-4edf-b36c-03da66892a50 nodeName:}" failed. No retries permitted until 2022-11-01 23:09:55.04409823 +0000 UTC m=+12.036747482 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-k9mj5" (UniqueName: "kubernetes.io/projected/c323cc25-2fa6-4edf-b36c-03da66892a50-kube-api-access-k9mj5") pod "kube-proxy-mprfx" (UID: "c323cc25-2fa6-4edf-b36c-03da66892a50") : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
W1101 23:14:02.304244 127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.122285 4572 projected.go:192] Error preparing data for projected volume kube-api-access-wfqx2 for pod kube-system/storage-provisioner: [failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
W1101 23:14:02.304666 127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.122380 4572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2eb4b78f-b029-431c-a5b6-34253c21c6ae-kube-api-access-wfqx2 podName:2eb4b78f-b029-431c-a5b6-34253c21c6ae nodeName:}" failed. No retries permitted until 2022-11-01 23:09:55.122350449 +0000 UTC m=+12.114999680 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wfqx2" (UniqueName: "kubernetes.io/projected/2eb4b78f-b029-431c-a5b6-34253c21c6ae-kube-api-access-wfqx2") pod "storage-provisioner" (UID: "2eb4b78f-b029-431c-a5b6-34253c21c6ae") : [failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
W1101 23:14:02.305088 127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.136572 4572 projected.go:192] Error preparing data for projected volume kube-api-access-2k56t for pod kube-system/coredns-6d4b75cb6d-r4qft: [failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
W1101 23:14:02.305507 127145 logs.go:138] Found kubelet problem: Nov 01 23:09:53 test-preload-230809 kubelet[4572]: E1101 23:09:53.136676 4572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/93ea1e43-1509-4751-a91c-ee8a9f43f870-kube-api-access-2k56t podName:93ea1e43-1509-4751-a91c-ee8a9f43f870 nodeName:}" failed. No retries permitted until 2022-11-01 23:09:54.136638953 +0000 UTC m=+11.129288201 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2k56t" (UniqueName: "kubernetes.io/projected/93ea1e43-1509-4751-a91c-ee8a9f43f870-kube-api-access-2k56t") pod "coredns-6d4b75cb6d-r4qft" (UID: "93ea1e43-1509-4751-a91c-ee8a9f43f870") : [failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:test-preload-230809" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object, failed to sync configmap cache: timed out waiting for the condition]
I1101 23:14:02.328158 127145 logs.go:123] Gathering logs for dmesg ...
I1101 23:14:02.328187 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1101 23:14:02.342140 127145 logs.go:123] Gathering logs for describe nodes ...
I1101 23:14:02.342171 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1101 23:14:02.477646 127145 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I1101 23:14:02.477672 127145 logs.go:123] Gathering logs for containerd ...
I1101 23:14:02.477684 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1101 23:14:02.532567 127145 logs.go:123] Gathering logs for container status ...
I1101 23:14:02.532606 127145 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W1101 23:14:02.557929 127145 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1021-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W1101 23:14:01.956215 7126 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W1101 23:14:02.557965 127145 out.go:239] *
W1101 23:14:02.558080 127145 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1021-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W1101 23:14:01.956215 7126 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W1101 23:14:02.558101 127145 out.go:239] *
W1101 23:14:02.558873 127145 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1101 23:14:02.561381 127145 out.go:177] X Problems detected in kubelet:
I1101 23:14:02.562697 127145 out.go:177] Nov 01 23:09:52 test-preload-230809 kubelet[4572]: W1101 23:09:52.121441 4572 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
I1101 23:14:02.564125 127145 out.go:177] Nov 01 23:09:52 test-preload-230809 kubelet[4572]: E1101 23:09:52.121486 4572 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
I1101 23:14:02.565464 127145 out.go:177] Nov 01 23:09:52 test-preload-230809 kubelet[4572]: W1101 23:09:52.134778 4572 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:test-preload-230809" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'test-preload-230809' and this object
I1101 23:14:02.568183 127145 out.go:177]
W1101 23:14:02.569498 127145 out.go:239] X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.6
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1021-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
stderr:
W1101 23:14:01.956215 7126 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
W1101 23:14:02.569611 127145 out.go:239] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
W1101 23:14:02.569659 127145 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/5484
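The two fatal preflight errors are etcd's client and peer ports (2379/2380), still held by the crash-looping etcd container from the earlier start of this profile (see the kubelet log further down). A minimal diagnostic sketch for confirming which process holds them inside the node, assuming ss or lsof is present in the node image; note that the suggestion's lsof -p<port> flag actually takes a PID, so filtering by port is done with -i instead:

    # Show the listener(s) on etcd's ports inside the minikube node:
    out/minikube-linux-amd64 ssh -p test-preload-230809 -- sudo ss -ltnp '( sport = :2379 or sport = :2380 )'
    # Equivalent with lsof, if it is installed in the image:
    out/minikube-linux-amd64 ssh -p test-preload-230809 -- sudo lsof -i :2379 -i :2380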
I1101 23:14:02.571762 127145 out.go:177]
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
*
* ==> containerd <==
* -- Logs begin at Tue 2022-11-01 23:08:12 UTC, end at Tue 2022-11-01 23:14:03 UTC. --
Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.719525032Z" level=error msg="StopPodSandbox for \"\\\"Using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"\\\"Using\": not found"
Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.734560810Z" level=info msg="StopPodSandbox for \"this\""
Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.734618346Z" level=error msg="StopPodSandbox for \"this\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"this\": not found"
Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.749253336Z" level=info msg="StopPodSandbox for \"endpoint\""
Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.749297038Z" level=error msg="StopPodSandbox for \"endpoint\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint\": not found"
Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.763665423Z" level=info msg="StopPodSandbox for \"is\""
Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.763703602Z" level=error msg="StopPodSandbox for \"is\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"is\": not found"
Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.778694852Z" level=info msg="StopPodSandbox for \"deprecated,\""
Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.778747881Z" level=error msg="StopPodSandbox for \"deprecated,\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"deprecated,\": not found"
Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.794360465Z" level=info msg="StopPodSandbox for \"please\""
Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.794405615Z" level=error msg="StopPodSandbox for \"please\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"please\": not found"
Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.810007645Z" level=info msg="StopPodSandbox for \"consider\""
Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.810070144Z" level=error msg="StopPodSandbox for \"consider\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"consider\": not found"
Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.825361791Z" level=info msg="StopPodSandbox for \"using\""
Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.825415140Z" level=error msg="StopPodSandbox for \"using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"using\": not found"
Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.840372611Z" level=info msg="StopPodSandbox for \"full\""
Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.840414789Z" level=error msg="StopPodSandbox for \"full\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"full\": not found"
Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.856508587Z" level=info msg="StopPodSandbox for \"URL\""
Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.856554561Z" level=error msg="StopPodSandbox for \"URL\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL\": not found"
Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.870954124Z" level=info msg="StopPodSandbox for \"format\\\"\""
Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.871012126Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.886230057Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.886270252Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.902044244Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
Nov 01 23:14:01 test-preload-230809 containerd[3002]: time="2022-11-01T23:14:01.902102673Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
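None of the sandbox "IDs" in the StopPodSandbox calls above exist; they are apparently the whitespace-split tokens of a CRI deprecation warning ("Using this endpoint is deprecated, please consider using full URL format" endpoint="/run/containerd/containerd.sock" URL="unix:///run/containerd/containerd.sock") replayed to containerd as if each word were a sandbox ID, hence the NotFound errors. A sketch for listing the sandboxes and containers that really exist on the node (crictl ships in the node image, as the test itself uses it):

    out/minikube-linux-amd64 ssh -p test-preload-230809 -- sudo crictl pods
    out/minikube-linux-amd64 ssh -p test-preload-230809 -- sudo crictl ps -a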
*
* ==> describe nodes <==
*
* ==> dmesg <==
* [ +0.007357] FS-Cache: O-key=[8] '8aa00f0200000000'
[ +0.004958] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
[ +0.006585] FS-Cache: N-cookie d=00000000f5a48031{9p.inode} n=00000000f831f3cd
[ +0.008739] FS-Cache: N-key=[8] '8aa00f0200000000'
[ +0.461145] FS-Cache: Duplicate cookie detected
[ +0.004704] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
[ +0.006773] FS-Cache: O-cookie d=00000000f5a48031{9p.inode} n=00000000be3fb01f
[ +0.007375] FS-Cache: O-key=[8] '9ba00f0200000000'
[ +0.004971] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
[ +0.007981] FS-Cache: N-cookie d=00000000f5a48031{9p.inode} n=0000000004318e07
[ +0.008713] FS-Cache: N-key=[8] '9ba00f0200000000'
[ +34.615849] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[Nov 1 23:06] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fc1228290d01
[ +0.000006] ll header: 00000000: 02 42 e4 a2 b0 46 02 42 c0 a8 3a 02 08 00
[ +1.007001] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fc1228290d01
[ +0.000006] ll header: 00000000: 02 42 e4 a2 b0 46 02 42 c0 a8 3a 02 08 00
[ +2.015846] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fc1228290d01
[ +0.000006] ll header: 00000000: 02 42 e4 a2 b0 46 02 42 c0 a8 3a 02 08 00
[ +4.063673] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fc1228290d01
[ +0.000008] ll header: 00000000: 02 42 e4 a2 b0 46 02 42 c0 a8 3a 02 08 00
[ +8.191354] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fc1228290d01
[ +0.000006] ll header: 00000000: 02 42 e4 a2 b0 46 02 42 c0 a8 3a 02 08 00
[Nov 1 23:10] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000405] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.011308] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
*
* ==> kernel <==
* 23:14:03 up 56 min, 0 users, load average: 0.14, 0.48, 0.59
Linux test-preload-230809 5.15.0-1021-gcp #28~20.04.1-Ubuntu SMP Mon Oct 17 11:37:54 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.5 LTS"
*
* ==> kubelet <==
* -- Logs begin at Tue 2022-11-01 23:08:12 UTC, end at Tue 2022-11-01 23:14:03 UTC. --
Nov 01 23:12:27 test-preload-230809 kubelet[4572]: I1101 23:12:27.245759 4572 scope.go:110] "RemoveContainer" containerID="2e39d69d84d2797ec76a606fe198ee1e0feaff253ef2151bf0438a17805b7955"
Nov 01 23:12:27 test-preload-230809 kubelet[4572]: E1101 23:12:27.246301 4572 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=etcd pod=etcd-test-preload-230809_kube-system(37b967577315f9064699b525aec41d0d)\"" pod="kube-system/etcd-test-preload-230809" podUID=37b967577315f9064699b525aec41d0d
Nov 01 23:12:42 test-preload-230809 kubelet[4572]: I1101 23:12:42.245149 4572 scope.go:110] "RemoveContainer" containerID="2e39d69d84d2797ec76a606fe198ee1e0feaff253ef2151bf0438a17805b7955"
Nov 01 23:12:42 test-preload-230809 kubelet[4572]: E1101 23:12:42.245496 4572 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=etcd pod=etcd-test-preload-230809_kube-system(37b967577315f9064699b525aec41d0d)\"" pod="kube-system/etcd-test-preload-230809" podUID=37b967577315f9064699b525aec41d0d
Nov 01 23:12:54 test-preload-230809 kubelet[4572]: I1101 23:12:54.245511 4572 scope.go:110] "RemoveContainer" containerID="2e39d69d84d2797ec76a606fe198ee1e0feaff253ef2151bf0438a17805b7955"
Nov 01 23:12:54 test-preload-230809 kubelet[4572]: I1101 23:12:54.741370 4572 scope.go:110] "RemoveContainer" containerID="2e39d69d84d2797ec76a606fe198ee1e0feaff253ef2151bf0438a17805b7955"
Nov 01 23:12:54 test-preload-230809 kubelet[4572]: I1101 23:12:54.741691 4572 scope.go:110] "RemoveContainer" containerID="98fd6e3f6f2000eed38807a1e0c5a46aa809b92a0a8341daafb19de01eda1eb8"
Nov 01 23:12:54 test-preload-230809 kubelet[4572]: E1101 23:12:54.742118 4572 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-230809_kube-system(37b967577315f9064699b525aec41d0d)\"" pod="kube-system/etcd-test-preload-230809" podUID=37b967577315f9064699b525aec41d0d
Nov 01 23:12:59 test-preload-230809 kubelet[4572]: I1101 23:12:59.347713 4572 scope.go:110] "RemoveContainer" containerID="98fd6e3f6f2000eed38807a1e0c5a46aa809b92a0a8341daafb19de01eda1eb8"
Nov 01 23:12:59 test-preload-230809 kubelet[4572]: E1101 23:12:59.348045 4572 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-230809_kube-system(37b967577315f9064699b525aec41d0d)\"" pod="kube-system/etcd-test-preload-230809" podUID=37b967577315f9064699b525aec41d0d
Nov 01 23:13:01 test-preload-230809 kubelet[4572]: I1101 23:13:01.926828 4572 scope.go:110] "RemoveContainer" containerID="98fd6e3f6f2000eed38807a1e0c5a46aa809b92a0a8341daafb19de01eda1eb8"
Nov 01 23:13:01 test-preload-230809 kubelet[4572]: E1101 23:13:01.927359 4572 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-230809_kube-system(37b967577315f9064699b525aec41d0d)\"" pod="kube-system/etcd-test-preload-230809" podUID=37b967577315f9064699b525aec41d0d
Nov 01 23:13:02 test-preload-230809 kubelet[4572]: I1101 23:13:02.759079 4572 scope.go:110] "RemoveContainer" containerID="98fd6e3f6f2000eed38807a1e0c5a46aa809b92a0a8341daafb19de01eda1eb8"
Nov 01 23:13:02 test-preload-230809 kubelet[4572]: E1101 23:13:02.759443 4572 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-230809_kube-system(37b967577315f9064699b525aec41d0d)\"" pod="kube-system/etcd-test-preload-230809" podUID=37b967577315f9064699b525aec41d0d
Nov 01 23:13:15 test-preload-230809 kubelet[4572]: I1101 23:13:15.245201 4572 scope.go:110] "RemoveContainer" containerID="98fd6e3f6f2000eed38807a1e0c5a46aa809b92a0a8341daafb19de01eda1eb8"
Nov 01 23:13:15 test-preload-230809 kubelet[4572]: E1101 23:13:15.245539 4572 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-230809_kube-system(37b967577315f9064699b525aec41d0d)\"" pod="kube-system/etcd-test-preload-230809" podUID=37b967577315f9064699b525aec41d0d
Nov 01 23:13:28 test-preload-230809 kubelet[4572]: I1101 23:13:28.244979 4572 scope.go:110] "RemoveContainer" containerID="98fd6e3f6f2000eed38807a1e0c5a46aa809b92a0a8341daafb19de01eda1eb8"
Nov 01 23:13:28 test-preload-230809 kubelet[4572]: E1101 23:13:28.245543 4572 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-230809_kube-system(37b967577315f9064699b525aec41d0d)\"" pod="kube-system/etcd-test-preload-230809" podUID=37b967577315f9064699b525aec41d0d
Nov 01 23:13:41 test-preload-230809 kubelet[4572]: I1101 23:13:41.245914 4572 scope.go:110] "RemoveContainer" containerID="98fd6e3f6f2000eed38807a1e0c5a46aa809b92a0a8341daafb19de01eda1eb8"
Nov 01 23:13:41 test-preload-230809 kubelet[4572]: E1101 23:13:41.246296 4572 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-230809_kube-system(37b967577315f9064699b525aec41d0d)\"" pod="kube-system/etcd-test-preload-230809" podUID=37b967577315f9064699b525aec41d0d
Nov 01 23:13:54 test-preload-230809 kubelet[4572]: I1101 23:13:54.245553 4572 scope.go:110] "RemoveContainer" containerID="98fd6e3f6f2000eed38807a1e0c5a46aa809b92a0a8341daafb19de01eda1eb8"
Nov 01 23:13:54 test-preload-230809 kubelet[4572]: E1101 23:13:54.245880 4572 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-230809_kube-system(37b967577315f9064699b525aec41d0d)\"" pod="kube-system/etcd-test-preload-230809" podUID=37b967577315f9064699b525aec41d0d
Nov 01 23:13:59 test-preload-230809 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
Nov 01 23:13:59 test-preload-230809 systemd[1]: kubelet.service: Succeeded.
Nov 01 23:13:59 test-preload-230809 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
-- /stdout --
** stderr **
E1101 23:14:03.605087 132226 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
! unable to fetch logs for: describe nodes
** /stderr **
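The kubelet log above shows etcd stuck in CrashLoopBackOff for the whole five-minute retry window, which is what keeps 2379/2380 occupied each time kubeadm re-runs preflight. A sketch for pulling the failed container's own output while the profile still exists (the truncated container ID below is taken from the kubelet messages; after the cleanup step at the end of the test it is gone):

    # List etcd containers, including exited ones:
    out/minikube-linux-amd64 ssh -p test-preload-230809 -- sudo crictl ps -a --name etcd
    # Dump the last failed attempt's logs (ID prefix from the kubelet log above):
    out/minikube-linux-amd64 ssh -p test-preload-230809 -- sudo crictl logs 98fd6e3f6f20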
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-230809 -n test-preload-230809
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-230809 -n test-preload-230809: exit status 2 (344.059582ms)
-- stdout --
Stopped
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "test-preload-230809" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-230809" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-linux-amd64 delete -p test-preload-230809
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-230809: (2.342431706s)
--- FAIL: TestPreload (356.66s)
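Since the port conflict comes from state left behind by the first start, deleting the profile (as the harness does above) is the reliable reset before retrying by hand; a minimal sketch, with the driver/runtime flags to be copied from the original failed invocation:

    # Remove the node container and profile state, freeing 2379/2380 with them:
    out/minikube-linux-amd64 delete -p test-preload-230809
    # Re-run the upgrade start; add the same driver/runtime flags as the failed attempt:
    out/minikube-linux-amd64 start -p test-preload-230809 --kubernetes-version=v1.24.6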