=== RUN TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220728205630-9812 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
=== CONT TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220728205630-9812 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd: (49.801526054s)
version_upgrade_test.go:234: (dbg) Run: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220728205630-9812
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220728205630-9812: (2.395008557s)
version_upgrade_test.go:239: (dbg) Run: out/minikube-linux-amd64 -p kubernetes-upgrade-20220728205630-9812 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20220728205630-9812 status --format={{.Host}}: exit status 7 (136.823578ms)
-- stdout --
Stopped
-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220728205630-9812 --memory=2200 --kubernetes-version=v1.24.3 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
=== CONT TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220728205630-9812 --memory=2200 --kubernetes-version=v1.24.3 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd: exit status 109 (8m47.214151725s)
-- stdout --
* [kubernetes-upgrade-20220728205630-9812] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=14555
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Using the docker driver based on existing profile
* Starting control plane node kubernetes-upgrade-20220728205630-9812 in cluster kubernetes-upgrade-20220728205630-9812
* Pulling base image ...
* Restarting existing docker container for "kubernetes-upgrade-20220728205630-9812" ...
* Preparing Kubernetes v1.24.3 on containerd 1.6.6 ...
- kubelet.cni-conf-dir=/etc/cni/net.mk
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
X Problems detected in kubelet:
Jul 28 21:06:09 kubernetes-upgrade-20220728205630-9812 kubelet[11608]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
-- /stdout --
** stderr **
I0728 20:57:22.866399 160802 out.go:296] Setting OutFile to fd 1 ...
I0728 20:57:22.866524 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 20:57:22.866534 160802 out.go:309] Setting ErrFile to fd 2...
I0728 20:57:22.866541 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 20:57:22.866690 160802 root.go:332] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
I0728 20:57:22.867437 160802 out.go:303] Setting JSON to false
I0728 20:57:22.869980 160802 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2393,"bootTime":1659039450,"procs":941,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1013-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0728 20:57:22.870074 160802 start.go:125] virtualization: kvm guest
I0728 20:57:22.872793 160802 out.go:177] * [kubernetes-upgrade-20220728205630-9812] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
I0728 20:57:22.874928 160802 out.go:177] - MINIKUBE_LOCATION=14555
I0728 20:57:22.874850 160802 notify.go:193] Checking for updates...
I0728 20:57:22.874990 160802 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-containerd-overlay2-amd64.tar.lz4
I0728 20:57:22.877463 160802 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0728 20:57:22.879433 160802 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
I0728 20:57:22.881270 160802 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
I0728 20:57:22.883209 160802 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0728 20:57:22.887381 160802 config.go:178] Loaded profile config "kubernetes-upgrade-20220728205630-9812": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
I0728 20:57:22.888391 160802 driver.go:365] Setting default libvirt URI to qemu:///system
I0728 20:57:22.961516 160802 docker.go:137] docker version: linux-20.10.17
I0728 20:57:22.961654 160802 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0728 20:57:23.046603 160802 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-containerd-overlay2-amd64.tar.lz4.checksum
I0728 20:57:23.157053 160802 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:76 OomKillDisable:true NGoroutines:71 SystemTime:2022-07-28 20:57:23.020323245 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662443520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0728 20:57:23.157164 160802 docker.go:254] overlay module found
I0728 20:57:23.160556 160802 out.go:177] * Using the docker driver based on existing profile
I0728 20:57:23.161955 160802 start.go:284] selected driver: docker
I0728 20:57:23.161984 160802 start.go:808] validating driver "docker" against &{Name:kubernetes-upgrade-20220728205630-9812 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220728205630-9812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0728 20:57:23.162137 160802 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0728 20:57:23.163457 160802 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0728 20:57:23.314494 160802 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:76 OomKillDisable:true NGoroutines:71 SystemTime:2022-07-28 20:57:23.19970577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662443520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0728 20:57:23.314857 160802 cni.go:95] Creating CNI manager for ""
I0728 20:57:23.314919 160802 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I0728 20:57:23.314937 160802 start_flags.go:310] config:
{Name:kubernetes-upgrade-20220728205630-9812 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:kubernetes-upgrade-20220728205630-9812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0728 20:57:23.317480 160802 out.go:177] * Starting control plane node kubernetes-upgrade-20220728205630-9812 in cluster kubernetes-upgrade-20220728205630-9812
I0728 20:57:23.318985 160802 cache.go:120] Beginning downloading kic base image for docker with containerd
I0728 20:57:23.320334 160802 out.go:177] * Pulling base image ...
I0728 20:57:23.321840 160802 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime containerd
I0728 20:57:23.321879 160802 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
I0728 20:57:23.321906 160802 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-containerd-overlay2-amd64.tar.lz4
I0728 20:57:23.321920 160802 cache.go:57] Caching tarball of preloaded images
I0728 20:57:23.322218 160802 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0728 20:57:23.322242 160802 cache.go:60] Finished verifying existence of preloaded tar for v1.24.3 on containerd
I0728 20:57:23.322442 160802 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728205630-9812/config.json ...
I0728 20:57:23.377916 160802 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
I0728 20:57:23.377951 160802 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
I0728 20:57:23.377974 160802 cache.go:208] Successfully downloaded all kic artifacts
I0728 20:57:23.378027 160802 start.go:370] acquiring machines lock for kubernetes-upgrade-20220728205630-9812: {Name:mk7be54e287cbff99b673df45d9b1f000bca8d24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0728 20:57:23.378178 160802 start.go:374] acquired machines lock for "kubernetes-upgrade-20220728205630-9812" in 98.875µs
I0728 20:57:23.378207 160802 start.go:95] Skipping create...Using existing machine configuration
I0728 20:57:23.378218 160802 fix.go:55] fixHost starting:
I0728 20:57:23.378543 160802 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220728205630-9812 --format={{.State.Status}}
I0728 20:57:23.430980 160802 fix.go:103] recreateIfNeeded on kubernetes-upgrade-20220728205630-9812: state=Stopped err=<nil>
W0728 20:57:23.431017 160802 fix.go:129] unexpected machine state, will restart: <nil>
I0728 20:57:23.436072 160802 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-20220728205630-9812" ...
I0728 20:57:23.437700 160802 cli_runner.go:164] Run: docker start kubernetes-upgrade-20220728205630-9812
I0728 20:57:23.977820 160802 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220728205630-9812 --format={{.State.Status}}
I0728 20:57:24.042235 160802 kic.go:415] container "kubernetes-upgrade-20220728205630-9812" state is running.
I0728 20:57:24.042750 160802 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220728205630-9812
I0728 20:57:24.094180 160802 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728205630-9812/config.json ...
I0728 20:57:24.094428 160802 machine.go:88] provisioning docker machine ...
I0728 20:57:24.094453 160802 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220728205630-9812"
I0728 20:57:24.094502 160802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220728205630-9812
I0728 20:57:24.146439 160802 main.go:134] libmachine: Using SSH client type: native
I0728 20:57:24.146650 160802 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7daec0] 0x7ddf20 <nil> [] 0s} 127.0.0.1 49337 <nil> <nil>}
I0728 20:57:24.146670 160802 main.go:134] libmachine: About to run SSH command:
sudo hostname kubernetes-upgrade-20220728205630-9812 && echo "kubernetes-upgrade-20220728205630-9812" | sudo tee /etc/hostname
I0728 20:57:24.147615 160802 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34250->127.0.0.1:49337: read: connection reset by peer
I0728 20:57:27.293929 160802 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20220728205630-9812
I0728 20:57:27.294016 160802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220728205630-9812
I0728 20:57:27.349629 160802 main.go:134] libmachine: Using SSH client type: native
I0728 20:57:27.349838 160802 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7daec0] 0x7ddf20 <nil> [] 0s} 127.0.0.1 49337 <nil> <nil>}
I0728 20:57:27.349875 160802 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\skubernetes-upgrade-20220728205630-9812' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20220728205630-9812/g' /etc/hosts;
else
echo '127.0.1.1 kubernetes-upgrade-20220728205630-9812' | sudo tee -a /etc/hosts;
fi
fi
I0728 20:57:27.485165 160802 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0728 20:57:27.485218 160802 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube}
I0728 20:57:27.485259 160802 ubuntu.go:177] setting up certificates
I0728 20:57:27.485271 160802 provision.go:83] configureAuth start
I0728 20:57:27.485388 160802 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220728205630-9812
I0728 20:57:27.549901 160802 provision.go:138] copyHostCerts
I0728 20:57:27.549980 160802 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem, removing ...
I0728 20:57:27.550000 160802 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
I0728 20:57:27.550089 160802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem (1078 bytes)
I0728 20:57:27.550229 160802 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem, removing ...
I0728 20:57:27.550247 160802 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
I0728 20:57:27.550293 160802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem (1123 bytes)
I0728 20:57:27.550376 160802 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem, removing ...
I0728 20:57:27.550387 160802 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
I0728 20:57:27.550424 160802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem (1679 bytes)
I0728 20:57:27.550484 160802 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20220728205630-9812 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20220728205630-9812]
I0728 20:57:27.634672 160802 provision.go:172] copyRemoteCerts
I0728 20:57:27.634765 160802 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0728 20:57:27.634829 160802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220728205630-9812
I0728 20:57:27.679230 160802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49337 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/kubernetes-upgrade-20220728205630-9812/id_rsa Username:docker}
I0728 20:57:27.773133 160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0728 20:57:27.795592 160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem --> /etc/docker/server.pem (1285 bytes)
I0728 20:57:27.817587 160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0728 20:57:27.842683 160802 provision.go:86] duration metric: configureAuth took 357.40006ms
I0728 20:57:27.842711 160802 ubuntu.go:193] setting minikube options for container-runtime
I0728 20:57:27.842965 160802 config.go:178] Loaded profile config "kubernetes-upgrade-20220728205630-9812": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
I0728 20:57:27.842986 160802 machine.go:91] provisioned docker machine in 3.748541238s
I0728 20:57:27.842997 160802 start.go:307] post-start starting for "kubernetes-upgrade-20220728205630-9812" (driver="docker")
I0728 20:57:27.843005 160802 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0728 20:57:27.843059 160802 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0728 20:57:27.843101 160802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220728205630-9812
I0728 20:57:27.892044 160802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49337 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/kubernetes-upgrade-20220728205630-9812/id_rsa Username:docker}
I0728 20:57:28.007429 160802 ssh_runner.go:195] Run: cat /etc/os-release
I0728 20:57:28.012518 160802 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0728 20:57:28.012548 160802 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0728 20:57:28.012561 160802 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0728 20:57:28.012569 160802 info.go:137] Remote host: Ubuntu 20.04.4 LTS
I0728 20:57:28.012582 160802 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/addons for local assets ...
I0728 20:57:28.012639 160802 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files for local assets ...
I0728 20:57:28.012738 160802 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/98122.pem -> 98122.pem in /etc/ssl/certs
I0728 20:57:28.012857 160802 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0728 20:57:28.028732 160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/98122.pem --> /etc/ssl/certs/98122.pem (1708 bytes)
I0728 20:57:28.063656 160802 start.go:310] post-start completed in 220.643878ms
I0728 20:57:28.063742 160802 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0728 20:57:28.063789 160802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220728205630-9812
I0728 20:57:28.105141 160802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49337 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/kubernetes-upgrade-20220728205630-9812/id_rsa Username:docker}
I0728 20:57:28.195810 160802 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0728 20:57:28.200454 160802 fix.go:57] fixHost completed within 4.822228352s
I0728 20:57:28.200488 160802 start.go:82] releasing machines lock for "kubernetes-upgrade-20220728205630-9812", held for 4.822292833s
I0728 20:57:28.200590 160802 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220728205630-9812
I0728 20:57:28.243411 160802 ssh_runner.go:195] Run: systemctl --version
I0728 20:57:28.243440 160802 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0728 20:57:28.243476 160802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220728205630-9812
I0728 20:57:28.243504 160802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220728205630-9812
I0728 20:57:28.287248 160802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49337 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/kubernetes-upgrade-20220728205630-9812/id_rsa Username:docker}
I0728 20:57:28.288096 160802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49337 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/kubernetes-upgrade-20220728205630-9812/id_rsa Username:docker}
I0728 20:57:28.410776 160802 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0728 20:57:28.427057 160802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0728 20:57:28.439749 160802 docker.go:188] disabling docker service ...
I0728 20:57:28.439806 160802 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0728 20:57:28.453023 160802 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0728 20:57:28.464039 160802 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0728 20:57:28.579586 160802 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0728 20:57:28.672569 160802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0728 20:57:28.684090 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0728 20:57:28.700124 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
I0728 20:57:28.709944 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
I0728 20:57:28.721125 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
I0728 20:57:28.730446 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
I0728 20:57:28.741123 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
I0728 20:57:28.752176 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
I0728 20:57:28.770413 160802 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0728 20:57:28.778286 160802 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0728 20:57:28.787192 160802 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0728 20:57:28.881447 160802 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0728 20:57:28.967021 160802 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
I0728 20:57:28.967083 160802 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0728 20:57:28.970732 160802 start.go:471] Will wait 60s for crictl version
I0728 20:57:28.970799 160802 ssh_runner.go:195] Run: sudo crictl version
I0728 20:57:29.008345 160802 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
stdout:
stderr:
time="2022-07-28T20:57:29Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
I0728 20:57:40.055149 160802 ssh_runner.go:195] Run: sudo crictl version
I0728 20:57:40.090720 160802 start.go:480] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.6.6
RuntimeApiVersion: v1alpha2
I0728 20:57:40.090790 160802 ssh_runner.go:195] Run: containerd --version
I0728 20:57:40.127526 160802 ssh_runner.go:195] Run: containerd --version
I0728 20:57:40.215915 160802 out.go:177] * Preparing Kubernetes v1.24.3 on containerd 1.6.6 ...
I0728 20:57:40.321010 160802 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220728205630-9812 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0728 20:57:40.371607 160802 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
I0728 20:57:40.378442 160802 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0728 20:57:40.399556 160802 out.go:177] - kubelet.cni-conf-dir=/etc/cni/net.mk
I0728 20:57:40.400845 160802 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime containerd
I0728 20:57:40.400931 160802 ssh_runner.go:195] Run: sudo crictl images --output json
I0728 20:57:40.439928 160802 containerd.go:543] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.3". assuming images are not preloaded.
I0728 20:57:40.440011 160802 ssh_runner.go:195] Run: which lz4
I0728 20:57:40.444389 160802 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0728 20:57:40.449051 160802 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/preloaded.tar.lz4': No such file or directory
I0728 20:57:40.449090 160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (447643024 bytes)
I0728 20:57:41.779435 160802 containerd.go:490] Took 1.335091 seconds to copy over tarball
I0728 20:57:41.779512 160802 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0728 20:57:46.119076 160802 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (4.339523499s)
I0728 20:57:46.119107 160802 containerd.go:497] Took 4.339639 seconds to extract the tarball
I0728 20:57:46.119121 160802 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0728 20:57:46.268120 160802 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0728 20:57:46.361829 160802 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0728 20:57:46.611465 160802 ssh_runner.go:195] Run: sudo crictl images --output json
I0728 20:57:46.660449 160802 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.24.3 k8s.gcr.io/kube-controller-manager:v1.24.3 k8s.gcr.io/kube-scheduler:v1.24.3 k8s.gcr.io/kube-proxy:v1.24.3 k8s.gcr.io/pause:3.7 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
I0728 20:57:46.660535 160802 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0728 20:57:46.660535 160802 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.24.3
I0728 20:57:46.660640 160802 image.go:134] retrieving image: k8s.gcr.io/pause:3.7
I0728 20:57:46.660754 160802 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.24.3
I0728 20:57:46.660778 160802 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
I0728 20:57:46.660844 160802 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.24.3
I0728 20:57:46.660759 160802 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.3-0
I0728 20:57:46.660974 160802 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.24.3
I0728 20:57:46.662604 160802 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.24.3: Error: No such image: k8s.gcr.io/kube-proxy:v1.24.3
I0728 20:57:46.662671 160802 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0728 20:57:46.662721 160802 image.go:177] daemon lookup for k8s.gcr.io/pause:3.7: Error: No such image: k8s.gcr.io/pause:3.7
I0728 20:57:46.662925 160802 image.go:177] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.6: Error: No such image: k8s.gcr.io/coredns/coredns:v1.8.6
I0728 20:57:46.662857 160802 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.24.3: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.24.3
I0728 20:57:46.662985 160802 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.5.3-0: Error: No such image: k8s.gcr.io/etcd:3.5.3-0
I0728 20:57:46.662608 160802 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.24.3: Error: No such image: k8s.gcr.io/kube-apiserver:v1.24.3
I0728 20:57:46.663093 160802 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.24.3: Error: No such image: k8s.gcr.io/kube-scheduler:v1.24.3
I0728 20:57:47.155423 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.3-0"
I0728 20:57:47.162590 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.3"
I0728 20:57:47.167294 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.3"
I0728 20:57:47.198831 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.3"
I0728 20:57:47.207411 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.7"
I0728 20:57:47.207767 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.3"
I0728 20:57:47.210394 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
I0728 20:57:47.535745 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
I0728 20:57:48.030433 160802 cache_images.go:116] "k8s.gcr.io/etcd:3.5.3-0" needs transfer: "k8s.gcr.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
I0728 20:57:48.073678 160802 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.3-0
I0728 20:57:48.073742 160802 ssh_runner.go:195] Run: which crictl
I0728 20:57:48.043910 160802 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.24.3" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.24.3" does not exist at hash "586c112956dfc2de95aef392cbfcbfa2b579c332993079ed4d13108ff2409f2f" in container runtime
I0728 20:57:48.073854 160802 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.24.3
I0728 20:57:48.073882 160802 ssh_runner.go:195] Run: which crictl
I0728 20:57:48.120097 160802 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.24.3" needs transfer: "k8s.gcr.io/kube-scheduler:v1.24.3" does not exist at hash "3a5aa3a515f5d28b31ac5410cfaa56ddbbec1c4e88cbdf711db9de6bbf6b00b0" in container runtime
I0728 20:57:48.120164 160802 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.24.3
I0728 20:57:48.120209 160802 ssh_runner.go:195] Run: which crictl
I0728 20:57:48.264494 160802 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.3": (1.065614646s)
I0728 20:57:48.264546 160802 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.24.3" needs transfer: "k8s.gcr.io/kube-proxy:v1.24.3" does not exist at hash "2ae1ba6417cbcd0b381139277508ddbebd0cf055344b710f7ea16e4da954a302" in container runtime
I0728 20:57:48.264576 160802 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.24.3
I0728 20:57:48.264595 160802 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.7": (1.057140613s)
I0728 20:57:48.264623 160802 ssh_runner.go:195] Run: which crictl
I0728 20:57:48.264649 160802 cache_images.go:116] "k8s.gcr.io/pause:3.7" needs transfer: "k8s.gcr.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
I0728 20:57:48.264674 160802 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.3": (1.056873554s)
I0728 20:57:48.264692 160802 cri.go:216] Removing image: k8s.gcr.io/pause:3.7
I0728 20:57:48.264727 160802 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6": (1.054299086s)
I0728 20:57:48.264750 160802 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
I0728 20:57:48.264759 160802 ssh_runner.go:195] Run: which crictl
I0728 20:57:48.264774 160802 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
I0728 20:57:48.264801 160802 ssh_runner.go:195] Run: which crictl
I0728 20:57:48.264700 160802 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.24.3" needs transfer: "k8s.gcr.io/kube-apiserver:v1.24.3" does not exist at hash "d521dd763e2e345a72534dd1503df3f5a14645ccb3fb0c0dd672fdd6da8853db" in container runtime
I0728 20:57:48.264833 160802 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.24.3
I0728 20:57:48.264857 160802 ssh_runner.go:195] Run: which crictl
I0728 20:57:48.360890 160802 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
I0728 20:57:48.360937 160802 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I0728 20:57:48.360980 160802 ssh_runner.go:195] Run: which crictl
I0728 20:57:48.361007 160802 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.3
I0728 20:57:48.361085 160802 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0
I0728 20:57:48.361101 160802 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.3
I0728 20:57:48.361122 160802 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.3
I0728 20:57:48.361189 160802 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
I0728 20:57:48.361235 160802 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.3
I0728 20:57:48.361308 160802 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7
I0728 20:57:49.192141 160802 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.3
I0728 20:57:49.192251 160802 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.3
I0728 20:57:49.192342 160802 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0
I0728 20:57:49.192393 160802 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
I0728 20:57:49.192584 160802 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.3
I0728 20:57:49.192656 160802 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.3
I0728 20:57:49.192769 160802 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I0728 20:57:49.192863 160802 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.3
I0728 20:57:49.192916 160802 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.3
I0728 20:57:49.197188 160802 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7
I0728 20:57:49.197309 160802 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
I0728 20:57:49.197385 160802 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
I0728 20:57:49.197466 160802 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
I0728 20:57:49.197528 160802 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.3
I0728 20:57:49.197586 160802 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.3
I0728 20:57:49.265698 160802 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
I0728 20:57:49.265835 160802 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
I0728 20:57:49.265940 160802 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.24.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.3: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.24.3': No such file or directory
I0728 20:57:49.265968 160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.3 --> /var/lib/minikube/images/kube-controller-manager_v1.24.3 (31038464 bytes)
I0728 20:57:49.266038 160802 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
I0728 20:57:49.266057 160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (102146048 bytes)
I0728 20:57:49.266131 160802 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.24.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.3: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.24.3': No such file or directory
I0728 20:57:49.266142 160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.3 --> /var/lib/minikube/images/kube-proxy_v1.24.3 (39518208 bytes)
I0728 20:57:49.266199 160802 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.24.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.3: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.24.3': No such file or directory
I0728 20:57:49.266258 160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.3 --> /var/lib/minikube/images/kube-scheduler_v1.24.3 (15491584 bytes)
I0728 20:57:49.266324 160802 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.24.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.3: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.24.3': No such file or directory
I0728 20:57:49.266339 160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.3 --> /var/lib/minikube/images/kube-apiserver_v1.24.3 (33799168 bytes)
I0728 20:57:49.266400 160802 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/pause_3.7': No such file or directory
I0728 20:57:49.266424 160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (311296 bytes)
I0728 20:57:49.266491 160802 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
I0728 20:57:49.266513 160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (13586432 bytes)
I0728 20:57:49.279837 160802 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
I0728 20:57:49.279880 160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
I0728 20:57:49.371984 160802 containerd.go:227] Loading image: /var/lib/minikube/images/pause_3.7
I0728 20:57:49.372079 160802 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7
I0728 20:57:49.703157 160802 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 from cache
I0728 20:57:49.703214 160802 containerd.go:227] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I0728 20:57:49.703268 160802 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
I0728 20:57:53.182209 160802 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5: (3.478901177s)
I0728 20:57:53.182240 160802 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I0728 20:57:53.182266 160802 containerd.go:227] Loading image: /var/lib/minikube/images/coredns_v1.8.6
I0728 20:57:53.182312 160802 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
I0728 20:57:54.477690 160802 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6: (1.295324483s)
I0728 20:57:54.477732 160802 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache
I0728 20:57:54.477769 160802 containerd.go:227] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.3
I0728 20:57:54.477832 160802 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.24.3
I0728 20:57:56.764402 160802 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.24.3: (2.286535978s)
I0728 20:57:56.764440 160802 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.3 from cache
I0728 20:57:56.764474 160802 containerd.go:227] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.3
I0728 20:57:56.764543 160802 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.24.3
I0728 20:57:58.756812 160802 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.24.3: (1.992235085s)
I0728 20:57:58.756851 160802 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.3 from cache
I0728 20:57:58.756878 160802 containerd.go:227] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.3
I0728 20:57:58.756925 160802 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.24.3
I0728 20:58:00.824477 160802 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.24.3: (2.067519209s)
I0728 20:58:00.824511 160802 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.3 from cache
I0728 20:58:00.824535 160802 containerd.go:227] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.3
I0728 20:58:00.824582 160802 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.24.3
I0728 20:58:06.955653 160802 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.24.3: (6.131039123s)
I0728 20:58:06.955693 160802 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.3 from cache
I0728 20:58:06.955736 160802 containerd.go:227] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
I0728 20:58:06.955816 160802 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
I0728 20:58:12.820155 160802 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0: (5.864301374s)
I0728 20:58:12.820191 160802 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 from cache
I0728 20:58:12.820215 160802 cache_images.go:123] Successfully loaded all cached images
I0728 20:58:12.820221 160802 cache_images.go:92] LoadImages completed in 26.159738226s
I0728 20:58:12.820281 160802 ssh_runner.go:195] Run: sudo crictl info
I0728 20:58:12.857260 160802 cni.go:95] Creating CNI manager for ""
I0728 20:58:12.857288 160802 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I0728 20:58:12.857302 160802 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0728 20:58:12.857314 160802 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20220728205630-9812 NodeName:kubernetes-upgrade-20220728205630-9812 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0728 20:58:12.857459 160802 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.67.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "kubernetes-upgrade-20220728205630-9812"
kubeletExtraArgs:
node-ip: 192.168.67.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
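The kubeadm config printed above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal stdlib-only sketch for splitting such a stream and listing each document's `kind` (the embedded config text here is an abridged stand-in, not the full file from the log):

```python
import re

# Sketch: split a multi-document kubeadm config and extract each
# document's `kind`. The CONFIG string is abridged from the log above.
CONFIG = """\
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.24.3
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""

def doc_kinds(text):
    """Return the `kind:` value of every YAML document in the stream."""
    kinds = []
    for doc in text.split("\n---\n"):
        m = re.search(r"^kind:\s*(\S+)", doc, re.MULTILINE)
        if m:
            kinds.append(m.group(1))
    return kinds

print(doc_kinds(CONFIG))
# → ['InitConfiguration', 'ClusterConfiguration', 'KubeletConfiguration', 'KubeProxyConfiguration']
```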
I0728 20:58:12.857542 160802 kubeadm.go:961] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubernetes-upgrade-20220728205630-9812 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.24.3 ClusterName:kubernetes-upgrade-20220728205630-9812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
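The kubelet drop-in above packs every flag into a single ExecStart line. A small sketch using the stdlib `shlex` module to turn such a line into a flag map (the command below is abridged from the log, keeping only a few of the flags shown):

```python
import shlex

# Sketch: parse a systemd ExecStart command line into a {flag: value}
# dict. EXEC_START is abridged from the kubelet unit in the log above.
EXEC_START = (
    "/var/lib/minikube/binaries/v1.24.3/kubelet "
    "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "
    "--container-runtime=remote "
    "--node-ip=192.168.67.2"
)

def parse_flags(cmdline):
    """Split an ExecStart line into (binary, {flag: value})."""
    binary, *args = shlex.split(cmdline)
    flags = {}
    for arg in args:
        key, _, value = arg.partition("=")
        flags[key.lstrip("-")] = value
    return binary, flags

binary, flags = parse_flags(EXEC_START)
print(flags["node-ip"])  # → 192.168.67.2
```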
I0728 20:58:12.857591 160802 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
I0728 20:58:12.866266 160802 binaries.go:44] Found k8s binaries, skipping transfer
I0728 20:58:12.866340 160802 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0728 20:58:12.875028 160802 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (562 bytes)
I0728 20:58:12.890926 160802 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0728 20:58:12.907372 160802 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2067 bytes)
I0728 20:58:12.937483 160802 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I0728 20:58:12.942247 160802 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0728 20:58:12.958278 160802 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728205630-9812 for IP: 192.168.67.2
I0728 20:58:12.958405 160802 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key
I0728 20:58:12.958465 160802 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key
I0728 20:58:12.958574 160802 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728205630-9812/client.key
I0728 20:58:12.958656 160802 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728205630-9812/apiserver.key.c7fa3a9e
I0728 20:58:12.958720 160802 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728205630-9812/proxy-client.key
I0728 20:58:12.958857 160802 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/9812.pem (1338 bytes)
W0728 20:58:12.959051 160802 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/9812_empty.pem, impossibly tiny 0 bytes
I0728 20:58:12.959082 160802 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem (1679 bytes)
I0728 20:58:12.959123 160802 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem (1078 bytes)
I0728 20:58:12.959177 160802 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem (1123 bytes)
I0728 20:58:12.959226 160802 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem (1679 bytes)
I0728 20:58:12.959290 160802 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/98122.pem (1708 bytes)
I0728 20:58:12.960147 160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728205630-9812/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0728 20:58:12.986028 160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728205630-9812/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0728 20:58:13.007015 160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728205630-9812/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0728 20:58:13.037224 160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728205630-9812/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0728 20:58:13.064120 160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0728 20:58:13.084718 160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0728 20:58:13.105796 160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0728 20:58:13.140283 160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0728 20:58:13.164951 160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/9812.pem --> /usr/share/ca-certificates/9812.pem (1338 bytes)
I0728 20:58:13.186809 160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/98122.pem --> /usr/share/ca-certificates/98122.pem (1708 bytes)
I0728 20:58:13.208400 160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0728 20:58:13.239182 160802 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0728 20:58:13.259410 160802 ssh_runner.go:195] Run: openssl version
I0728 20:58:13.265177 160802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9812.pem && ln -fs /usr/share/ca-certificates/9812.pem /etc/ssl/certs/9812.pem"
I0728 20:58:13.274469 160802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9812.pem
I0728 20:58:13.278647 160802 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 28 20:32 /usr/share/ca-certificates/9812.pem
I0728 20:58:13.278718 160802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9812.pem
I0728 20:58:13.284850 160802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9812.pem /etc/ssl/certs/51391683.0"
I0728 20:58:13.293605 160802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98122.pem && ln -fs /usr/share/ca-certificates/98122.pem /etc/ssl/certs/98122.pem"
I0728 20:58:13.302462 160802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98122.pem
I0728 20:58:13.306595 160802 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 28 20:32 /usr/share/ca-certificates/98122.pem
I0728 20:58:13.306655 160802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98122.pem
I0728 20:58:13.314598 160802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/98122.pem /etc/ssl/certs/3ec20f2e.0"
I0728 20:58:13.326004 160802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0728 20:58:13.338581 160802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0728 20:58:13.343850 160802 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 28 20:27 /usr/share/ca-certificates/minikubeCA.pem
I0728 20:58:13.343923 160802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0728 20:58:13.352223 160802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0728 20:58:13.363049 160802 kubeadm.go:395] StartCluster: {Name:kubernetes-upgrade-20220728205630-9812 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:kubernetes-upgrade-20220728205630-9812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0728 20:58:13.363159 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0728 20:58:13.363205 160802 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0728 20:58:13.391718 160802 cri.go:87] found id: ""
I0728 20:58:13.391791 160802 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0728 20:58:13.400607 160802 kubeadm.go:410] found existing configuration files, will attempt cluster restart
I0728 20:58:13.400640 160802 kubeadm.go:626] restartCluster start
I0728 20:58:13.400688 160802 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0728 20:58:13.409284 160802 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0728 20:58:13.409792 160802 kubeconfig.go:116] verify returned: extract IP: "kubernetes-upgrade-20220728205630-9812" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
I0728 20:58:13.409950 160802 kubeconfig.go:127] "kubernetes-upgrade-20220728205630-9812" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig - will repair!
I0728 20:58:13.410285 160802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mka3434310bc9890bf6f7ac8ad0a69157716fb18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0728 20:58:13.411295 160802 kapi.go:59] client config for kubernetes-upgrade-20220728205630-9812: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728205630-9812/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728205630-9812/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x173e480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0728 20:58:13.411974 160802 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0728 20:58:13.423206 160802 kubeadm.go:593] needs reconfigure: configs differ:
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml 2022-07-28 20:56:48.392908105 +0000
+++ /var/tmp/minikube/kubeadm.yaml.new 2022-07-28 20:58:12.932926410 +0000
@@ -1,4 +1,4 @@
-apiVersion: kubeadm.k8s.io/v1beta1
+apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.67.2
@@ -17,7 +17,7 @@
node-ip: 192.168.67.2
taints: []
---
-apiVersion: kubeadm.k8s.io/v1beta1
+apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
@@ -31,16 +31,14 @@
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
-clusterName: kubernetes-upgrade-20220728205630-9812
+clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
-dns:
- type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
- listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
-kubernetesVersion: v1.16.0
+ proxy-refresh-interval: "70000"
+kubernetesVersion: v1.24.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
-- /stdout --
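The diff above (old v1.16.0 config vs. new v1.24.3 config) is produced on the node with `sudo diff -u kubeadm.yaml kubeadm.yaml.new`. The same unified-diff format can be reproduced with Python's stdlib `difflib`; a minimal sketch using two abridged config fragments:

```python
import difflib

# Sketch: unified diff of two config fragments, mirroring the
# `sudo diff -u kubeadm.yaml kubeadm.yaml.new` step in the log above.
old = [
    "apiVersion: kubeadm.k8s.io/v1beta1\n",
    "kind: InitConfiguration\n",
]
new = [
    "apiVersion: kubeadm.k8s.io/v1beta3\n",
    "kind: InitConfiguration\n",
]

diff = list(difflib.unified_diff(
    old, new,
    fromfile="kubeadm.yaml",
    tofile="kubeadm.yaml.new",
))
print("".join(diff))
```

As in the log's diff, the unchanged `kind:` line appears as context while the `apiVersion` line shows up as a `-`/`+` pair.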
I0728 20:58:13.423255 160802 kubeadm.go:1092] stopping kube-system containers ...
I0728 20:58:13.423270 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0728 20:58:13.423348 160802 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0728 20:58:13.473793 160802 cri.go:87] found id: ""
I0728 20:58:13.473862 160802 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0728 20:58:13.485672 160802 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0728 20:58:13.494399 160802 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5755 Jul 28 20:56 /etc/kubernetes/admin.conf
-rw------- 1 root root 5791 Jul 28 20:56 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 5955 Jul 28 20:56 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5739 Jul 28 20:56 /etc/kubernetes/scheduler.conf
I0728 20:58:13.494459 160802 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0728 20:58:13.503458 160802 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0728 20:58:13.514285 160802 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0728 20:58:13.525686 160802 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0728 20:58:13.537391 160802 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0728 20:58:13.549491 160802 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0728 20:58:13.549527 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0728 20:58:13.599044 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0728 20:58:14.058149 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0728 20:58:14.280463 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0728 20:58:14.344454 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0728 20:58:14.399281 160802 api_server.go:51] waiting for apiserver process to appear ...
I0728 20:58:14.399364 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:14.920590 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:15.420026 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:15.920602 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:16.419998 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:16.920816 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:17.420237 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:17.920985 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:18.420113 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:18.920621 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:19.420630 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:19.920026 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:20.420202 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:20.920659 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:21.423007 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:21.920093 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:22.419968 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:22.920289 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:23.420526 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:23.920050 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:24.420011 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:24.920801 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:25.420093 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:25.920316 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:26.420382 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:26.920066 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:27.420968 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:27.920877 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:28.420216 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:28.919971 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:29.420285 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:29.920673 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:30.420755 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:30.919959 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:31.420199 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:31.920995 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:32.419973 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:32.920641 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:33.420167 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:33.920055 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:34.420665 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:34.920160 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:35.420181 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:35.920188 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:36.420351 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:36.920313 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:37.420026 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:37.920666 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:38.420282 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:38.920035 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:39.420431 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:39.920870 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:40.420909 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:40.920319 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:41.420805 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:41.920637 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:42.420645 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:42.920288 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:43.420992 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:43.920160 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:44.420867 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:44.920354 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:45.420769 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:45.920598 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:46.420272 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:46.920942 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:47.420411 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:47.920934 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:48.420775 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:48.920710 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:49.420908 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:49.920710 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:50.420755 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:50.920219 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:51.420042 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:51.920791 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:52.420015 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:52.920832 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:53.420302 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:53.920396 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:54.420913 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:54.920144 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:55.420979 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:55.920808 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:56.420599 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:56.920767 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:57.420403 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:57.920243 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:58.420793 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:58.920556 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:59.420248 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:58:59.920290 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:00.420775 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:00.920254 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:01.420633 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:01.920794 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:02.420958 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:02.920899 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:03.420311 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:03.920739 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:04.421016 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:04.920414 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:05.420561 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:05.920409 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:06.420727 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:06.920102 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:07.420070 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:07.920559 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:08.420780 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:08.920109 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:09.420050 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:09.920826 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:10.420960 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:10.920263 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:11.420329 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:11.920838 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:12.420168 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:12.920406 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:13.420519 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:13.920440 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
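The run of `ssh_runner` lines above is minikube polling for a live apiserver process roughly every 500ms. A minimal shell sketch of that wait loop (assumed behavior inferred from the log; `wait_for_apiserver`, the attempt count, and the shortened interval are illustrative, not minikube's actual code):

```shell
# Sketch of the polling seen above: re-run pgrep until kube-apiserver
# appears or a bounded number of attempts is exhausted.
wait_for_apiserver() {
  attempts=0
  max_attempts=4   # the real loop above retries for minutes, not seconds
  while [ "$attempts" -lt "$max_attempts" ]; do
    # -f matches against the full command line, -x requires an exact
    # pattern match, -n picks the newest matching process.
    if pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null 2>&1; then
      echo running
      return 0
    fi
    attempts=$((attempts + 1))
    sleep 1
  done
  echo timed-out
  return 1
}
wait_for_apiserver
```

In the log above the loop never succeeds, which is why the run eventually falls through to the `crictl ps` diagnostics.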
I0728 20:59:14.420568 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0728 20:59:14.420674 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0728 20:59:14.452004 160802 cri.go:87] found id: ""
I0728 20:59:14.452040 160802 logs.go:274] 0 containers: []
W0728 20:59:14.452052 160802 logs.go:276] No container was found matching "kube-apiserver"
I0728 20:59:14.452062 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0728 20:59:14.452138 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0728 20:59:14.482244 160802 cri.go:87] found id: ""
I0728 20:59:14.482270 160802 logs.go:274] 0 containers: []
W0728 20:59:14.482276 160802 logs.go:276] No container was found matching "etcd"
I0728 20:59:14.482283 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0728 20:59:14.482337 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0728 20:59:14.510585 160802 cri.go:87] found id: ""
I0728 20:59:14.510618 160802 logs.go:274] 0 containers: []
W0728 20:59:14.510629 160802 logs.go:276] No container was found matching "coredns"
I0728 20:59:14.510639 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0728 20:59:14.510714 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0728 20:59:14.539772 160802 cri.go:87] found id: ""
I0728 20:59:14.539803 160802 logs.go:274] 0 containers: []
W0728 20:59:14.539817 160802 logs.go:276] No container was found matching "kube-scheduler"
I0728 20:59:14.539826 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0728 20:59:14.539894 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0728 20:59:14.568212 160802 cri.go:87] found id: ""
I0728 20:59:14.568243 160802 logs.go:274] 0 containers: []
W0728 20:59:14.568251 160802 logs.go:276] No container was found matching "kube-proxy"
I0728 20:59:14.568260 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0728 20:59:14.568324 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0728 20:59:14.598386 160802 cri.go:87] found id: ""
I0728 20:59:14.598416 160802 logs.go:274] 0 containers: []
W0728 20:59:14.598425 160802 logs.go:276] No container was found matching "kubernetes-dashboard"
I0728 20:59:14.598433 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0728 20:59:14.598495 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0728 20:59:14.628899 160802 cri.go:87] found id: ""
I0728 20:59:14.628929 160802 logs.go:274] 0 containers: []
W0728 20:59:14.628939 160802 logs.go:276] No container was found matching "storage-provisioner"
I0728 20:59:14.628947 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0728 20:59:14.629005 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0728 20:59:14.660805 160802 cri.go:87] found id: ""
I0728 20:59:14.660839 160802 logs.go:274] 0 containers: []
W0728 20:59:14.660849 160802 logs.go:276] No container was found matching "kube-controller-manager"
I0728 20:59:14.660860 160802 logs.go:123] Gathering logs for kubelet ...
I0728 20:59:14.660876 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0728 20:59:14.737395 160802 logs.go:138] Found kubelet problem: Jul 28 20:59:14 kubernetes-upgrade-20220728205630-9812 kubelet[2333]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 20:59:14.787836 160802 logs.go:123] Gathering logs for dmesg ...
I0728 20:59:14.787880 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0728 20:59:14.804357 160802 logs.go:123] Gathering logs for describe nodes ...
I0728 20:59:14.804410 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0728 20:59:14.864241 160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0728 20:59:14.864270 160802 logs.go:123] Gathering logs for containerd ...
I0728 20:59:14.864281 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0728 20:59:14.901267 160802 logs.go:123] Gathering logs for container status ...
I0728 20:59:14.901323 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0728 20:59:14.931359 160802 out.go:309] Setting ErrFile to fd 2...
I0728 20:59:14.931389 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0728 20:59:14.931506 160802 out.go:239] X Problems detected in kubelet:
W0728 20:59:14.931524 160802 out.go:239] Jul 28 20:59:14 kubernetes-upgrade-20220728205630-9812 kubelet[2333]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 20:59:14.931540 160802 out.go:309] Setting ErrFile to fd 2...
I0728 20:59:14.931548 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
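The kubelet problem reported above is the actual failure: kubelet v1.24 removed the dockershim-era CNI flags (including `--cni-conf-dir`), so a kubelet invocation carried over from the v1.16 configuration no longer parses and the node components never start. A small sketch of extracting the rejected flag from a journal line of the shape logged above (the `bad_flag` variable is illustrative):

```shell
# Journal line copied from the kubelet problem reported above.
line='Jul 28 20:59:14 kubernetes-upgrade-20220728205630-9812 kubelet[2333]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir'

# Strip everything up to and including "unknown flag: " to isolate the flag.
bad_flag=${line##*unknown flag: }
echo "$bad_flag"   # --cni-conf-dir
```

Each subsequent retry cycle in this log reports the same flag from a fresh kubelet PID, so the restarts cannot make progress until the flag is dropped from the kubelet arguments.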
I0728 20:59:24.932473 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:25.420239 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0728 20:59:25.420335 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0728 20:59:25.450354 160802 cri.go:87] found id: ""
I0728 20:59:25.450389 160802 logs.go:274] 0 containers: []
W0728 20:59:25.450398 160802 logs.go:276] No container was found matching "kube-apiserver"
I0728 20:59:25.450407 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0728 20:59:25.450466 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0728 20:59:25.477734 160802 cri.go:87] found id: ""
I0728 20:59:25.477767 160802 logs.go:274] 0 containers: []
W0728 20:59:25.477777 160802 logs.go:276] No container was found matching "etcd"
I0728 20:59:25.477785 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0728 20:59:25.477844 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0728 20:59:25.503937 160802 cri.go:87] found id: ""
I0728 20:59:25.503968 160802 logs.go:274] 0 containers: []
W0728 20:59:25.503976 160802 logs.go:276] No container was found matching "coredns"
I0728 20:59:25.503984 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0728 20:59:25.504040 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0728 20:59:25.531863 160802 cri.go:87] found id: ""
I0728 20:59:25.531900 160802 logs.go:274] 0 containers: []
W0728 20:59:25.531907 160802 logs.go:276] No container was found matching "kube-scheduler"
I0728 20:59:25.531914 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0728 20:59:25.531963 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0728 20:59:25.561117 160802 cri.go:87] found id: ""
I0728 20:59:25.561149 160802 logs.go:274] 0 containers: []
W0728 20:59:25.561158 160802 logs.go:276] No container was found matching "kube-proxy"
I0728 20:59:25.561166 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0728 20:59:25.561224 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0728 20:59:25.589069 160802 cri.go:87] found id: ""
I0728 20:59:25.589103 160802 logs.go:274] 0 containers: []
W0728 20:59:25.589113 160802 logs.go:276] No container was found matching "kubernetes-dashboard"
I0728 20:59:25.589121 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0728 20:59:25.589184 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0728 20:59:25.621490 160802 cri.go:87] found id: ""
I0728 20:59:25.621519 160802 logs.go:274] 0 containers: []
W0728 20:59:25.621529 160802 logs.go:276] No container was found matching "storage-provisioner"
I0728 20:59:25.621539 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0728 20:59:25.621596 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0728 20:59:25.653547 160802 cri.go:87] found id: ""
I0728 20:59:25.653578 160802 logs.go:274] 0 containers: []
W0728 20:59:25.653587 160802 logs.go:276] No container was found matching "kube-controller-manager"
I0728 20:59:25.653598 160802 logs.go:123] Gathering logs for kubelet ...
I0728 20:59:25.653615 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0728 20:59:25.701614 160802 logs.go:138] Found kubelet problem: Jul 28 20:59:25 kubernetes-upgrade-20220728205630-9812 kubelet[2713]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 20:59:25.747589 160802 logs.go:123] Gathering logs for dmesg ...
I0728 20:59:25.747635 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0728 20:59:25.765145 160802 logs.go:123] Gathering logs for describe nodes ...
I0728 20:59:25.765184 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0728 20:59:25.823688 160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0728 20:59:25.823717 160802 logs.go:123] Gathering logs for containerd ...
I0728 20:59:25.823731 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0728 20:59:25.861132 160802 logs.go:123] Gathering logs for container status ...
I0728 20:59:25.861181 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0728 20:59:25.892229 160802 out.go:309] Setting ErrFile to fd 2...
I0728 20:59:25.892257 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0728 20:59:25.892385 160802 out.go:239] X Problems detected in kubelet:
W0728 20:59:25.892401 160802 out.go:239] Jul 28 20:59:25 kubernetes-upgrade-20220728205630-9812 kubelet[2713]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 20:59:25.892406 160802 out.go:309] Setting ErrFile to fd 2...
I0728 20:59:25.892411 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 20:59:35.894599 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:35.920357 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0728 20:59:35.920505 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0728 20:59:35.948405 160802 cri.go:87] found id: ""
I0728 20:59:35.948430 160802 logs.go:274] 0 containers: []
W0728 20:59:35.948436 160802 logs.go:276] No container was found matching "kube-apiserver"
I0728 20:59:35.948443 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0728 20:59:35.948508 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0728 20:59:35.976431 160802 cri.go:87] found id: ""
I0728 20:59:35.976462 160802 logs.go:274] 0 containers: []
W0728 20:59:35.976470 160802 logs.go:276] No container was found matching "etcd"
I0728 20:59:35.976477 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0728 20:59:35.976538 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0728 20:59:36.005563 160802 cri.go:87] found id: ""
I0728 20:59:36.005589 160802 logs.go:274] 0 containers: []
W0728 20:59:36.005595 160802 logs.go:276] No container was found matching "coredns"
I0728 20:59:36.005602 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0728 20:59:36.005649 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0728 20:59:36.033705 160802 cri.go:87] found id: ""
I0728 20:59:36.033734 160802 logs.go:274] 0 containers: []
W0728 20:59:36.033740 160802 logs.go:276] No container was found matching "kube-scheduler"
I0728 20:59:36.033745 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0728 20:59:36.033799 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0728 20:59:36.062919 160802 cri.go:87] found id: ""
I0728 20:59:36.062953 160802 logs.go:274] 0 containers: []
W0728 20:59:36.062962 160802 logs.go:276] No container was found matching "kube-proxy"
I0728 20:59:36.062972 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0728 20:59:36.063034 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0728 20:59:36.090984 160802 cri.go:87] found id: ""
I0728 20:59:36.091021 160802 logs.go:274] 0 containers: []
W0728 20:59:36.091031 160802 logs.go:276] No container was found matching "kubernetes-dashboard"
I0728 20:59:36.091040 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0728 20:59:36.091102 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0728 20:59:36.119742 160802 cri.go:87] found id: ""
I0728 20:59:36.119775 160802 logs.go:274] 0 containers: []
W0728 20:59:36.119784 160802 logs.go:276] No container was found matching "storage-provisioner"
I0728 20:59:36.119797 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0728 20:59:36.119858 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0728 20:59:36.156979 160802 cri.go:87] found id: ""
I0728 20:59:36.157012 160802 logs.go:274] 0 containers: []
W0728 20:59:36.157022 160802 logs.go:276] No container was found matching "kube-controller-manager"
I0728 20:59:36.157035 160802 logs.go:123] Gathering logs for containerd ...
I0728 20:59:36.157051 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0728 20:59:36.201189 160802 logs.go:123] Gathering logs for container status ...
I0728 20:59:36.201245 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0728 20:59:36.239188 160802 logs.go:123] Gathering logs for kubelet ...
I0728 20:59:36.239224 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0728 20:59:36.290569 160802 logs.go:138] Found kubelet problem: Jul 28 20:59:36 kubernetes-upgrade-20220728205630-9812 kubelet[3010]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 20:59:36.339953 160802 logs.go:123] Gathering logs for dmesg ...
I0728 20:59:36.339992 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0728 20:59:36.359173 160802 logs.go:123] Gathering logs for describe nodes ...
I0728 20:59:36.359208 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0728 20:59:36.415878 160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0728 20:59:36.415910 160802 out.go:309] Setting ErrFile to fd 2...
I0728 20:59:36.415924 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0728 20:59:36.416070 160802 out.go:239] X Problems detected in kubelet:
W0728 20:59:36.416088 160802 out.go:239] Jul 28 20:59:36 kubernetes-upgrade-20220728205630-9812 kubelet[3010]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 20:59:36.416097 160802 out.go:309] Setting ErrFile to fd 2...
I0728 20:59:36.416102 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 20:59:46.417052 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:46.920709 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0728 20:59:46.920800 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0728 20:59:46.958993 160802 cri.go:87] found id: ""
I0728 20:59:46.959021 160802 logs.go:274] 0 containers: []
W0728 20:59:46.959029 160802 logs.go:276] No container was found matching "kube-apiserver"
I0728 20:59:46.959038 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0728 20:59:46.959113 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0728 20:59:46.991959 160802 cri.go:87] found id: ""
I0728 20:59:46.991989 160802 logs.go:274] 0 containers: []
W0728 20:59:46.992002 160802 logs.go:276] No container was found matching "etcd"
I0728 20:59:46.992009 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0728 20:59:46.992069 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0728 20:59:47.023763 160802 cri.go:87] found id: ""
I0728 20:59:47.023796 160802 logs.go:274] 0 containers: []
W0728 20:59:47.023806 160802 logs.go:276] No container was found matching "coredns"
I0728 20:59:47.023816 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0728 20:59:47.023876 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0728 20:59:47.060629 160802 cri.go:87] found id: ""
I0728 20:59:47.060659 160802 logs.go:274] 0 containers: []
W0728 20:59:47.060668 160802 logs.go:276] No container was found matching "kube-scheduler"
I0728 20:59:47.060677 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0728 20:59:47.060733 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0728 20:59:47.091513 160802 cri.go:87] found id: ""
I0728 20:59:47.091546 160802 logs.go:274] 0 containers: []
W0728 20:59:47.091557 160802 logs.go:276] No container was found matching "kube-proxy"
I0728 20:59:47.091566 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0728 20:59:47.091628 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0728 20:59:47.125658 160802 cri.go:87] found id: ""
I0728 20:59:47.125689 160802 logs.go:274] 0 containers: []
W0728 20:59:47.125698 160802 logs.go:276] No container was found matching "kubernetes-dashboard"
I0728 20:59:47.125707 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0728 20:59:47.125769 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0728 20:59:47.161864 160802 cri.go:87] found id: ""
I0728 20:59:47.161895 160802 logs.go:274] 0 containers: []
W0728 20:59:47.161905 160802 logs.go:276] No container was found matching "storage-provisioner"
I0728 20:59:47.161913 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0728 20:59:47.161968 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0728 20:59:47.196227 160802 cri.go:87] found id: ""
I0728 20:59:47.196259 160802 logs.go:274] 0 containers: []
W0728 20:59:47.196268 160802 logs.go:276] No container was found matching "kube-controller-manager"
I0728 20:59:47.196281 160802 logs.go:123] Gathering logs for kubelet ...
I0728 20:59:47.196296 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0728 20:59:47.269877 160802 logs.go:138] Found kubelet problem: Jul 28 20:59:46 kubernetes-upgrade-20220728205630-9812 kubelet[3231]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 20:59:47.337001 160802 logs.go:123] Gathering logs for dmesg ...
I0728 20:59:47.337054 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0728 20:59:47.359286 160802 logs.go:123] Gathering logs for describe nodes ...
I0728 20:59:47.359345 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0728 20:59:47.440070 160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0728 20:59:47.440099 160802 logs.go:123] Gathering logs for containerd ...
I0728 20:59:47.440114 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0728 20:59:47.493104 160802 logs.go:123] Gathering logs for container status ...
I0728 20:59:47.493146 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0728 20:59:47.528049 160802 out.go:309] Setting ErrFile to fd 2...
I0728 20:59:47.528076 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0728 20:59:47.528204 160802 out.go:239] X Problems detected in kubelet:
W0728 20:59:47.528224 160802 out.go:239] Jul 28 20:59:46 kubernetes-upgrade-20220728205630-9812 kubelet[3231]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 20:59:47.528233 160802 out.go:309] Setting ErrFile to fd 2...
I0728 20:59:47.528241 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 20:59:57.529695 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:59:57.920472 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0728 20:59:57.920603 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0728 20:59:57.960187 160802 cri.go:87] found id: ""
I0728 20:59:57.960222 160802 logs.go:274] 0 containers: []
W0728 20:59:57.960231 160802 logs.go:276] No container was found matching "kube-apiserver"
I0728 20:59:57.960240 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0728 20:59:57.960304 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0728 20:59:58.001595 160802 cri.go:87] found id: ""
I0728 20:59:58.001626 160802 logs.go:274] 0 containers: []
W0728 20:59:58.001635 160802 logs.go:276] No container was found matching "etcd"
I0728 20:59:58.001644 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0728 20:59:58.001717 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0728 20:59:58.041548 160802 cri.go:87] found id: ""
I0728 20:59:58.041577 160802 logs.go:274] 0 containers: []
W0728 20:59:58.041586 160802 logs.go:276] No container was found matching "coredns"
I0728 20:59:58.041594 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0728 20:59:58.041661 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0728 20:59:58.085539 160802 cri.go:87] found id: ""
I0728 20:59:58.085567 160802 logs.go:274] 0 containers: []
W0728 20:59:58.085576 160802 logs.go:276] No container was found matching "kube-scheduler"
I0728 20:59:58.085585 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0728 20:59:58.085651 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0728 20:59:58.123384 160802 cri.go:87] found id: ""
I0728 20:59:58.123413 160802 logs.go:274] 0 containers: []
W0728 20:59:58.123423 160802 logs.go:276] No container was found matching "kube-proxy"
I0728 20:59:58.123432 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0728 20:59:58.123492 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0728 20:59:58.163432 160802 cri.go:87] found id: ""
I0728 20:59:58.163461 160802 logs.go:274] 0 containers: []
W0728 20:59:58.163470 160802 logs.go:276] No container was found matching "kubernetes-dashboard"
I0728 20:59:58.163480 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0728 20:59:58.163548 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0728 20:59:58.203457 160802 cri.go:87] found id: ""
I0728 20:59:58.203487 160802 logs.go:274] 0 containers: []
W0728 20:59:58.203497 160802 logs.go:276] No container was found matching "storage-provisioner"
I0728 20:59:58.203507 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0728 20:59:58.203566 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0728 20:59:58.241148 160802 cri.go:87] found id: ""
I0728 20:59:58.241179 160802 logs.go:274] 0 containers: []
W0728 20:59:58.241190 160802 logs.go:276] No container was found matching "kube-controller-manager"
I0728 20:59:58.241203 160802 logs.go:123] Gathering logs for kubelet ...
I0728 20:59:58.241219 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0728 20:59:58.311357 160802 logs.go:138] Found kubelet problem: Jul 28 20:59:57 kubernetes-upgrade-20220728205630-9812 kubelet[3525]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 20:59:58.374801 160802 logs.go:123] Gathering logs for dmesg ...
I0728 20:59:58.374838 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0728 20:59:58.395136 160802 logs.go:123] Gathering logs for describe nodes ...
I0728 20:59:58.395174 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0728 20:59:58.462065 160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0728 20:59:58.462097 160802 logs.go:123] Gathering logs for containerd ...
I0728 20:59:58.462110 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0728 20:59:58.517504 160802 logs.go:123] Gathering logs for container status ...
I0728 20:59:58.517558 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0728 20:59:58.559603 160802 out.go:309] Setting ErrFile to fd 2...
I0728 20:59:58.559641 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0728 20:59:58.559791 160802 out.go:239] X Problems detected in kubelet:
W0728 20:59:58.559818 160802 out.go:239] Jul 28 20:59:57 kubernetes-upgrade-20220728205630-9812 kubelet[3525]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 20:59:58.559827 160802 out.go:309] Setting ErrFile to fd 2...
I0728 20:59:58.559837 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 21:00:08.560745 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 21:00:08.920525 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0728 21:00:08.920620 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0728 21:00:08.953176 160802 cri.go:87] found id: ""
I0728 21:00:08.953205 160802 logs.go:274] 0 containers: []
W0728 21:00:08.953215 160802 logs.go:276] No container was found matching "kube-apiserver"
I0728 21:00:08.953222 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0728 21:00:08.953285 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0728 21:00:08.980208 160802 cri.go:87] found id: ""
I0728 21:00:08.980237 160802 logs.go:274] 0 containers: []
W0728 21:00:08.980244 160802 logs.go:276] No container was found matching "etcd"
I0728 21:00:08.980252 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0728 21:00:08.980318 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0728 21:00:09.007253 160802 cri.go:87] found id: ""
I0728 21:00:09.007279 160802 logs.go:274] 0 containers: []
W0728 21:00:09.007287 160802 logs.go:276] No container was found matching "coredns"
I0728 21:00:09.007293 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0728 21:00:09.007357 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0728 21:00:09.034922 160802 cri.go:87] found id: ""
I0728 21:00:09.034958 160802 logs.go:274] 0 containers: []
W0728 21:00:09.034965 160802 logs.go:276] No container was found matching "kube-scheduler"
I0728 21:00:09.034971 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0728 21:00:09.035023 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0728 21:00:09.067542 160802 cri.go:87] found id: ""
I0728 21:00:09.067570 160802 logs.go:274] 0 containers: []
W0728 21:00:09.067577 160802 logs.go:276] No container was found matching "kube-proxy"
I0728 21:00:09.067584 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0728 21:00:09.067640 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0728 21:00:09.100488 160802 cri.go:87] found id: ""
I0728 21:00:09.100603 160802 logs.go:274] 0 containers: []
W0728 21:00:09.100621 160802 logs.go:276] No container was found matching "kubernetes-dashboard"
I0728 21:00:09.100632 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0728 21:00:09.100703 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0728 21:00:09.134574 160802 cri.go:87] found id: ""
I0728 21:00:09.134607 160802 logs.go:274] 0 containers: []
W0728 21:00:09.134621 160802 logs.go:276] No container was found matching "storage-provisioner"
I0728 21:00:09.134630 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0728 21:00:09.134692 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0728 21:00:09.167350 160802 cri.go:87] found id: ""
I0728 21:00:09.167383 160802 logs.go:274] 0 containers: []
W0728 21:00:09.167392 160802 logs.go:276] No container was found matching "kube-controller-manager"
I0728 21:00:09.167405 160802 logs.go:123] Gathering logs for dmesg ...
I0728 21:00:09.167424 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0728 21:00:09.185295 160802 logs.go:123] Gathering logs for describe nodes ...
I0728 21:00:09.185345 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0728 21:00:09.264782 160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0728 21:00:09.264814 160802 logs.go:123] Gathering logs for containerd ...
I0728 21:00:09.264826 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0728 21:00:09.307311 160802 logs.go:123] Gathering logs for container status ...
I0728 21:00:09.307349 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0728 21:00:09.338585 160802 logs.go:123] Gathering logs for kubelet ...
I0728 21:00:09.338618 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0728 21:00:09.386957 160802 logs.go:138] Found kubelet problem: Jul 28 21:00:09 kubernetes-upgrade-20220728205630-9812 kubelet[3892]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 21:00:09.433592 160802 out.go:309] Setting ErrFile to fd 2...
I0728 21:00:09.433631 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0728 21:00:09.433760 160802 out.go:239] X Problems detected in kubelet:
W0728 21:00:09.433780 160802 out.go:239] Jul 28 21:00:09 kubernetes-upgrade-20220728205630-9812 kubelet[3892]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 21:00:09.433786 160802 out.go:309] Setting ErrFile to fd 2...
I0728 21:00:09.433794 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 21:00:19.435239 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 21:00:19.920821 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0728 21:00:19.920895 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0728 21:00:19.948271 160802 cri.go:87] found id: ""
I0728 21:00:19.948304 160802 logs.go:274] 0 containers: []
W0728 21:00:19.948313 160802 logs.go:276] No container was found matching "kube-apiserver"
I0728 21:00:19.948319 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0728 21:00:19.948373 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0728 21:00:19.974693 160802 cri.go:87] found id: ""
I0728 21:00:19.974721 160802 logs.go:274] 0 containers: []
W0728 21:00:19.974731 160802 logs.go:276] No container was found matching "etcd"
I0728 21:00:19.974740 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0728 21:00:19.974794 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0728 21:00:20.000482 160802 cri.go:87] found id: ""
I0728 21:00:20.000512 160802 logs.go:274] 0 containers: []
W0728 21:00:20.000519 160802 logs.go:276] No container was found matching "coredns"
I0728 21:00:20.000525 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0728 21:00:20.000572 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0728 21:00:20.027484 160802 cri.go:87] found id: ""
I0728 21:00:20.027518 160802 logs.go:274] 0 containers: []
W0728 21:00:20.027527 160802 logs.go:276] No container was found matching "kube-scheduler"
I0728 21:00:20.027535 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0728 21:00:20.027592 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0728 21:00:20.055223 160802 cri.go:87] found id: ""
I0728 21:00:20.055266 160802 logs.go:274] 0 containers: []
W0728 21:00:20.055274 160802 logs.go:276] No container was found matching "kube-proxy"
I0728 21:00:20.055280 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0728 21:00:20.055337 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0728 21:00:20.084863 160802 cri.go:87] found id: ""
I0728 21:00:20.084887 160802 logs.go:274] 0 containers: []
W0728 21:00:20.084894 160802 logs.go:276] No container was found matching "kubernetes-dashboard"
I0728 21:00:20.084901 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0728 21:00:20.084958 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0728 21:00:20.110692 160802 cri.go:87] found id: ""
I0728 21:00:20.110721 160802 logs.go:274] 0 containers: []
W0728 21:00:20.110727 160802 logs.go:276] No container was found matching "storage-provisioner"
I0728 21:00:20.110734 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0728 21:00:20.110780 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0728 21:00:20.137519 160802 cri.go:87] found id: ""
I0728 21:00:20.137544 160802 logs.go:274] 0 containers: []
W0728 21:00:20.137550 160802 logs.go:276] No container was found matching "kube-controller-manager"
I0728 21:00:20.137560 160802 logs.go:123] Gathering logs for describe nodes ...
I0728 21:00:20.137577 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0728 21:00:20.193399 160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0728 21:00:20.193426 160802 logs.go:123] Gathering logs for containerd ...
I0728 21:00:20.193440 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0728 21:00:20.231887 160802 logs.go:123] Gathering logs for container status ...
I0728 21:00:20.231932 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0728 21:00:20.262413 160802 logs.go:123] Gathering logs for kubelet ...
I0728 21:00:20.262441 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0728 21:00:20.314318 160802 logs.go:138] Found kubelet problem: Jul 28 21:00:19 kubernetes-upgrade-20220728205630-9812 kubelet[4116]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 21:00:20.363729 160802 logs.go:123] Gathering logs for dmesg ...
I0728 21:00:20.363775 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0728 21:00:20.380775 160802 out.go:309] Setting ErrFile to fd 2...
I0728 21:00:20.380812 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0728 21:00:20.380979 160802 out.go:239] X Problems detected in kubelet:
W0728 21:00:20.381036 160802 out.go:239] Jul 28 21:00:19 kubernetes-upgrade-20220728205630-9812 kubelet[4116]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 21:00:20.381048 160802 out.go:309] Setting ErrFile to fd 2...
I0728 21:00:20.381056 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 21:00:30.382727 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 21:00:30.420532 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0728 21:00:30.420616 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0728 21:00:30.446552 160802 cri.go:87] found id: ""
I0728 21:00:30.446580 160802 logs.go:274] 0 containers: []
W0728 21:00:30.446586 160802 logs.go:276] No container was found matching "kube-apiserver"
I0728 21:00:30.446595 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0728 21:00:30.446676 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0728 21:00:30.473419 160802 cri.go:87] found id: ""
I0728 21:00:30.473448 160802 logs.go:274] 0 containers: []
W0728 21:00:30.473456 160802 logs.go:276] No container was found matching "etcd"
I0728 21:00:30.473463 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0728 21:00:30.473519 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0728 21:00:30.498960 160802 cri.go:87] found id: ""
I0728 21:00:30.498997 160802 logs.go:274] 0 containers: []
W0728 21:00:30.499004 160802 logs.go:276] No container was found matching "coredns"
I0728 21:00:30.499010 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0728 21:00:30.499068 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0728 21:00:30.524213 160802 cri.go:87] found id: ""
I0728 21:00:30.524240 160802 logs.go:274] 0 containers: []
W0728 21:00:30.524247 160802 logs.go:276] No container was found matching "kube-scheduler"
I0728 21:00:30.524253 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0728 21:00:30.524313 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0728 21:00:30.551794 160802 cri.go:87] found id: ""
I0728 21:00:30.551823 160802 logs.go:274] 0 containers: []
W0728 21:00:30.551830 160802 logs.go:276] No container was found matching "kube-proxy"
I0728 21:00:30.551837 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0728 21:00:30.551889 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0728 21:00:30.583852 160802 cri.go:87] found id: ""
I0728 21:00:30.583884 160802 logs.go:274] 0 containers: []
W0728 21:00:30.583893 160802 logs.go:276] No container was found matching "kubernetes-dashboard"
I0728 21:00:30.583906 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0728 21:00:30.583965 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0728 21:00:30.608969 160802 cri.go:87] found id: ""
I0728 21:00:30.608993 160802 logs.go:274] 0 containers: []
W0728 21:00:30.609002 160802 logs.go:276] No container was found matching "storage-provisioner"
I0728 21:00:30.609014 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0728 21:00:30.609067 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0728 21:00:30.635438 160802 cri.go:87] found id: ""
I0728 21:00:30.635468 160802 logs.go:274] 0 containers: []
W0728 21:00:30.635477 160802 logs.go:276] No container was found matching "kube-controller-manager"
I0728 21:00:30.635487 160802 logs.go:123] Gathering logs for kubelet ...
I0728 21:00:30.635503 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0728 21:00:30.687716 160802 logs.go:138] Found kubelet problem: Jul 28 21:00:30 kubernetes-upgrade-20220728205630-9812 kubelet[4419]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 21:00:30.746740 160802 logs.go:123] Gathering logs for dmesg ...
I0728 21:00:30.746787 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0728 21:00:30.763153 160802 logs.go:123] Gathering logs for describe nodes ...
I0728 21:00:30.763203 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0728 21:00:30.815641 160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0728 21:00:30.815676 160802 logs.go:123] Gathering logs for containerd ...
I0728 21:00:30.815692 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0728 21:00:30.852829 160802 logs.go:123] Gathering logs for container status ...
I0728 21:00:30.852877 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0728 21:00:30.883637 160802 out.go:309] Setting ErrFile to fd 2...
I0728 21:00:30.883663 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0728 21:00:30.883773 160802 out.go:239] X Problems detected in kubelet:
W0728 21:00:30.883789 160802 out.go:239] Jul 28 21:00:30 kubernetes-upgrade-20220728205630-9812 kubelet[4419]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 21:00:30.883807 160802 out.go:309] Setting ErrFile to fd 2...
I0728 21:00:30.883815 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 21:00:40.885069 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 21:00:40.920063 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0728 21:00:40.920165 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0728 21:00:40.946715 160802 cri.go:87] found id: ""
I0728 21:00:40.946747 160802 logs.go:274] 0 containers: []
W0728 21:00:40.946755 160802 logs.go:276] No container was found matching "kube-apiserver"
I0728 21:00:40.946762 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0728 21:00:40.946815 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0728 21:00:40.974609 160802 cri.go:87] found id: ""
I0728 21:00:40.974635 160802 logs.go:274] 0 containers: []
W0728 21:00:40.974644 160802 logs.go:276] No container was found matching "etcd"
I0728 21:00:40.974652 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0728 21:00:40.974707 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0728 21:00:41.000572 160802 cri.go:87] found id: ""
I0728 21:00:41.000600 160802 logs.go:274] 0 containers: []
W0728 21:00:41.000607 160802 logs.go:276] No container was found matching "coredns"
I0728 21:00:41.000614 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0728 21:00:41.000672 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0728 21:00:41.026665 160802 cri.go:87] found id: ""
I0728 21:00:41.026696 160802 logs.go:274] 0 containers: []
W0728 21:00:41.026705 160802 logs.go:276] No container was found matching "kube-scheduler"
I0728 21:00:41.026712 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0728 21:00:41.026769 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0728 21:00:41.052800 160802 cri.go:87] found id: ""
I0728 21:00:41.052832 160802 logs.go:274] 0 containers: []
W0728 21:00:41.052842 160802 logs.go:276] No container was found matching "kube-proxy"
I0728 21:00:41.052851 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0728 21:00:41.052911 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0728 21:00:41.078370 160802 cri.go:87] found id: ""
I0728 21:00:41.078396 160802 logs.go:274] 0 containers: []
W0728 21:00:41.078403 160802 logs.go:276] No container was found matching "kubernetes-dashboard"
I0728 21:00:41.078410 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0728 21:00:41.078455 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0728 21:00:41.105120 160802 cri.go:87] found id: ""
I0728 21:00:41.105150 160802 logs.go:274] 0 containers: []
W0728 21:00:41.105159 160802 logs.go:276] No container was found matching "storage-provisioner"
I0728 21:00:41.105167 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0728 21:00:41.105223 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0728 21:00:41.131835 160802 cri.go:87] found id: ""
I0728 21:00:41.131869 160802 logs.go:274] 0 containers: []
W0728 21:00:41.131878 160802 logs.go:276] No container was found matching "kube-controller-manager"
I0728 21:00:41.131889 160802 logs.go:123] Gathering logs for kubelet ...
I0728 21:00:41.131904 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0728 21:00:41.183915 160802 logs.go:138] Found kubelet problem: Jul 28 21:00:40 kubernetes-upgrade-20220728205630-9812 kubelet[4708]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 21:00:41.230010 160802 logs.go:123] Gathering logs for dmesg ...
I0728 21:00:41.230053 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0728 21:00:41.246565 160802 logs.go:123] Gathering logs for describe nodes ...
I0728 21:00:41.246613 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0728 21:00:41.301005 160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0728 21:00:41.301036 160802 logs.go:123] Gathering logs for containerd ...
I0728 21:00:41.301048 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0728 21:00:41.339281 160802 logs.go:123] Gathering logs for container status ...
I0728 21:00:41.339331 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0728 21:00:41.369888 160802 out.go:309] Setting ErrFile to fd 2...
I0728 21:00:41.369914 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0728 21:00:41.370013 160802 out.go:239] X Problems detected in kubelet:
W0728 21:00:41.370026 160802 out.go:239] Jul 28 21:00:40 kubernetes-upgrade-20220728205630-9812 kubelet[4708]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 21:00:41.370035 160802 out.go:309] Setting ErrFile to fd 2...
I0728 21:00:41.370040 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 21:00:51.372050 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 21:00:51.420567 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0728 21:00:51.420678 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0728 21:00:51.447149 160802 cri.go:87] found id: ""
I0728 21:00:51.447171 160802 logs.go:274] 0 containers: []
W0728 21:00:51.447178 160802 logs.go:276] No container was found matching "kube-apiserver"
I0728 21:00:51.447185 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0728 21:00:51.447241 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0728 21:00:51.473508 160802 cri.go:87] found id: ""
I0728 21:00:51.473539 160802 logs.go:274] 0 containers: []
W0728 21:00:51.473547 160802 logs.go:276] No container was found matching "etcd"
I0728 21:00:51.473556 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0728 21:00:51.473614 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0728 21:00:51.500236 160802 cri.go:87] found id: ""
I0728 21:00:51.500264 160802 logs.go:274] 0 containers: []
W0728 21:00:51.500274 160802 logs.go:276] No container was found matching "coredns"
I0728 21:00:51.500281 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0728 21:00:51.500339 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0728 21:00:51.526468 160802 cri.go:87] found id: ""
I0728 21:00:51.526500 160802 logs.go:274] 0 containers: []
W0728 21:00:51.526511 160802 logs.go:276] No container was found matching "kube-scheduler"
I0728 21:00:51.526519 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0728 21:00:51.526568 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0728 21:00:51.552901 160802 cri.go:87] found id: ""
I0728 21:00:51.552930 160802 logs.go:274] 0 containers: []
W0728 21:00:51.552937 160802 logs.go:276] No container was found matching "kube-proxy"
I0728 21:00:51.552954 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0728 21:00:51.553011 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0728 21:00:51.579679 160802 cri.go:87] found id: ""
I0728 21:00:51.579709 160802 logs.go:274] 0 containers: []
W0728 21:00:51.579715 160802 logs.go:276] No container was found matching "kubernetes-dashboard"
I0728 21:00:51.579721 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0728 21:00:51.579773 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0728 21:00:51.604886 160802 cri.go:87] found id: ""
I0728 21:00:51.604917 160802 logs.go:274] 0 containers: []
W0728 21:00:51.604925 160802 logs.go:276] No container was found matching "storage-provisioner"
I0728 21:00:51.604934 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0728 21:00:51.604986 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0728 21:00:51.630094 160802 cri.go:87] found id: ""
I0728 21:00:51.630120 160802 logs.go:274] 0 containers: []
W0728 21:00:51.630130 160802 logs.go:276] No container was found matching "kube-controller-manager"
I0728 21:00:51.630142 160802 logs.go:123] Gathering logs for kubelet ...
I0728 21:00:51.630158 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0728 21:00:51.676727 160802 logs.go:138] Found kubelet problem: Jul 28 21:00:51 kubernetes-upgrade-20220728205630-9812 kubelet[5005]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 21:00:51.722849 160802 logs.go:123] Gathering logs for dmesg ...
I0728 21:00:51.722910 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0728 21:00:51.738320 160802 logs.go:123] Gathering logs for describe nodes ...
I0728 21:00:51.738355 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0728 21:00:51.792683 160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0728 21:00:51.792712 160802 logs.go:123] Gathering logs for containerd ...
I0728 21:00:51.792727 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0728 21:00:51.829025 160802 logs.go:123] Gathering logs for container status ...
I0728 21:00:51.829078 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0728 21:00:51.857460 160802 out.go:309] Setting ErrFile to fd 2...
I0728 21:00:51.857493 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0728 21:00:51.857601 160802 out.go:239] X Problems detected in kubelet:
W0728 21:00:51.857618 160802 out.go:239] Jul 28 21:00:51 kubernetes-upgrade-20220728205630-9812 kubelet[5005]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 21:00:51.857629 160802 out.go:309] Setting ErrFile to fd 2...
I0728 21:00:51.857636 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 21:01:01.859412 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 21:01:01.920827 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0728 21:01:01.920908 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0728 21:01:01.946905 160802 cri.go:87] found id: ""
I0728 21:01:01.946936 160802 logs.go:274] 0 containers: []
W0728 21:01:01.946946 160802 logs.go:276] No container was found matching "kube-apiserver"
I0728 21:01:01.946955 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0728 21:01:01.947014 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0728 21:01:01.972353 160802 cri.go:87] found id: ""
I0728 21:01:01.972377 160802 logs.go:274] 0 containers: []
W0728 21:01:01.972384 160802 logs.go:276] No container was found matching "etcd"
I0728 21:01:01.972390 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0728 21:01:01.972438 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0728 21:01:02.000643 160802 cri.go:87] found id: ""
I0728 21:01:02.000669 160802 logs.go:274] 0 containers: []
W0728 21:01:02.000676 160802 logs.go:276] No container was found matching "coredns"
I0728 21:01:02.000682 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0728 21:01:02.000727 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0728 21:01:02.029158 160802 cri.go:87] found id: ""
I0728 21:01:02.029202 160802 logs.go:274] 0 containers: []
W0728 21:01:02.029210 160802 logs.go:276] No container was found matching "kube-scheduler"
I0728 21:01:02.029217 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0728 21:01:02.029264 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0728 21:01:02.056503 160802 cri.go:87] found id: ""
I0728 21:01:02.056541 160802 logs.go:274] 0 containers: []
W0728 21:01:02.056551 160802 logs.go:276] No container was found matching "kube-proxy"
I0728 21:01:02.056561 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0728 21:01:02.056626 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0728 21:01:02.081798 160802 cri.go:87] found id: ""
I0728 21:01:02.081822 160802 logs.go:274] 0 containers: []
W0728 21:01:02.081829 160802 logs.go:276] No container was found matching "kubernetes-dashboard"
I0728 21:01:02.081836 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0728 21:01:02.081894 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0728 21:01:02.108138 160802 cri.go:87] found id: ""
I0728 21:01:02.108170 160802 logs.go:274] 0 containers: []
W0728 21:01:02.108179 160802 logs.go:276] No container was found matching "storage-provisioner"
I0728 21:01:02.108186 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0728 21:01:02.108235 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0728 21:01:02.133705 160802 cri.go:87] found id: ""
I0728 21:01:02.133736 160802 logs.go:274] 0 containers: []
W0728 21:01:02.133747 160802 logs.go:276] No container was found matching "kube-controller-manager"
I0728 21:01:02.133758 160802 logs.go:123] Gathering logs for container status ...
I0728 21:01:02.133773 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0728 21:01:02.163438 160802 logs.go:123] Gathering logs for kubelet ...
I0728 21:01:02.163469 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0728 21:01:02.211143 160802 logs.go:138] Found kubelet problem: Jul 28 21:01:01 kubernetes-upgrade-20220728205630-9812 kubelet[5302]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 21:01:02.262558 160802 logs.go:123] Gathering logs for dmesg ...
I0728 21:01:02.262602 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0728 21:01:02.280236 160802 logs.go:123] Gathering logs for describe nodes ...
I0728 21:01:02.280278 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0728 21:01:02.342071 160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0728 21:01:02.342124 160802 logs.go:123] Gathering logs for containerd ...
I0728 21:01:02.342134 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0728 21:01:02.382351 160802 out.go:309] Setting ErrFile to fd 2...
I0728 21:01:02.382390 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0728 21:01:02.382497 160802 out.go:239] X Problems detected in kubelet:
W0728 21:01:02.382510 160802 out.go:239] Jul 28 21:01:01 kubernetes-upgrade-20220728205630-9812 kubelet[5302]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 21:01:02.382514 160802 out.go:309] Setting ErrFile to fd 2...
I0728 21:01:02.382519 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 21:01:12.383837 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 21:01:12.419952 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0728 21:01:12.420042 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0728 21:01:12.447292 160802 cri.go:87] found id: ""
I0728 21:01:12.447322 160802 logs.go:274] 0 containers: []
W0728 21:01:12.447332 160802 logs.go:276] No container was found matching "kube-apiserver"
I0728 21:01:12.447340 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0728 21:01:12.447396 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0728 21:01:12.474514 160802 cri.go:87] found id: ""
I0728 21:01:12.474548 160802 logs.go:274] 0 containers: []
W0728 21:01:12.474556 160802 logs.go:276] No container was found matching "etcd"
I0728 21:01:12.474563 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0728 21:01:12.474630 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0728 21:01:12.501032 160802 cri.go:87] found id: ""
I0728 21:01:12.501057 160802 logs.go:274] 0 containers: []
W0728 21:01:12.501066 160802 logs.go:276] No container was found matching "coredns"
I0728 21:01:12.501076 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0728 21:01:12.501135 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0728 21:01:12.528468 160802 cri.go:87] found id: ""
I0728 21:01:12.528492 160802 logs.go:274] 0 containers: []
W0728 21:01:12.528499 160802 logs.go:276] No container was found matching "kube-scheduler"
I0728 21:01:12.528506 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0728 21:01:12.528555 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0728 21:01:12.554585 160802 cri.go:87] found id: ""
I0728 21:01:12.554619 160802 logs.go:274] 0 containers: []
W0728 21:01:12.554628 160802 logs.go:276] No container was found matching "kube-proxy"
I0728 21:01:12.554636 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0728 21:01:12.554691 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0728 21:01:12.580527 160802 cri.go:87] found id: ""
I0728 21:01:12.580556 160802 logs.go:274] 0 containers: []
W0728 21:01:12.580565 160802 logs.go:276] No container was found matching "kubernetes-dashboard"
I0728 21:01:12.580574 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0728 21:01:12.580628 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0728 21:01:12.606243 160802 cri.go:87] found id: ""
I0728 21:01:12.606276 160802 logs.go:274] 0 containers: []
W0728 21:01:12.606285 160802 logs.go:276] No container was found matching "storage-provisioner"
I0728 21:01:12.606293 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0728 21:01:12.606340 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0728 21:01:12.633081 160802 cri.go:87] found id: ""
I0728 21:01:12.633113 160802 logs.go:274] 0 containers: []
W0728 21:01:12.633122 160802 logs.go:276] No container was found matching "kube-controller-manager"
I0728 21:01:12.633137 160802 logs.go:123] Gathering logs for kubelet ...
I0728 21:01:12.633152 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0728 21:01:12.682987 160802 logs.go:138] Found kubelet problem: Jul 28 21:01:12 kubernetes-upgrade-20220728205630-9812 kubelet[5597]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 21:01:12.729128 160802 logs.go:123] Gathering logs for dmesg ...
I0728 21:01:12.729180 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0728 21:01:12.745294 160802 logs.go:123] Gathering logs for describe nodes ...
I0728 21:01:12.745336 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0728 21:01:12.800671 160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0728 21:01:12.800695 160802 logs.go:123] Gathering logs for containerd ...
I0728 21:01:12.800707 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0728 21:01:12.839152 160802 logs.go:123] Gathering logs for container status ...
I0728 21:01:12.839216 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0728 21:01:12.867992 160802 out.go:309] Setting ErrFile to fd 2...
I0728 21:01:12.868811 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0728 21:01:12.868938 160802 out.go:239] X Problems detected in kubelet:
W0728 21:01:12.868952 160802 out.go:239] Jul 28 21:01:12 kubernetes-upgrade-20220728205630-9812 kubelet[5597]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 21:01:12.868963 160802 out.go:309] Setting ErrFile to fd 2...
I0728 21:01:12.868969 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 21:01:22.870567 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 21:01:22.920615 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0728 21:01:22.920690 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0728 21:01:22.947277 160802 cri.go:87] found id: ""
I0728 21:01:22.947302 160802 logs.go:274] 0 containers: []
W0728 21:01:22.947308 160802 logs.go:276] No container was found matching "kube-apiserver"
I0728 21:01:22.947315 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0728 21:01:22.947365 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0728 21:01:22.974015 160802 cri.go:87] found id: ""
I0728 21:01:22.974047 160802 logs.go:274] 0 containers: []
W0728 21:01:22.974054 160802 logs.go:276] No container was found matching "etcd"
I0728 21:01:22.974061 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0728 21:01:22.974131 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0728 21:01:23.001666 160802 cri.go:87] found id: ""
I0728 21:01:23.001699 160802 logs.go:274] 0 containers: []
W0728 21:01:23.001706 160802 logs.go:276] No container was found matching "coredns"
I0728 21:01:23.001713 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0728 21:01:23.001761 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0728 21:01:23.027384 160802 cri.go:87] found id: ""
I0728 21:01:23.027415 160802 logs.go:274] 0 containers: []
W0728 21:01:23.027422 160802 logs.go:276] No container was found matching "kube-scheduler"
I0728 21:01:23.027428 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0728 21:01:23.027493 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0728 21:01:23.054676 160802 cri.go:87] found id: ""
I0728 21:01:23.054705 160802 logs.go:274] 0 containers: []
W0728 21:01:23.054723 160802 logs.go:276] No container was found matching "kube-proxy"
I0728 21:01:23.054733 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0728 21:01:23.054791 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0728 21:01:23.081094 160802 cri.go:87] found id: ""
I0728 21:01:23.081120 160802 logs.go:274] 0 containers: []
W0728 21:01:23.081127 160802 logs.go:276] No container was found matching "kubernetes-dashboard"
I0728 21:01:23.081135 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0728 21:01:23.081180 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0728 21:01:23.106469 160802 cri.go:87] found id: ""
I0728 21:01:23.106502 160802 logs.go:274] 0 containers: []
W0728 21:01:23.106512 160802 logs.go:276] No container was found matching "storage-provisioner"
I0728 21:01:23.106521 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0728 21:01:23.106583 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0728 21:01:23.133292 160802 cri.go:87] found id: ""
I0728 21:01:23.133319 160802 logs.go:274] 0 containers: []
W0728 21:01:23.133328 160802 logs.go:276] No container was found matching "kube-controller-manager"
I0728 21:01:23.133339 160802 logs.go:123] Gathering logs for dmesg ...
I0728 21:01:23.133356 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0728 21:01:23.149082 160802 logs.go:123] Gathering logs for describe nodes ...
I0728 21:01:23.149122 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0728 21:01:23.205713 160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0728 21:01:23.205743 160802 logs.go:123] Gathering logs for containerd ...
I0728 21:01:23.205755 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0728 21:01:23.247400 160802 logs.go:123] Gathering logs for container status ...
I0728 21:01:23.247445 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0728 21:01:23.277092 160802 logs.go:123] Gathering logs for kubelet ...
I0728 21:01:23.277121 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0728 21:01:23.323939 160802 logs.go:138] Found kubelet problem: Jul 28 21:01:22 kubernetes-upgrade-20220728205630-9812 kubelet[5894]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 21:01:23.369893 160802 out.go:309] Setting ErrFile to fd 2...
I0728 21:01:23.369929 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0728 21:01:23.370065 160802 out.go:239] X Problems detected in kubelet:
W0728 21:01:23.370086 160802 out.go:239] Jul 28 21:01:22 kubernetes-upgrade-20220728205630-9812 kubelet[5894]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 21:01:23.370093 160802 out.go:309] Setting ErrFile to fd 2...
I0728 21:01:23.370102 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 21:01:33.370317 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 21:01:33.420579 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0728 21:01:33.420685 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0728 21:01:33.450569 160802 cri.go:87] found id: ""
I0728 21:01:33.450593 160802 logs.go:274] 0 containers: []
W0728 21:01:33.450599 160802 logs.go:276] No container was found matching "kube-apiserver"
I0728 21:01:33.450605 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0728 21:01:33.450652 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0728 21:01:33.481100 160802 cri.go:87] found id: ""
I0728 21:01:33.481127 160802 logs.go:274] 0 containers: []
W0728 21:01:33.481135 160802 logs.go:276] No container was found matching "etcd"
I0728 21:01:33.481143 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0728 21:01:33.481198 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0728 21:01:33.512490 160802 cri.go:87] found id: ""
I0728 21:01:33.512523 160802 logs.go:274] 0 containers: []
W0728 21:01:33.512532 160802 logs.go:276] No container was found matching "coredns"
I0728 21:01:33.512541 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0728 21:01:33.512603 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0728 21:01:33.543944 160802 cri.go:87] found id: ""
I0728 21:01:33.543974 160802 logs.go:274] 0 containers: []
W0728 21:01:33.543983 160802 logs.go:276] No container was found matching "kube-scheduler"
I0728 21:01:33.543991 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0728 21:01:33.544055 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0728 21:01:33.575014 160802 cri.go:87] found id: ""
I0728 21:01:33.575045 160802 logs.go:274] 0 containers: []
W0728 21:01:33.575054 160802 logs.go:276] No container was found matching "kube-proxy"
I0728 21:01:33.575063 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0728 21:01:33.575125 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0728 21:01:33.603099 160802 cri.go:87] found id: ""
I0728 21:01:33.603130 160802 logs.go:274] 0 containers: []
W0728 21:01:33.603140 160802 logs.go:276] No container was found matching "kubernetes-dashboard"
I0728 21:01:33.603149 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0728 21:01:33.603196 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0728 21:01:33.630296 160802 cri.go:87] found id: ""
I0728 21:01:33.630325 160802 logs.go:274] 0 containers: []
W0728 21:01:33.630332 160802 logs.go:276] No container was found matching "storage-provisioner"
I0728 21:01:33.630339 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0728 21:01:33.630387 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0728 21:01:33.657324 160802 cri.go:87] found id: ""
I0728 21:01:33.657356 160802 logs.go:274] 0 containers: []
W0728 21:01:33.657365 160802 logs.go:276] No container was found matching "kube-controller-manager"
I0728 21:01:33.657378 160802 logs.go:123] Gathering logs for kubelet ...
I0728 21:01:33.657392 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0728 21:01:33.705000 160802 logs.go:138] Found kubelet problem: Jul 28 21:01:33 kubernetes-upgrade-20220728205630-9812 kubelet[6192]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 21:01:33.754620 160802 logs.go:123] Gathering logs for dmesg ...
I0728 21:01:33.754665 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0728 21:01:33.770353 160802 logs.go:123] Gathering logs for describe nodes ...
I0728 21:01:33.770391 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0728 21:01:33.825700 160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0728 21:01:33.825737 160802 logs.go:123] Gathering logs for containerd ...
I0728 21:01:33.825751 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0728 21:01:33.863969 160802 logs.go:123] Gathering logs for container status ...
I0728 21:01:33.864013 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0728 21:01:33.893526 160802 out.go:309] Setting ErrFile to fd 2...
I0728 21:01:33.893556 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0728 21:01:33.893668 160802 out.go:239] X Problems detected in kubelet:
W0728 21:01:33.893681 160802 out.go:239] Jul 28 21:01:33 kubernetes-upgrade-20220728205630-9812 kubelet[6192]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 21:01:33.893685 160802 out.go:309] Setting ErrFile to fd 2...
I0728 21:01:33.893690 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 21:01:43.895001 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 21:01:43.920602 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0728 21:01:43.920693 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0728 21:01:43.951291 160802 cri.go:87] found id: ""
I0728 21:01:43.951324 160802 logs.go:274] 0 containers: []
W0728 21:01:43.951335 160802 logs.go:276] No container was found matching "kube-apiserver"
I0728 21:01:43.951345 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0728 21:01:43.951406 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0728 21:01:43.982228 160802 cri.go:87] found id: ""
I0728 21:01:43.982264 160802 logs.go:274] 0 containers: []
W0728 21:01:43.982274 160802 logs.go:276] No container was found matching "etcd"
I0728 21:01:43.982284 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0728 21:01:43.982350 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0728 21:01:44.018496 160802 cri.go:87] found id: ""
I0728 21:01:44.018528 160802 logs.go:274] 0 containers: []
W0728 21:01:44.018538 160802 logs.go:276] No container was found matching "coredns"
I0728 21:01:44.018547 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0728 21:01:44.018613 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0728 21:01:44.049759 160802 cri.go:87] found id: ""
I0728 21:01:44.049796 160802 logs.go:274] 0 containers: []
W0728 21:01:44.049805 160802 logs.go:276] No container was found matching "kube-scheduler"
I0728 21:01:44.049815 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0728 21:01:44.049875 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0728 21:01:44.081950 160802 cri.go:87] found id: ""
I0728 21:01:44.081983 160802 logs.go:274] 0 containers: []
W0728 21:01:44.081992 160802 logs.go:276] No container was found matching "kube-proxy"
I0728 21:01:44.082000 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0728 21:01:44.082063 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0728 21:01:44.108825 160802 cri.go:87] found id: ""
I0728 21:01:44.108858 160802 logs.go:274] 0 containers: []
W0728 21:01:44.108872 160802 logs.go:276] No container was found matching "kubernetes-dashboard"
I0728 21:01:44.108881 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0728 21:01:44.108929 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0728 21:01:44.139774 160802 cri.go:87] found id: ""
I0728 21:01:44.139798 160802 logs.go:274] 0 containers: []
W0728 21:01:44.139804 160802 logs.go:276] No container was found matching "storage-provisioner"
I0728 21:01:44.139816 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0728 21:01:44.139879 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0728 21:01:44.166610 160802 cri.go:87] found id: ""
I0728 21:01:44.166635 160802 logs.go:274] 0 containers: []
W0728 21:01:44.166642 160802 logs.go:276] No container was found matching "kube-controller-manager"
I0728 21:01:44.166651 160802 logs.go:123] Gathering logs for kubelet ...
I0728 21:01:44.166664 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0728 21:01:44.215147 160802 logs.go:138] Found kubelet problem: Jul 28 21:01:43 kubernetes-upgrade-20220728205630-9812 kubelet[6488]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 21:01:44.262161 160802 logs.go:123] Gathering logs for dmesg ...
I0728 21:01:44.262206 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0728 21:01:44.279177 160802 logs.go:123] Gathering logs for describe nodes ...
I0728 21:01:44.279226 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0728 21:01:44.336817 160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0728 21:01:44.336847 160802 logs.go:123] Gathering logs for containerd ...
I0728 21:01:44.336859 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0728 21:01:44.374149 160802 logs.go:123] Gathering logs for container status ...
I0728 21:01:44.374193 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0728 21:01:44.403890 160802 out.go:309] Setting ErrFile to fd 2...
I0728 21:01:44.403916 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0728 21:01:44.404023 160802 out.go:239] X Problems detected in kubelet:
W0728 21:01:44.404037 160802 out.go:239] Jul 28 21:01:43 kubernetes-upgrade-20220728205630-9812 kubelet[6488]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 21:01:44.404041 160802 out.go:309] Setting ErrFile to fd 2...
I0728 21:01:44.404046 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
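[editor's note: the repeated `logs.go:138] Found kubelet problem` entries above come from minikube scanning `journalctl -u kubelet` output for error lines. A minimal sketch of that kind of scan — not minikube's actual `logs.go` implementation, with `findKubeletProblems` as a hypothetical helper name:]

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// findKubeletProblems scans journalctl-style output for kubelet error
// lines such as the "--cni-conf-dir" failures reported in this log,
// and returns the error message portion of each matching line.
func findKubeletProblems(journal string) []string {
	// Matches e.g. `kubelet[6192]: Error: failed to parse kubelet flag: ...`
	re := regexp.MustCompile(`kubelet\[\d+\]: Error: (.+)`)
	var problems []string
	for _, line := range strings.Split(journal, "\n") {
		if m := re.FindStringSubmatch(line); m != nil {
			problems = append(problems, m[1])
		}
	}
	return problems
}

func main() {
	journal := "Jul 28 21:01:43 kubernetes-upgrade-20220728205630-9812 kubelet[6488]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	fmt.Println(findKubeletProblems(journal))
}
```

[the recurring message itself — `unknown flag: --cni-conf-dir` — is consistent with kubelet v1.24 having dropped that flag, so a kubelet configured with it crash-loops and the apiserver on localhost:8443 never comes up, matching the refused connections above.]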
I0728 21:01:54.405409 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 21:01:54.419902 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0728 21:01:54.419988 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0728 21:01:54.448684 160802 cri.go:87] found id: ""
I0728 21:01:54.448710 160802 logs.go:274] 0 containers: []
W0728 21:01:54.448719 160802 logs.go:276] No container was found matching "kube-apiserver"
I0728 21:01:54.448728 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0728 21:01:54.448794 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0728 21:01:54.477290 160802 cri.go:87] found id: ""
I0728 21:01:54.477326 160802 logs.go:274] 0 containers: []
W0728 21:01:54.477335 160802 logs.go:276] No container was found matching "etcd"
I0728 21:01:54.477343 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0728 21:01:54.477400 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0728 21:01:54.503660 160802 cri.go:87] found id: ""
I0728 21:01:54.503689 160802 logs.go:274] 0 containers: []
W0728 21:01:54.503698 160802 logs.go:276] No container was found matching "coredns"
I0728 21:01:54.503707 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0728 21:01:54.503755 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0728 21:01:54.530117 160802 cri.go:87] found id: ""
I0728 21:01:54.530143 160802 logs.go:274] 0 containers: []
W0728 21:01:54.530152 160802 logs.go:276] No container was found matching "kube-scheduler"
I0728 21:01:54.530162 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0728 21:01:54.530216 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0728 21:01:54.557645 160802 cri.go:87] found id: ""
I0728 21:01:54.557683 160802 logs.go:274] 0 containers: []
W0728 21:01:54.557694 160802 logs.go:276] No container was found matching "kube-proxy"
I0728 21:01:54.557703 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0728 21:01:54.557766 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0728 21:01:54.584750 160802 cri.go:87] found id: ""
I0728 21:01:54.584777 160802 logs.go:274] 0 containers: []
W0728 21:01:54.584784 160802 logs.go:276] No container was found matching "kubernetes-dashboard"
I0728 21:01:54.584790 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0728 21:01:54.584837 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0728 21:01:54.611538 160802 cri.go:87] found id: ""
I0728 21:01:54.611567 160802 logs.go:274] 0 containers: []
W0728 21:01:54.611574 160802 logs.go:276] No container was found matching "storage-provisioner"
I0728 21:01:54.611582 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0728 21:01:54.611642 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0728 21:01:54.639297 160802 cri.go:87] found id: ""
I0728 21:01:54.639331 160802 logs.go:274] 0 containers: []
W0728 21:01:54.639337 160802 logs.go:276] No container was found matching "kube-controller-manager"
I0728 21:01:54.639347 160802 logs.go:123] Gathering logs for kubelet ...
I0728 21:01:54.639358 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0728 21:01:54.691418 160802 logs.go:138] Found kubelet problem: Jul 28 21:01:54 kubernetes-upgrade-20220728205630-9812 kubelet[6790]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 21:01:54.737146 160802 logs.go:123] Gathering logs for dmesg ...
I0728 21:01:54.737189 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0728 21:01:54.755335 160802 logs.go:123] Gathering logs for describe nodes ...
I0728 21:01:54.755382 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0728 21:01:54.811257 160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0728 21:01:54.811310 160802 logs.go:123] Gathering logs for containerd ...
I0728 21:01:54.811327 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0728 21:01:54.848751 160802 logs.go:123] Gathering logs for container status ...
I0728 21:01:54.848804 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0728 21:01:54.880228 160802 out.go:309] Setting ErrFile to fd 2...
I0728 21:01:54.880254 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0728 21:01:54.880373 160802 out.go:239] X Problems detected in kubelet:
W0728 21:01:54.880385 160802 out.go:239] Jul 28 21:01:54 kubernetes-upgrade-20220728205630-9812 kubelet[6790]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 21:01:54.880391 160802 out.go:309] Setting ErrFile to fd 2...
I0728 21:01:54.880413 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 21:02:04.882058 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 21:02:04.920153 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0728 21:02:04.920257 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0728 21:02:04.949005 160802 cri.go:87] found id: ""
I0728 21:02:04.949044 160802 logs.go:274] 0 containers: []
W0728 21:02:04.949052 160802 logs.go:276] No container was found matching "kube-apiserver"
I0728 21:02:04.949063 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0728 21:02:04.949126 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0728 21:02:04.977644 160802 cri.go:87] found id: ""
I0728 21:02:04.977674 160802 logs.go:274] 0 containers: []
W0728 21:02:04.977683 160802 logs.go:276] No container was found matching "etcd"
I0728 21:02:04.977690 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0728 21:02:04.977755 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0728 21:02:05.004869 160802 cri.go:87] found id: ""
I0728 21:02:05.004900 160802 logs.go:274] 0 containers: []
W0728 21:02:05.004910 160802 logs.go:276] No container was found matching "coredns"
I0728 21:02:05.004919 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0728 21:02:05.004978 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0728 21:02:05.031209 160802 cri.go:87] found id: ""
I0728 21:02:05.031236 160802 logs.go:274] 0 containers: []
W0728 21:02:05.031243 160802 logs.go:276] No container was found matching "kube-scheduler"
I0728 21:02:05.031250 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0728 21:02:05.031297 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0728 21:02:05.058559 160802 cri.go:87] found id: ""
I0728 21:02:05.058587 160802 logs.go:274] 0 containers: []
W0728 21:02:05.058593 160802 logs.go:276] No container was found matching "kube-proxy"
I0728 21:02:05.058600 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0728 21:02:05.058665 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0728 21:02:05.087349 160802 cri.go:87] found id: ""
I0728 21:02:05.087374 160802 logs.go:274] 0 containers: []
W0728 21:02:05.087381 160802 logs.go:276] No container was found matching "kubernetes-dashboard"
I0728 21:02:05.087389 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0728 21:02:05.087446 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0728 21:02:05.115776 160802 cri.go:87] found id: ""
I0728 21:02:05.115801 160802 logs.go:274] 0 containers: []
W0728 21:02:05.115807 160802 logs.go:276] No container was found matching "storage-provisioner"
I0728 21:02:05.115813 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0728 21:02:05.115870 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0728 21:02:05.144255 160802 cri.go:87] found id: ""
I0728 21:02:05.144282 160802 logs.go:274] 0 containers: []
W0728 21:02:05.144290 160802 logs.go:276] No container was found matching "kube-controller-manager"
I0728 21:02:05.144301 160802 logs.go:123] Gathering logs for kubelet ...
I0728 21:02:05.144328 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0728 21:02:05.196560 160802 logs.go:138] Found kubelet problem: Jul 28 21:02:04 kubernetes-upgrade-20220728205630-9812 kubelet[7087]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 21:02:05.242778 160802 logs.go:123] Gathering logs for dmesg ...
I0728 21:02:05.242819 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0728 21:02:05.259370 160802 logs.go:123] Gathering logs for describe nodes ...
I0728 21:02:05.259415 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0728 21:02:05.318270 160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0728 21:02:05.318303 160802 logs.go:123] Gathering logs for containerd ...
I0728 21:02:05.318318 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0728 21:02:05.355993 160802 logs.go:123] Gathering logs for container status ...
I0728 21:02:05.356036 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0728 21:02:05.385546 160802 out.go:309] Setting ErrFile to fd 2...
I0728 21:02:05.385571 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0728 21:02:05.385667 160802 out.go:239] X Problems detected in kubelet:
W0728 21:02:05.385679 160802 out.go:239] Jul 28 21:02:04 kubernetes-upgrade-20220728205630-9812 kubelet[7087]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 21:02:05.385684 160802 out.go:309] Setting ErrFile to fd 2...
I0728 21:02:05.385689 160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 21:02:15.386914 160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 21:02:15.398574 160802 kubeadm.go:630] restartCluster took 4m1.997922421s
W0728 21:02:15.398737 160802 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
I0728 21:02:15.398807 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0728 21:02:16.170735 160802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0728 21:02:16.183498 160802 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0728 21:02:16.193101 160802 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0728 21:02:16.193186 160802 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0728 21:02:16.201876 160802 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0728 21:02:16.201929 160802 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0728 21:04:12.721082 160802 out.go:204] - Generating certificates and keys ...
I0728 21:04:12.724181 160802 out.go:204] - Booting up control plane ...
W0728 21:04:12.726717 160802 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1013-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0728 21:02:16.240575 7625 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1013-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I0728 21:04:12.726772 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0728 21:04:13.465967 160802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0728 21:04:13.478145 160802 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0728 21:04:13.478204 160802 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0728 21:04:13.486471 160802 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0728 21:04:13.486522 160802 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0728 21:06:09.456081 160802 out.go:204] - Generating certificates and keys ...
I0728 21:06:09.460704 160802 out.go:204] - Booting up control plane ...
I0728 21:06:09.463187 160802 kubeadm.go:397] StartCluster complete in 7m56.100147537s
I0728 21:06:09.463242 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0728 21:06:09.463303 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0728 21:06:09.492218 160802 cri.go:87] found id: ""
I0728 21:06:09.492244 160802 logs.go:274] 0 containers: []
W0728 21:06:09.492250 160802 logs.go:276] No container was found matching "kube-apiserver"
I0728 21:06:09.492257 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0728 21:06:09.492327 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0728 21:06:09.519747 160802 cri.go:87] found id: ""
I0728 21:06:09.519773 160802 logs.go:274] 0 containers: []
W0728 21:06:09.519779 160802 logs.go:276] No container was found matching "etcd"
I0728 21:06:09.519786 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0728 21:06:09.519843 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0728 21:06:09.546296 160802 cri.go:87] found id: ""
I0728 21:06:09.546331 160802 logs.go:274] 0 containers: []
W0728 21:06:09.546340 160802 logs.go:276] No container was found matching "coredns"
I0728 21:06:09.546348 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0728 21:06:09.546505 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0728 21:06:09.574600 160802 cri.go:87] found id: ""
I0728 21:06:09.574627 160802 logs.go:274] 0 containers: []
W0728 21:06:09.574634 160802 logs.go:276] No container was found matching "kube-scheduler"
I0728 21:06:09.574640 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0728 21:06:09.574701 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0728 21:06:09.604664 160802 cri.go:87] found id: ""
I0728 21:06:09.604694 160802 logs.go:274] 0 containers: []
W0728 21:06:09.604700 160802 logs.go:276] No container was found matching "kube-proxy"
I0728 21:06:09.604708 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0728 21:06:09.604798 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0728 21:06:09.634288 160802 cri.go:87] found id: ""
I0728 21:06:09.634320 160802 logs.go:274] 0 containers: []
W0728 21:06:09.634329 160802 logs.go:276] No container was found matching "kubernetes-dashboard"
I0728 21:06:09.634339 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0728 21:06:09.634400 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0728 21:06:09.666085 160802 cri.go:87] found id: ""
I0728 21:06:09.666116 160802 logs.go:274] 0 containers: []
W0728 21:06:09.666123 160802 logs.go:276] No container was found matching "storage-provisioner"
I0728 21:06:09.666130 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0728 21:06:09.666186 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0728 21:06:09.697616 160802 cri.go:87] found id: ""
I0728 21:06:09.697646 160802 logs.go:274] 0 containers: []
W0728 21:06:09.697656 160802 logs.go:276] No container was found matching "kube-controller-manager"
I0728 21:06:09.697671 160802 logs.go:123] Gathering logs for dmesg ...
I0728 21:06:09.697688 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0728 21:06:09.715231 160802 logs.go:123] Gathering logs for describe nodes ...
I0728 21:06:09.715278 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0728 21:06:09.774303 160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0728 21:06:09.774333 160802 logs.go:123] Gathering logs for containerd ...
I0728 21:06:09.774345 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0728 21:06:09.822586 160802 logs.go:123] Gathering logs for container status ...
I0728 21:06:09.822641 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0728 21:06:09.856873 160802 logs.go:123] Gathering logs for kubelet ...
I0728 21:06:09.856900 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0728 21:06:09.905506 160802 logs.go:138] Found kubelet problem: Jul 28 21:06:09 kubernetes-upgrade-20220728205630-9812 kubelet[11608]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
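The kubelet line above is the actual root cause of this failure: the `--cni-conf-dir` flag was removed from the kubelet in v1.24 along with the rest of the dockershim-era CNI flags, so flags carried over from the v1.16 setup no longer parse. As an illustrative sketch (the sample line below is copied from this log; the `sed` pattern is an assumption, not part of minikube), the rejected flag name can be pulled out of such a journal line like so:

```shell
# Extract the rejected flag name from a kubelet journal line.
# $line is a sample copied verbatim from the log above.
line='Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir'
flag=$(printf '%s\n' "$line" | sed -n 's/.*unknown flag: \(--[a-z-]*\).*/\1/p')
echo "$flag"
```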
W0728 21:06:09.966501 160802 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1013-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0728 21:04:13.523562 9731 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1013-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0728 21:06:09.966572 160802 out.go:239] *
W0728 21:06:09.966810 160802 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1013-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0728 21:04:13.523562 9731 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1013-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0728 21:06:09.966848 160802 out.go:239] *
W0728 21:06:09.967728 160802 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0728 21:06:09.971858 160802 out.go:177] X Problems detected in kubelet:
I0728 21:06:09.973883 160802 out.go:177] Jul 28 21:06:09 kubernetes-upgrade-20220728205630-9812 kubelet[11608]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 21:06:09.978588 160802 out.go:177]
W0728 21:06:09.981692 160802 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1013-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0728 21:04:13.523562 9731 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1013-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1013-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtime's CLI.
Here is one example of how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0728 21:04:13.523562 9731 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1013-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0728 21:06:09.981884 160802 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0728 21:06:09.981956 160802 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
* Related issue: https://github.com/kubernetes/minikube/issues/4172
I0728 21:06:09.986376 160802 out.go:177]
** /stderr **
version_upgrade_test.go:252: failed to upgrade with newest k8s version. args: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220728205630-9812 --memory=2200 --kubernetes-version=v1.24.3 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd : exit status 109
version_upgrade_test.go:255: (dbg) Run: kubectl --context kubernetes-upgrade-20220728205630-9812 version --output=json
version_upgrade_test.go:255: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-20220728205630-9812 version --output=json: exit status 1 (58.834661ms)
-- stdout --
{
"clientVersion": {
"major": "1",
"minor": "24",
"gitVersion": "v1.24.3",
"gitCommit": "aef86a93758dc3cb2c658dd9657ab4ad4afc21cb",
"gitTreeState": "clean",
"buildDate": "2022-07-13T14:30:46Z",
"goVersion": "go1.18.3",
"compiler": "gc",
"platform": "linux/amd64"
},
"kustomizeVersion": "v4.5.4"
}
-- /stdout --
** stderr **
The connection to the server 192.168.67.2:8443 was refused - did you specify the right host or port?
** /stderr **
version_upgrade_test.go:257: error running kubectl: exit status 1
panic.go:482: *** TestKubernetesUpgrade FAILED at 2022-07-28 21:06:10.227428285 +0000 UTC m=+2357.292310965
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect kubernetes-upgrade-20220728205630-9812
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-20220728205630-9812:
-- stdout --
[
{
"Id": "157d91e6166012921e611704b4a2055b61d55a4321adf28c987469c33622c1b6",
"Created": "2022-07-28T20:56:44.053513528Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 161192,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-07-28T20:57:23.967962968Z",
"FinishedAt": "2022-07-28T20:57:22.081850941Z"
},
"Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
"ResolvConfPath": "/var/lib/docker/containers/157d91e6166012921e611704b4a2055b61d55a4321adf28c987469c33622c1b6/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/157d91e6166012921e611704b4a2055b61d55a4321adf28c987469c33622c1b6/hostname",
"HostsPath": "/var/lib/docker/containers/157d91e6166012921e611704b4a2055b61d55a4321adf28c987469c33622c1b6/hosts",
"LogPath": "/var/lib/docker/containers/157d91e6166012921e611704b4a2055b61d55a4321adf28c987469c33622c1b6/157d91e6166012921e611704b4a2055b61d55a4321adf28c987469c33622c1b6-json.log",
"Name": "/kubernetes-upgrade-20220728205630-9812",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"kubernetes-upgrade-20220728205630-9812:/var",
"/lib/modules:/lib/modules:ro"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "kubernetes-upgrade-20220728205630-9812",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/98cd9b4af8297a0f79b7836844d484361b0e513a818a7af9acc180c4cff6a59f-init/diff:/var/lib/docker/overlay2/159b55a9ed0c6f628a057cdb04dda02bba30a3b641518455957c4aad71210e5b/diff:/var/lib/docker/overlay2/10f339ea6c6d2bc20e1c3984c10e314bdfacbf16c4f1fc81508a8af53618e0c2/diff:/var/lib/docker/overlay2/f70c8501f8e2ab7eb8cf3713b8965df8ff0eabb54c03470a2ca63b07e9f8aa54/diff:/var/lib/docker/overlay2/c612b2534e2fc8952f8d55d6769698d0ad07b4f70868569fe73c07e709eb41c4/diff:/var/lib/docker/overlay2/b6082008a01e842766036ffbc69caf78f0bf4a848cca7f47ab699d89da8d1da0/diff:/var/lib/docker/overlay2/d75e86c9871d888af33c32588e829032e7d8df43d915295856b1bd632a8aec40/diff:/var/lib/docker/overlay2/c8146bc91d30e444bdb037c8984a79eba689a0e9f4c6ee8a1a2f087ead11cdee/diff:/var/lib/docker/overlay2/43aff643ab52dd0f1901cc30e46fade6289c38619ec98e77b3c0b9ae3b5ceff6/diff:/var/lib/docker/overlay2/4e0dd980aab46effffa7b2d0a19ff6a1d9de94c97cca5150e7245a47dd82d395/diff:/var/lib/docker/overlay2/a54660
4a6462e894bf570401a57ff4d82983194620ba59e926b7aae262e19e3b/diff:/var/lib/docker/overlay2/94097ff7af073076335b40eb2a01a53ac0fc19c248baf8daa67146b23da2fd7d/diff:/var/lib/docker/overlay2/2e81cf4170d8d47655e7f008e7f79c639ca56a4fa6b48b83eada8753144434f0/diff:/var/lib/docker/overlay2/ad12959134154f9796289bec856cc02b54fc2c9b3de5bfa4626bde685b62714a/diff:/var/lib/docker/overlay2/4d208f59b6ce5776ae3295e11d01acfa1908bbdff6cb9fda882ea8995aee2cb0/diff:/var/lib/docker/overlay2/f9e0539a853b02e93c2741a252361d5b6cc4ecc7e2098a1f9f6f8f06ef8af675/diff:/var/lib/docker/overlay2/9aafc677aea247a7aa7f21124a1d04e79e334bba6950604f0d9c56330f782239/diff:/var/lib/docker/overlay2/0d5358399b9dcb842d3e9f481695ffca49e2ead49bbd6f11c30e71a845833876/diff:/var/lib/docker/overlay2/6443a29a568b98bbf23e8cbe82a92ae50c3e69955de693c5ec049c84c83c2578/diff:/var/lib/docker/overlay2/930c7197f21b625ccb4c9330154d41beedf2f4dbce826a605e0b175dc9db6fb4/diff:/var/lib/docker/overlay2/e467ccfb37dceaab87301a1bc1fc8424d242e4fd901cde56b24c07668d1d47d1/diff:/var/lib/d
ocker/overlay2/9d64a9063c318a598d6f650409543bb20699d834b19aca837a0fd2e4785de7a7/diff:/var/lib/docker/overlay2/730748e1888d7eda2cf74610d227e0d7a5e969a95d87795513ae2b65d4bf0d37/diff:/var/lib/docker/overlay2/d15656d583a4f3a64cd65d8b266888d55da8b978e95cd0dedb81984b17547a8e/diff:/var/lib/docker/overlay2/1075d687c8048b9e07d64f9229e6c6fe189eb1d89e59fbc320f6be7f29f3dcf3/diff:/var/lib/docker/overlay2/70d6a8817e1e919d589fd69f67161bb4dac16836849b3b35b26cf48214f62cf6/diff:/var/lib/docker/overlay2/8e8e13f68b04eaae4a9c67194a27c687ed31816a5aa7bbe43aefb7885ab49cfd/diff:/var/lib/docker/overlay2/a5d29889159bc71a7e53f3275846c3a205f4afbf8707facf2cf88163af181ea6/diff:/var/lib/docker/overlay2/18cb8da85b40492f06576cb149164681c9b88cc6d83a7b73074f93afd2d326d0/diff:/var/lib/docker/overlay2/cd1ceb3894d2dc694ca5f4d57fb937f12a471c4861e72eb758c8d99ae15ec8e7/diff:/var/lib/docker/overlay2/e192c90b77e5017fb4d32a36c6118403b5ce78981718b9ae597795a57dd8967e/diff:/var/lib/docker/overlay2/cb277b4bd13e414771896c6e520c18a1de8c252ad1c2dc11f05a8b57018
bdf08/diff:/var/lib/docker/overlay2/e8f50e7d98e92ecd9a2465d95ce41953a7acd8958f4a599e837bb9bbbfaa72dc/diff:/var/lib/docker/overlay2/7f8089a9db64a7a0b1637dd394b4f2e4b9886ab7478b5972c3d3b8addec08c69/diff:/var/lib/docker/overlay2/6e552fc578751df4db559a20753abdb8d0bb057b992f1c6034a84a0a63e169ae/diff:/var/lib/docker/overlay2/0634299a7052fc637709266585f1982b3bf26fcef8a0fbb11fa9b1d17b578e35/diff:/var/lib/docker/overlay2/07b5dc86d77874519d1e86517fc1c8cdc6809da1a5ceaa0283ed6bc573ecc0ba/diff:/var/lib/docker/overlay2/06f82e7047fa2ecb3d75b421c09c07633f4324121e1f8e4158cf97e9172f97a9/diff:/var/lib/docker/overlay2/33882ff0de530162078f03ec586455ae28e3d9e957265ccf6de389ab70269be4/diff:/var/lib/docker/overlay2/13a232f6a032f7a2122ecb4c4954c1d7427d99358c129109496d92edac19aa4d/diff:/var/lib/docker/overlay2/3f28515f67d2fb23a544c48310684f66ccd4a2d4894b75858e9750adb53d7d1f/diff:/var/lib/docker/overlay2/a58c936c6b19de4261612f61347279308363f3918c1b7585e4c8425e69c6e89f/diff",
"MergedDir": "/var/lib/docker/overlay2/98cd9b4af8297a0f79b7836844d484361b0e513a818a7af9acc180c4cff6a59f/merged",
"UpperDir": "/var/lib/docker/overlay2/98cd9b4af8297a0f79b7836844d484361b0e513a818a7af9acc180c4cff6a59f/diff",
"WorkDir": "/var/lib/docker/overlay2/98cd9b4af8297a0f79b7836844d484361b0e513a818a7af9acc180c4cff6a59f/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "kubernetes-upgrade-20220728205630-9812",
"Source": "/var/lib/docker/volumes/kubernetes-upgrade-20220728205630-9812/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "kubernetes-upgrade-20220728205630-9812",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20220728205630-9812",
"name.minikube.sigs.k8s.io": "kubernetes-upgrade-20220728205630-9812",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "638fa93f7d0ca5187a1fc034140628cf07afb9be6a1c298481d12962389ccb3f",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49337"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49336"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49333"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49335"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49334"
}
]
},
"SandboxKey": "/var/run/docker/netns/638fa93f7d0c",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"kubernetes-upgrade-20220728205630-9812": {
"IPAMConfig": {
"IPv4Address": "192.168.67.2"
},
"Links": null,
"Aliases": [
"157d91e61660",
"kubernetes-upgrade-20220728205630-9812"
],
"NetworkID": "c898e4ca6805eab63bab8736fbb6bce03c0f9e3a222a941d6daa6694d9e2e9ad",
"EndpointID": "30990745a4063b9672230f312ad87259d4a2e5fb4469d0935ae4ece8090a27a5",
"Gateway": "192.168.67.1",
"IPAddress": "192.168.67.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:43:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-20220728205630-9812 -n kubernetes-upgrade-20220728205630-9812
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-20220728205630-9812 -n kubernetes-upgrade-20220728205630-9812: exit status 2 (499.711235ms)
-- stdout --
Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p kubernetes-upgrade-20220728205630-9812 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-20220728205630-9812 logs -n 25: (1.087179183s)
helpers_test.go:252: TestKubernetesUpgrade logs:
-- stdout --
*
* ==> Audit <==
* |---------|---------------------------------------------------|------------------------------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------------------|------------------------------------------------|---------|---------|---------------------|---------------------|
| profile | list --output json | minikube | jenkins | v1.26.0 | 28 Jul 22 20:58 UTC | 28 Jul 22 20:58 UTC |
| delete | -p pause-20220728205731-9812 | pause-20220728205731-9812 | jenkins | v1.26.0 | 28 Jul 22 20:58 UTC | 28 Jul 22 20:59 UTC |
| start | -p | force-systemd-flag-20220728205900-9812 | jenkins | v1.26.0 | 28 Jul 22 20:59 UTC | 28 Jul 22 20:59 UTC |
| | force-systemd-flag-20220728205900-9812 | | | | | |
| | --memory=2048 --force-systemd | | | | | |
| | --alsologtostderr -v=5 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | cert-options-20220728205835-9812 | cert-options-20220728205835-9812 | jenkins | v1.26.0 | 28 Jul 22 20:59 UTC | 28 Jul 22 20:59 UTC |
| | ssh openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p | cert-options-20220728205835-9812 | jenkins | v1.26.0 | 28 Jul 22 20:59 UTC | 28 Jul 22 20:59 UTC |
| | cert-options-20220728205835-9812 | | | | | |
| | -- sudo cat | | | | | |
| | /etc/kubernetes/admin.conf | | | | | |
| delete | -p | cert-options-20220728205835-9812 | jenkins | v1.26.0 | 28 Jul 22 20:59 UTC | 28 Jul 22 20:59 UTC |
| | cert-options-20220728205835-9812 | | | | | |
| start | -p | old-k8s-version-20220728205919-9812 | jenkins | v1.26.0 | 28 Jul 22 20:59 UTC | 28 Jul 22 21:01 UTC |
| | old-k8s-version-20220728205919-9812 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | --wait=true --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| ssh | force-systemd-flag-20220728205900-9812 | force-systemd-flag-20220728205900-9812 | jenkins | v1.26.0 | 28 Jul 22 20:59 UTC | 28 Jul 22 20:59 UTC |
| | ssh cat /etc/containerd/config.toml | | | | | |
| delete | -p | force-systemd-flag-20220728205900-9812 | jenkins | v1.26.0 | 28 Jul 22 20:59 UTC | 28 Jul 22 20:59 UTC |
| | force-systemd-flag-20220728205900-9812 | | | | | |
| start | -p | no-preload-20220728205940-9812 | jenkins | v1.26.0 | 28 Jul 22 20:59 UTC | 28 Jul 22 21:00 UTC |
| | no-preload-20220728205940-9812 | | | | | |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.3 | | | | | |
| addons | enable metrics-server -p | no-preload-20220728205940-9812 | jenkins | v1.26.0 | 28 Jul 22 21:00 UTC | 28 Jul 22 21:00 UTC |
| | no-preload-20220728205940-9812 | | | | | |
| | --images=MetricsServer=k8s.gcr.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p | no-preload-20220728205940-9812 | jenkins | v1.26.0 | 28 Jul 22 21:00 UTC | 28 Jul 22 21:01 UTC |
| | no-preload-20220728205940-9812 | | | | | |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p | no-preload-20220728205940-9812 | jenkins | v1.26.0 | 28 Jul 22 21:01 UTC | 28 Jul 22 21:01 UTC |
| | no-preload-20220728205940-9812 | | | | | |
| | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 | | | | | |
| start | -p | no-preload-20220728205940-9812 | jenkins | v1.26.0 | 28 Jul 22 21:01 UTC | |
| | no-preload-20220728205940-9812 | | | | | |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.3 | | | | | |
| addons | enable metrics-server -p | old-k8s-version-20220728205919-9812 | jenkins | v1.26.0 | 28 Jul 22 21:01 UTC | 28 Jul 22 21:01 UTC |
| | old-k8s-version-20220728205919-9812 | | | | | |
| | --images=MetricsServer=k8s.gcr.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p | old-k8s-version-20220728205919-9812 | jenkins | v1.26.0 | 28 Jul 22 21:01 UTC | 28 Jul 22 21:01 UTC |
| | old-k8s-version-20220728205919-9812 | | | | | |
| | --alsologtostderr -v=3 | | | | | |
| start | -p | cert-expiration-20220728205827-9812 | jenkins | v1.26.0 | 28 Jul 22 21:01 UTC | 28 Jul 22 21:02 UTC |
| | cert-expiration-20220728205827-9812 | | | | | |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| addons | enable dashboard -p | old-k8s-version-20220728205919-9812 | jenkins | v1.26.0 | 28 Jul 22 21:01 UTC | 28 Jul 22 21:01 UTC |
| | old-k8s-version-20220728205919-9812 | | | | | |
| | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 | | | | | |
| start | -p | old-k8s-version-20220728205919-9812 | jenkins | v1.26.0 | 28 Jul 22 21:01 UTC | |
| | old-k8s-version-20220728205919-9812 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | --wait=true --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| delete | -p | cert-expiration-20220728205827-9812 | jenkins | v1.26.0 | 28 Jul 22 21:02 UTC | 28 Jul 22 21:02 UTC |
| | cert-expiration-20220728205827-9812 | | | | | |
| start | -p | default-k8s-different-port-20220728210213-9812 | jenkins | v1.26.0 | 28 Jul 22 21:02 UTC | 28 Jul 22 21:03 UTC |
| | default-k8s-different-port-20220728210213-9812 | | | | | |
| | --memory=2200 --alsologtostderr --wait=true | | | | | |
| | --apiserver-port=8444 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.3 | | | | | |
| addons | enable metrics-server -p | default-k8s-different-port-20220728210213-9812 | jenkins | v1.26.0 | 28 Jul 22 21:03 UTC | 28 Jul 22 21:03 UTC |
| | default-k8s-different-port-20220728210213-9812 | | | | | |
| | --images=MetricsServer=k8s.gcr.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p | default-k8s-different-port-20220728210213-9812 | jenkins | v1.26.0 | 28 Jul 22 21:03 UTC | 28 Jul 22 21:03 UTC |
| | default-k8s-different-port-20220728210213-9812 | | | | | |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p | default-k8s-different-port-20220728210213-9812 | jenkins | v1.26.0 | 28 Jul 22 21:03 UTC | 28 Jul 22 21:03 UTC |
| | default-k8s-different-port-20220728210213-9812 | | | | | |
| | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 | | | | | |
| start | -p | default-k8s-different-port-20220728210213-9812 | jenkins | v1.26.0 | 28 Jul 22 21:03 UTC | |
| | default-k8s-different-port-20220728210213-9812 | | | | | |
| | --memory=2200 --alsologtostderr --wait=true | | | | | |
| | --apiserver-port=8444 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.3 | | | | | |
|---------|---------------------------------------------------|------------------------------------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/07/28 21:03:42
Running on machine: ubuntu-20-agent-8
Binary: Built with gc go1.18.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0728 21:03:42.611768 212382 out.go:296] Setting OutFile to fd 1 ...
I0728 21:03:42.611935 212382 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 21:03:42.611945 212382 out.go:309] Setting ErrFile to fd 2...
I0728 21:03:42.611957 212382 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 21:03:42.612121 212382 root.go:332] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
I0728 21:03:42.612829 212382 out.go:303] Setting JSON to false
I0728 21:03:42.614911 212382 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2773,"bootTime":1659039450,"procs":949,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1013-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0728 21:03:42.615000 212382 start.go:125] virtualization: kvm guest
I0728 21:03:42.617804 212382 out.go:177] * [default-k8s-different-port-20220728210213-9812] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
I0728 21:03:42.619408 212382 out.go:177] - MINIKUBE_LOCATION=14555
I0728 21:03:42.619334 212382 notify.go:193] Checking for updates...
I0728 21:03:42.622212 212382 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0728 21:03:42.624137 212382 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
I0728 21:03:42.625777 212382 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
I0728 21:03:42.627238 212382 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0728 21:03:42.629353 212382 config.go:178] Loaded profile config "default-k8s-different-port-20220728210213-9812": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
I0728 21:03:42.629909 212382 driver.go:365] Setting default libvirt URI to qemu:///system
I0728 21:03:42.681814 212382 docker.go:137] docker version: linux-20.10.17
I0728 21:03:42.681924 212382 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0728 21:03:42.801064 212382 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:57 SystemTime:2022-07-28 21:03:42.714639784 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662443520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0728 21:03:42.801197 212382 docker.go:254] overlay module found
I0728 21:03:42.803527 212382 out.go:177] * Using the docker driver based on existing profile
I0728 21:03:39.133295 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:03:41.133967 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:03:42.804924 212382 start.go:284] selected driver: docker
I0728 21:03:42.804944 212382 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220728210213-9812 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220728210213-9812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0728 21:03:42.805125 212382 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0728 21:03:42.806371 212382 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0728 21:03:42.927884 212382 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:57 SystemTime:2022-07-28 21:03:42.841781106 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662443520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0728 21:03:42.928205 212382 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0728 21:03:42.928263 212382 cni.go:95] Creating CNI manager for ""
I0728 21:03:42.928280 212382 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I0728 21:03:42.928309 212382 start_flags.go:310] config:
{Name:default-k8s-different-port-20220728210213-9812 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220728210213-9812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0728 21:03:42.930940 212382 out.go:177] * Starting control plane node default-k8s-different-port-20220728210213-9812 in cluster default-k8s-different-port-20220728210213-9812
I0728 21:03:42.932288 212382 cache.go:120] Beginning downloading kic base image for docker with containerd
I0728 21:03:42.933583 212382 out.go:177] * Pulling base image ...
I0728 21:03:42.934816 212382 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime containerd
I0728 21:03:42.934907 212382 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-containerd-overlay2-amd64.tar.lz4
I0728 21:03:42.934927 212382 cache.go:57] Caching tarball of preloaded images
I0728 21:03:42.934935 212382 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
I0728 21:03:42.935187 212382 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0728 21:03:42.935203 212382 cache.go:60] Finished verifying existence of preloaded tar for v1.24.3 on containerd
I0728 21:03:42.935390 212382 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/config.json ...
I0728 21:03:42.974508 212382 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
I0728 21:03:42.974540 212382 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
I0728 21:03:42.974555 212382 cache.go:208] Successfully downloaded all kic artifacts
I0728 21:03:42.974615 212382 start.go:370] acquiring machines lock for default-k8s-different-port-20220728210213-9812: {Name:mkab6f862bec008fcda0a5dd067bb9f92e1c3d5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0728 21:03:42.974723 212382 start.go:374] acquired machines lock for "default-k8s-different-port-20220728210213-9812" in 83.295µs
I0728 21:03:42.974746 212382 start.go:95] Skipping create...Using existing machine configuration
I0728 21:03:42.974756 212382 fix.go:55] fixHost starting:
I0728 21:03:42.975078 212382 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220728210213-9812 --format={{.State.Status}}
I0728 21:03:43.011861 212382 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220728210213-9812: state=Stopped err=<nil>
W0728 21:03:43.011896 212382 fix.go:129] unexpected machine state, will restart: <nil>
I0728 21:03:43.014312 212382 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220728210213-9812" ...
I0728 21:03:42.335465 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:03:44.834531 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:03:43.015733 212382 cli_runner.go:164] Run: docker start default-k8s-different-port-20220728210213-9812
I0728 21:03:43.449285 212382 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220728210213-9812 --format={{.State.Status}}
I0728 21:03:43.490379 212382 kic.go:415] container "default-k8s-different-port-20220728210213-9812" state is running.
I0728 21:03:43.490927 212382 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220728210213-9812
I0728 21:03:43.529100 212382 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/config.json ...
I0728 21:03:43.529436 212382 machine.go:88] provisioning docker machine ...
I0728 21:03:43.529479 212382 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220728210213-9812"
I0728 21:03:43.529539 212382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728210213-9812
I0728 21:03:43.570669 212382 main.go:134] libmachine: Using SSH client type: native
I0728 21:03:43.570942 212382 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7daec0] 0x7ddf20 <nil> [] 0s} 127.0.0.1 49392 <nil> <nil>}
I0728 21:03:43.570973 212382 main.go:134] libmachine: About to run SSH command:
sudo hostname default-k8s-different-port-20220728210213-9812 && echo "default-k8s-different-port-20220728210213-9812" | sudo tee /etc/hostname
I0728 21:03:43.571703 212382 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51926->127.0.0.1:49392: read: connection reset by peer
I0728 21:03:46.705061 212382 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220728210213-9812
I0728 21:03:46.705153 212382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728210213-9812
I0728 21:03:46.743391 212382 main.go:134] libmachine: Using SSH client type: native
I0728 21:03:46.743545 212382 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7daec0] 0x7ddf20 <nil> [] 0s} 127.0.0.1 49392 <nil> <nil>}
I0728 21:03:46.743567 212382 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\sdefault-k8s-different-port-20220728210213-9812' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220728210213-9812/g' /etc/hosts;
else
echo '127.0.1.1 default-k8s-different-port-20220728210213-9812' | sudo tee -a /etc/hosts;
fi
fi
I0728 21:03:46.867081 212382 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0728 21:03:46.867123 212382 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube}
I0728 21:03:46.867147 212382 ubuntu.go:177] setting up certificates
I0728 21:03:46.867157 212382 provision.go:83] configureAuth start
I0728 21:03:46.867212 212382 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220728210213-9812
I0728 21:03:46.903997 212382 provision.go:138] copyHostCerts
I0728 21:03:46.904072 212382 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem, removing ...
I0728 21:03:46.904085 212382 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
I0728 21:03:46.904170 212382 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem (1078 bytes)
I0728 21:03:46.904301 212382 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem, removing ...
I0728 21:03:46.904324 212382 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
I0728 21:03:46.904359 212382 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem (1123 bytes)
I0728 21:03:46.904452 212382 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem, removing ...
I0728 21:03:46.904466 212382 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
I0728 21:03:46.904504 212382 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem (1679 bytes)
I0728 21:03:46.904631 212382 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220728210213-9812 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220728210213-9812]
I0728 21:03:47.010831 212382 provision.go:172] copyRemoteCerts
I0728 21:03:47.010939 212382 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0728 21:03:47.010989 212382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728210213-9812
I0728 21:03:47.049207 212382 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728210213-9812/id_rsa Username:docker}
I0728 21:03:47.143798 212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0728 21:03:47.163910 212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
I0728 21:03:47.184336 212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0728 21:03:47.206560 212382 provision.go:86] duration metric: configureAuth took 339.388807ms
I0728 21:03:47.206588 212382 ubuntu.go:193] setting minikube options for container-runtime
I0728 21:03:47.206755 212382 config.go:178] Loaded profile config "default-k8s-different-port-20220728210213-9812": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
I0728 21:03:47.206766 212382 machine.go:91] provisioned docker machine in 3.67730656s
I0728 21:03:47.206773 212382 start.go:307] post-start starting for "default-k8s-different-port-20220728210213-9812" (driver="docker")
I0728 21:03:47.206780 212382 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0728 21:03:47.206816 212382 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0728 21:03:47.206855 212382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728210213-9812
I0728 21:03:47.245535 212382 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728210213-9812/id_rsa Username:docker}
I0728 21:03:47.336155 212382 ssh_runner.go:195] Run: cat /etc/os-release
I0728 21:03:47.339133 212382 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0728 21:03:47.339159 212382 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0728 21:03:47.339168 212382 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0728 21:03:47.339173 212382 info.go:137] Remote host: Ubuntu 20.04.4 LTS
I0728 21:03:47.339182 212382 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/addons for local assets ...
I0728 21:03:47.339233 212382 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files for local assets ...
I0728 21:03:47.339313 212382 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/98122.pem -> 98122.pem in /etc/ssl/certs
I0728 21:03:47.339405 212382 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0728 21:03:47.347120 212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/98122.pem --> /etc/ssl/certs/98122.pem (1708 bytes)
I0728 21:03:47.367679 212382 start.go:310] post-start completed in 160.892278ms
I0728 21:03:47.367781 212382 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0728 21:03:47.367819 212382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728210213-9812
I0728 21:03:47.406159 212382 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728210213-9812/id_rsa Username:docker}
I0728 21:03:47.491717 212382 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0728 21:03:47.496208 212382 fix.go:57] fixHost completed within 4.521444392s
I0728 21:03:47.496245 212382 start.go:82] releasing machines lock for "default-k8s-different-port-20220728210213-9812", held for 4.521508456s
I0728 21:03:47.496338 212382 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220728210213-9812
I0728 21:03:47.533202 212382 ssh_runner.go:195] Run: systemctl --version
I0728 21:03:47.533258 212382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728210213-9812
I0728 21:03:47.533261 212382 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0728 21:03:47.533318 212382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728210213-9812
I0728 21:03:47.573312 212382 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728210213-9812/id_rsa Username:docker}
I0728 21:03:47.573528 212382 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728210213-9812/id_rsa Username:docker}
I0728 21:03:43.633079 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:03:46.133073 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:03:46.835006 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:03:48.835071 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:03:47.659568 212382 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0728 21:03:47.685550 212382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0728 21:03:47.696446 212382 docker.go:188] disabling docker service ...
I0728 21:03:47.696504 212382 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0728 21:03:47.707699 212382 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0728 21:03:47.718214 212382 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0728 21:03:47.800400 212382 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0728 21:03:47.892734 212382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0728 21:03:47.903649 212382 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0728 21:03:47.918235 212382 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
I0728 21:03:47.927669 212382 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
I0728 21:03:47.936838 212382 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
I0728 21:03:47.946170 212382 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
I0728 21:03:47.955346 212382 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
I0728 21:03:47.965255 212382 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %!s(MISSING) "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
I0728 21:03:47.980845 212382 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0728 21:03:47.987996 212382 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0728 21:03:47.995582 212382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0728 21:03:48.074160 212382 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0728 21:03:48.153721 212382 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
I0728 21:03:48.153795 212382 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0728 21:03:48.158151 212382 start.go:471] Will wait 60s for crictl version
I0728 21:03:48.158219 212382 ssh_runner.go:195] Run: sudo crictl version
I0728 21:03:48.189281 212382 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
stdout:
stderr:
time="2022-07-28T21:03:48Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
I0728 21:03:48.133790 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:03:50.632719 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:03:52.633021 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:03:51.333960 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:03:53.335057 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:03:54.633129 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:03:57.133148 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:03:59.236557 212382 ssh_runner.go:195] Run: sudo crictl version
I0728 21:03:59.262470 212382 start.go:480] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.6.6
RuntimeApiVersion: v1alpha2
I0728 21:03:59.262532 212382 ssh_runner.go:195] Run: containerd --version
I0728 21:03:59.295607 212382 ssh_runner.go:195] Run: containerd --version
I0728 21:03:59.330442 212382 out.go:177] * Preparing Kubernetes v1.24.3 on containerd 1.6.6 ...
I0728 21:03:55.835459 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:03:58.334671 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:00.335213 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:03:59.331883 212382 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220728210213-9812 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0728 21:03:59.368865 212382 ssh_runner.go:195] Run: grep 192.168.94.1 host.minikube.internal$ /etc/hosts
I0728 21:03:59.372761 212382 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0728 21:03:59.384309 212382 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime containerd
I0728 21:03:59.384383 212382 ssh_runner.go:195] Run: sudo crictl images --output json
I0728 21:03:59.409692 212382 containerd.go:547] all images are preloaded for containerd runtime.
I0728 21:03:59.409717 212382 containerd.go:461] Images already preloaded, skipping extraction
I0728 21:03:59.409759 212382 ssh_runner.go:195] Run: sudo crictl images --output json
I0728 21:03:59.436212 212382 containerd.go:547] all images are preloaded for containerd runtime.
I0728 21:03:59.436239 212382 cache_images.go:84] Images are preloaded, skipping loading
I0728 21:03:59.436284 212382 ssh_runner.go:195] Run: sudo crictl info
I0728 21:03:59.462646 212382 cni.go:95] Creating CNI manager for ""
I0728 21:03:59.462670 212382 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I0728 21:03:59.462683 212382 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0728 21:03:59.462696 212382 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220728210213-9812 NodeName:default-k8s-different-port-20220728210213-9812 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0728 21:03:59.462839 212382 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.94.2
bindPort: 8444
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "default-k8s-different-port-20220728210213-9812"
kubeletExtraArgs:
node-ip: 192.168.94.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8444
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0728 21:03:59.462995 212382 kubeadm.go:961] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220728210213-9812 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220728210213-9812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
I0728 21:03:59.463055 212382 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
I0728 21:03:59.471156 212382 binaries.go:44] Found k8s binaries, skipping transfer
I0728 21:03:59.471240 212382 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0728 21:03:59.479407 212382 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (539 bytes)
I0728 21:03:59.495697 212382 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0728 21:03:59.510436 212382 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
I0728 21:03:59.526781 212382 ssh_runner.go:195] Run: grep 192.168.94.2 control-plane.minikube.internal$ /etc/hosts
I0728 21:03:59.530245 212382 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0728 21:03:59.541280 212382 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812 for IP: 192.168.94.2
I0728 21:03:59.541427 212382 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key
I0728 21:03:59.541480 212382 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key
I0728 21:03:59.541575 212382 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/client.key
I0728 21:03:59.541651 212382 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/apiserver.key.ad8e880a
I0728 21:03:59.541754 212382 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/proxy-client.key
I0728 21:03:59.541911 212382 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/9812.pem (1338 bytes)
W0728 21:03:59.541952 212382 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/9812_empty.pem, impossibly tiny 0 bytes
I0728 21:03:59.541968 212382 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem (1679 bytes)
I0728 21:03:59.542007 212382 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem (1078 bytes)
I0728 21:03:59.542043 212382 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem (1123 bytes)
I0728 21:03:59.542078 212382 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem (1679 bytes)
I0728 21:03:59.542137 212382 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/98122.pem (1708 bytes)
I0728 21:03:59.542961 212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0728 21:03:59.563286 212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0728 21:03:59.583315 212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0728 21:03:59.602945 212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0728 21:03:59.623585 212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0728 21:03:59.643769 212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0728 21:03:59.662657 212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0728 21:03:59.682322 212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0728 21:03:59.702284 212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0728 21:03:59.723244 212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/9812.pem --> /usr/share/ca-certificates/9812.pem (1338 bytes)
I0728 21:03:59.743668 212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/98122.pem --> /usr/share/ca-certificates/98122.pem (1708 bytes)
I0728 21:03:59.763724 212382 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0728 21:03:59.777895 212382 ssh_runner.go:195] Run: openssl version
I0728 21:03:59.783066 212382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0728 21:03:59.791136 212382 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0728 21:03:59.794556 212382 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 28 20:27 /usr/share/ca-certificates/minikubeCA.pem
I0728 21:03:59.794611 212382 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0728 21:03:59.799617 212382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0728 21:03:59.807298 212382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9812.pem && ln -fs /usr/share/ca-certificates/9812.pem /etc/ssl/certs/9812.pem"
I0728 21:03:59.815443 212382 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9812.pem
I0728 21:03:59.818877 212382 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 28 20:32 /usr/share/ca-certificates/9812.pem
I0728 21:03:59.818957 212382 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9812.pem
I0728 21:03:59.824232 212382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9812.pem /etc/ssl/certs/51391683.0"
I0728 21:03:59.832100 212382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98122.pem && ln -fs /usr/share/ca-certificates/98122.pem /etc/ssl/certs/98122.pem"
I0728 21:03:59.841348 212382 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98122.pem
I0728 21:03:59.845050 212382 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 28 20:32 /usr/share/ca-certificates/98122.pem
I0728 21:03:59.845122 212382 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98122.pem
I0728 21:03:59.851083 212382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/98122.pem /etc/ssl/certs/3ec20f2e.0"
I0728 21:03:59.860690 212382 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220728210213-9812 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220728210213-9812
Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHos
tTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0728 21:03:59.860784 212382 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0728 21:03:59.860847 212382 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0728 21:03:59.893733 212382 cri.go:87] found id: "5cde08d4597bd6c238c52c9d92fe284f216bd62213d6e0c0af4da3d0c85b04b8"
I0728 21:03:59.893757 212382 cri.go:87] found id: "c0b088d5f22000d9be6084afdd994f2b30c3f360f17af279d970c242fa7ca717"
I0728 21:03:59.893764 212382 cri.go:87] found id: "206a4d565b5c989cba7278cf62eb682f4f7c9e443193ed341529d28c227d8233"
I0728 21:03:59.893770 212382 cri.go:87] found id: "e99d5885ff0d5731ae75d1c4f98676444333644b297265274c543149f5f405fe"
I0728 21:03:59.893776 212382 cri.go:87] found id: "03d739d3e791e9162146c113f46803dbdc91c5a9a8fc5dbfa6dcb7cdf445a582"
I0728 21:03:59.893783 212382 cri.go:87] found id: "62e6d6b65369040ed83b9bbd2d52614e4b713fab09a11fed7c8264519c265d76"
I0728 21:03:59.893792 212382 cri.go:87] found id: "91e011ac52b92ea19bb5ecc55d053d13f0b0cf10122b85840f8aa3dbb0d24117"
I0728 21:03:59.893802 212382 cri.go:87] found id: "3ca3bbfc03210c6938d97a31d885271b16ad54b16c7f1aa3c667c7b0c0bfd47a"
I0728 21:03:59.893815 212382 cri.go:87] found id: ""
I0728 21:03:59.893867 212382 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I0728 21:03:59.908077 212382 cri.go:114] JSON = null
W0728 21:03:59.908139 212382 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 8
I0728 21:03:59.908212 212382 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0728 21:03:59.916220 212382 kubeadm.go:410] found existing configuration files, will attempt cluster restart
I0728 21:03:59.916249 212382 kubeadm.go:626] restartCluster start
I0728 21:03:59.916348 212382 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0728 21:03:59.924167 212382 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0728 21:03:59.925126 212382 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220728210213-9812" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
I0728 21:03:59.925633 212382 kubeconfig.go:127] "default-k8s-different-port-20220728210213-9812" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig - will repair!
I0728 21:03:59.926260 212382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mka3434310bc9890bf6f7ac8ad0a69157716fb18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0728 21:03:59.927781 212382 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0728 21:03:59.935615 212382 api_server.go:165] Checking apiserver status ...
I0728 21:03:59.935671 212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 21:03:59.944781 212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 21:04:00.145205 212382 api_server.go:165] Checking apiserver status ...
I0728 21:04:00.145315 212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 21:04:00.154611 212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 21:04:00.345910 212382 api_server.go:165] Checking apiserver status ...
I0728 21:04:00.345982 212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 21:04:00.355810 212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 21:04:00.545029 212382 api_server.go:165] Checking apiserver status ...
I0728 21:04:00.545122 212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 21:04:00.554709 212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 21:04:00.744946 212382 api_server.go:165] Checking apiserver status ...
I0728 21:04:00.745044 212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 21:04:00.754534 212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 21:04:00.945823 212382 api_server.go:165] Checking apiserver status ...
I0728 21:04:00.945918 212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 21:04:00.955127 212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 21:04:01.145462 212382 api_server.go:165] Checking apiserver status ...
I0728 21:04:01.145566 212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 21:04:01.155545 212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 21:04:01.345902 212382 api_server.go:165] Checking apiserver status ...
I0728 21:04:01.346003 212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 21:04:01.357675 212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 21:04:01.544976 212382 api_server.go:165] Checking apiserver status ...
I0728 21:04:01.545080 212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 21:04:01.554846 212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 21:04:01.745110 212382 api_server.go:165] Checking apiserver status ...
I0728 21:04:01.745212 212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 21:04:01.754506 212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 21:04:01.945867 212382 api_server.go:165] Checking apiserver status ...
I0728 21:04:01.945969 212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 21:04:01.955667 212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 21:04:02.144884 212382 api_server.go:165] Checking apiserver status ...
I0728 21:04:02.144963 212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 21:04:02.154769 212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 21:04:02.344966 212382 api_server.go:165] Checking apiserver status ...
I0728 21:04:02.345058 212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 21:04:02.354691 212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 21:04:02.544990 212382 api_server.go:165] Checking apiserver status ...
I0728 21:04:02.545071 212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 21:04:02.555012 212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 21:03:59.133870 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:01.633001 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:02.835388 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:05.336265 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:02.745422 212382 api_server.go:165] Checking apiserver status ...
I0728 21:04:02.745501 212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 21:04:02.755276 212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 21:04:02.945512 212382 api_server.go:165] Checking apiserver status ...
I0728 21:04:02.945600 212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 21:04:02.955029 212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 21:04:02.955061 212382 api_server.go:165] Checking apiserver status ...
I0728 21:04:02.955108 212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 21:04:02.964093 212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 21:04:02.964122 212382 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
I0728 21:04:02.964130 212382 kubeadm.go:1092] stopping kube-system containers ...
I0728 21:04:02.964145 212382 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0728 21:04:02.964204 212382 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0728 21:04:02.990958 212382 cri.go:87] found id: "5cde08d4597bd6c238c52c9d92fe284f216bd62213d6e0c0af4da3d0c85b04b8"
I0728 21:04:02.990990 212382 cri.go:87] found id: "c0b088d5f22000d9be6084afdd994f2b30c3f360f17af279d970c242fa7ca717"
I0728 21:04:02.991001 212382 cri.go:87] found id: "206a4d565b5c989cba7278cf62eb682f4f7c9e443193ed341529d28c227d8233"
I0728 21:04:02.991010 212382 cri.go:87] found id: "e99d5885ff0d5731ae75d1c4f98676444333644b297265274c543149f5f405fe"
I0728 21:04:02.991017 212382 cri.go:87] found id: "03d739d3e791e9162146c113f46803dbdc91c5a9a8fc5dbfa6dcb7cdf445a582"
I0728 21:04:02.991027 212382 cri.go:87] found id: "62e6d6b65369040ed83b9bbd2d52614e4b713fab09a11fed7c8264519c265d76"
I0728 21:04:02.991037 212382 cri.go:87] found id: "91e011ac52b92ea19bb5ecc55d053d13f0b0cf10122b85840f8aa3dbb0d24117"
I0728 21:04:02.991053 212382 cri.go:87] found id: "3ca3bbfc03210c6938d97a31d885271b16ad54b16c7f1aa3c667c7b0c0bfd47a"
I0728 21:04:02.991068 212382 cri.go:87] found id: ""
I0728 21:04:02.991077 212382 cri.go:232] Stopping containers: [5cde08d4597bd6c238c52c9d92fe284f216bd62213d6e0c0af4da3d0c85b04b8 c0b088d5f22000d9be6084afdd994f2b30c3f360f17af279d970c242fa7ca717 206a4d565b5c989cba7278cf62eb682f4f7c9e443193ed341529d28c227d8233 e99d5885ff0d5731ae75d1c4f98676444333644b297265274c543149f5f405fe 03d739d3e791e9162146c113f46803dbdc91c5a9a8fc5dbfa6dcb7cdf445a582 62e6d6b65369040ed83b9bbd2d52614e4b713fab09a11fed7c8264519c265d76 91e011ac52b92ea19bb5ecc55d053d13f0b0cf10122b85840f8aa3dbb0d24117 3ca3bbfc03210c6938d97a31d885271b16ad54b16c7f1aa3c667c7b0c0bfd47a]
I0728 21:04:02.991130 212382 ssh_runner.go:195] Run: which crictl
I0728 21:04:02.994686 212382 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 5cde08d4597bd6c238c52c9d92fe284f216bd62213d6e0c0af4da3d0c85b04b8 c0b088d5f22000d9be6084afdd994f2b30c3f360f17af279d970c242fa7ca717 206a4d565b5c989cba7278cf62eb682f4f7c9e443193ed341529d28c227d8233 e99d5885ff0d5731ae75d1c4f98676444333644b297265274c543149f5f405fe 03d739d3e791e9162146c113f46803dbdc91c5a9a8fc5dbfa6dcb7cdf445a582 62e6d6b65369040ed83b9bbd2d52614e4b713fab09a11fed7c8264519c265d76 91e011ac52b92ea19bb5ecc55d053d13f0b0cf10122b85840f8aa3dbb0d24117 3ca3bbfc03210c6938d97a31d885271b16ad54b16c7f1aa3c667c7b0c0bfd47a
I0728 21:04:03.023593 212382 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0728 21:04:03.035211 212382 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0728 21:04:03.044471 212382 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5639 Jul 28 21:02 /etc/kubernetes/admin.conf
-rw------- 1 root root 5652 Jul 28 21:02 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2123 Jul 28 21:02 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5600 Jul 28 21:02 /etc/kubernetes/scheduler.conf
I0728 21:04:03.044536 212382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
I0728 21:04:03.053096 212382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
I0728 21:04:03.061232 212382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
I0728 21:04:03.069361 212382 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0728 21:04:03.069428 212382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0728 21:04:03.077902 212382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
I0728 21:04:03.086062 212382 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0728 21:04:03.086130 212382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0728 21:04:03.094051 212382 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0728 21:04:03.103889 212382 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0728 21:04:03.103923 212382 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0728 21:04:03.155810 212382 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0728 21:04:03.972596 212382 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0728 21:04:04.179527 212382 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0728 21:04:04.237445 212382 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0728 21:04:04.333884 212382 api_server.go:51] waiting for apiserver process to appear ...
I0728 21:04:04.333979 212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 21:04:04.845249 212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 21:04:05.345264 212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 21:04:05.417911 212382 api_server.go:71] duration metric: took 1.08403264s to wait for apiserver process to appear ...
I0728 21:04:05.417947 212382 api_server.go:87] waiting for apiserver healthz status ...
I0728 21:04:05.417962 212382 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
I0728 21:04:05.418356 212382 api_server.go:256] stopped: https://192.168.94.2:8444/healthz: Get "https://192.168.94.2:8444/healthz": dial tcp 192.168.94.2:8444: connect: connection refused
I0728 21:04:05.919077 212382 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
I0728 21:04:03.633786 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:06.133738 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:07.835295 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:10.335437 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:09.109537 212382 api_server.go:266] https://192.168.94.2:8444/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0728 21:04:09.109636 212382 api_server.go:102] status: https://192.168.94.2:8444/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0728 21:04:09.418796 212382 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
I0728 21:04:09.423855 212382 api_server.go:266] https://192.168.94.2:8444/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0728 21:04:09.423885 212382 api_server.go:102] status: https://192.168.94.2:8444/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0728 21:04:09.919474 212382 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
I0728 21:04:09.924519 212382 api_server.go:266] https://192.168.94.2:8444/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0728 21:04:09.924550 212382 api_server.go:102] status: https://192.168.94.2:8444/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0728 21:04:10.419248 212382 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
I0728 21:04:10.424297 212382 api_server.go:266] https://192.168.94.2:8444/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0728 21:04:10.424331 212382 api_server.go:102] status: https://192.168.94.2:8444/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0728 21:04:10.918701 212382 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
I0728 21:04:10.925376 212382 api_server.go:266] https://192.168.94.2:8444/healthz returned 200:
ok
I0728 21:04:10.932286 212382 api_server.go:140] control plane version: v1.24.3
I0728 21:04:10.932381 212382 api_server.go:130] duration metric: took 5.514424407s to wait for apiserver health ...
I0728 21:04:10.932402 212382 cni.go:95] Creating CNI manager for ""
I0728 21:04:10.932418 212382 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I0728 21:04:10.935481 212382 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0728 21:04:10.937107 212382 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0728 21:04:10.942351 212382 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.3/kubectl ...
I0728 21:04:10.942379 212382 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I0728 21:04:11.012352 212382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0728 21:04:11.963788 212382 system_pods.go:43] waiting for kube-system pods to appear ...
I0728 21:04:11.972163 212382 system_pods.go:59] 9 kube-system pods found
I0728 21:04:11.972209 212382 system_pods.go:61] "coredns-6d4b75cb6d-s8wj4" [ec3bccb1-bc2b-4c57-94a3-5f2b3df05042] Running
I0728 21:04:11.972220 212382 system_pods.go:61] "etcd-default-k8s-different-port-20220728210213-9812" [6b7f86b4-ada8-4e59-a512-07aa98ecb6d5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0728 21:04:11.972226 212382 system_pods.go:61] "kindnet-v8mqh" [f4ebd13b-5cb6-4732-86d0-be50c8984a97] Running
I0728 21:04:11.972234 212382 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220728210213-9812" [260bad63-2de0-4fff-8f0b-4cf777a54bed] Running
I0728 21:04:11.972238 212382 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220728210213-9812" [a6b5c20e-01b7-40ab-a24f-00690c952fe0] Running
I0728 21:04:11.972245 212382 system_pods.go:61] "kube-proxy-xcmjh" [c76d38e6-b689-4683-9251-0269a4b0c141] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0728 21:04:11.972251 212382 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220728210213-9812" [a9f49a47-4c60-4941-99aa-7cb61a2e8c32] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0728 21:04:11.972262 212382 system_pods.go:61] "metrics-server-5c6f97fb75-rtkxz" [3c871ef2-daac-4441-be4c-395a0ab5fe0a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0728 21:04:11.972267 212382 system_pods.go:61] "storage-provisioner" [7f030d68-c433-4860-a853-4154e80e108d] Running
I0728 21:04:11.972273 212382 system_pods.go:74] duration metric: took 8.462212ms to wait for pod list to return data ...
I0728 21:04:11.972279 212382 node_conditions.go:102] verifying NodePressure condition ...
I0728 21:04:11.975284 212382 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0728 21:04:11.975331 212382 node_conditions.go:123] node cpu capacity is 8
I0728 21:04:11.975343 212382 node_conditions.go:105] duration metric: took 3.059635ms to run NodePressure ...
I0728 21:04:11.975362 212382 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0728 21:04:12.143066 212382 kubeadm.go:762] waiting for restarted kubelet to initialise ...
I0728 21:04:12.147796 212382 kubeadm.go:777] kubelet initialised
I0728 21:04:12.147822 212382 kubeadm.go:778] duration metric: took 4.727964ms waiting for restarted kubelet to initialise ...
I0728 21:04:12.147830 212382 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0728 21:04:12.153548 212382 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-s8wj4" in "kube-system" namespace to be "Ready" ...
I0728 21:04:12.159222 212382 pod_ready.go:92] pod "coredns-6d4b75cb6d-s8wj4" in "kube-system" namespace has status "Ready":"True"
I0728 21:04:12.159245 212382 pod_ready.go:81] duration metric: took 5.664265ms waiting for pod "coredns-6d4b75cb6d-s8wj4" in "kube-system" namespace to be "Ready" ...
I0728 21:04:12.159255 212382 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace to be "Ready" ...
I0728 21:04:12.721082 160802 out.go:204] - Generating certificates and keys ...
I0728 21:04:12.724181 160802 out.go:204] - Booting up control plane ...
W0728 21:04:12.726717 160802 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1013-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0728 21:02:16.240575 7625 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1013-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I0728 21:04:12.726772 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0728 21:04:08.632838 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:10.633198 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:12.633913 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:12.835721 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:15.334521 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:14.174204 212382 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:16.670732 212382 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:13.465967 160802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0728 21:04:13.478145 160802 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0728 21:04:13.478204 160802 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0728 21:04:13.486471 160802 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0728 21:04:13.486522 160802 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0728 21:04:15.133224 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:17.133940 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:17.835303 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:20.335468 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:18.671985 212382 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:21.172229 212382 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:19.632821 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:21.633402 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:22.834889 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:24.835428 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:22.671937 212382 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace has status "Ready":"True"
I0728 21:04:22.671972 212382 pod_ready.go:81] duration metric: took 10.512710863s waiting for pod "etcd-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace to be "Ready" ...
I0728 21:04:22.671991 212382 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace to be "Ready" ...
I0728 21:04:22.677801 212382 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace has status "Ready":"True"
I0728 21:04:22.677825 212382 pod_ready.go:81] duration metric: took 5.825596ms waiting for pod "kube-apiserver-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace to be "Ready" ...
I0728 21:04:22.677839 212382 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace to be "Ready" ...
I0728 21:04:22.683193 212382 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace has status "Ready":"True"
I0728 21:04:22.683215 212382 pod_ready.go:81] duration metric: took 5.367248ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace to be "Ready" ...
I0728 21:04:22.683228 212382 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xcmjh" in "kube-system" namespace to be "Ready" ...
I0728 21:04:22.688178 212382 pod_ready.go:92] pod "kube-proxy-xcmjh" in "kube-system" namespace has status "Ready":"True"
I0728 21:04:22.688203 212382 pod_ready.go:81] duration metric: took 4.967046ms waiting for pod "kube-proxy-xcmjh" in "kube-system" namespace to be "Ready" ...
I0728 21:04:22.688216 212382 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace to be "Ready" ...
I0728 21:04:22.693084 212382 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace has status "Ready":"True"
I0728 21:04:22.693107 212382 pod_ready.go:81] duration metric: took 4.882799ms waiting for pod "kube-scheduler-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace to be "Ready" ...
I0728 21:04:22.693116 212382 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace to be "Ready" ...
I0728 21:04:25.076038 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:27.575272 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:23.634073 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:26.132931 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:27.335220 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:29.834958 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:29.576685 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:32.076147 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:28.133764 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:30.633437 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:32.633603 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:31.835176 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:34.335022 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:34.576676 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:36.578090 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:34.634027 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:37.133763 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:36.335544 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:38.834301 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:39.076244 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:41.576639 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:39.633422 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:41.634382 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:40.834383 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:42.835687 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:45.335401 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:44.076212 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:46.076524 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:44.133217 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:46.133404 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:47.335460 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:49.335660 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:48.576528 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:51.076826 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:48.633415 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:50.634704 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:52.635331 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:51.835608 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:53.836298 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:53.575380 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:55.577372 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:55.135428 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:57.633790 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:56.335486 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:58.834779 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:04:58.075920 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:00.075971 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:02.576454 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:00.132985 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:02.133461 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:00.835767 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:03.336195 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:05.076599 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:07.575882 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:04.133808 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:06.634439 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:05.834632 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:07.835353 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:10.334649 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:10.076555 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:12.576534 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:09.133676 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:11.633586 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:12.335554 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:14.335718 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:14.578248 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:17.077350 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:14.133688 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:16.134228 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:16.835883 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:19.335699 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:19.576840 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:22.076850 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:18.634539 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:21.133578 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:21.835113 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:23.835453 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:24.576362 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:26.576614 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:23.633974 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:26.133150 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:25.835615 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:28.335674 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:29.076851 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:31.576808 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:28.134079 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:30.634432 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:32.635308 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:30.835639 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:33.336176 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:34.076741 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:36.077895 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:34.635552 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:37.133897 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:35.835747 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:37.835886 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:40.335828 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:38.576456 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:40.577025 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:39.634015 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:41.635057 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:42.835055 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:44.835444 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:43.076088 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:45.077292 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:47.577201 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:44.133805 197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:44.627002 197178 pod_ready.go:81] duration metric: took 4m0.006468083s waiting for pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace to be "Ready" ...
E0728 21:05:44.627036 197178 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace to be "Ready" (will not retry!)
I0728 21:05:44.627059 197178 pod_ready.go:38] duration metric: took 4m11.065068408s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0728 21:05:44.627089 197178 kubeadm.go:630] restartCluster took 4m24.29522771s
W0728 21:05:44.627252 197178 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
I0728 21:05:44.627293 197178 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0728 21:05:47.726265 197178 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.098952316s)
I0728 21:05:47.726333 197178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0728 21:05:47.738538 197178 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0728 21:05:47.747960 197178 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0728 21:05:47.748024 197178 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0728 21:05:47.756857 197178 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0728 21:05:47.756902 197178 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0728 21:05:48.059979 197178 out.go:204] - Generating certificates and keys ...
I0728 21:05:47.335669 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:49.335864 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:50.076800 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:52.576725 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:48.943549 197178 out.go:204] - Booting up control plane ...
I0728 21:05:51.340520 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:53.835350 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:56.994142 197178 out.go:204] - Configuring RBAC rules ...
I0728 21:05:57.420708 197178 cni.go:95] Creating CNI manager for ""
I0728 21:05:57.420742 197178 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I0728 21:05:57.424463 197178 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0728 21:05:55.076387 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:57.077553 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:57.428050 197178 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0728 21:05:57.433231 197178 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.3/kubectl ...
I0728 21:05:57.433266 197178 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I0728 21:05:57.519769 197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0728 21:05:55.835652 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:57.836076 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:06:00.334992 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:59.575469 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:06:01.576314 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:05:58.436250 197178 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0728 21:05:58.436316 197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0728 21:05:58.436352 197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=363f4186470802814a32480695fe2a353fd5f551 minikube.k8s.io/name=no-preload-20220728205940-9812 minikube.k8s.io/updated_at=2022_07_28T21_05_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0728 21:05:58.537136 197178 ops.go:34] apiserver oom_adj: -16
I0728 21:05:58.537135 197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0728 21:05:59.124095 197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0728 21:05:59.624477 197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0728 21:06:00.124491 197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0728 21:06:00.624113 197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0728 21:06:01.124598 197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0728 21:06:01.624046 197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0728 21:06:02.124883 197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0728 21:06:02.624466 197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0728 21:06:02.335157 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:06:04.335918 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:06:03.578169 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:06:06.076274 212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
I0728 21:06:03.124816 197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0728 21:06:03.624001 197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0728 21:06:04.123989 197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0728 21:06:04.623949 197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0728 21:06:05.124722 197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0728 21:06:05.624086 197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0728 21:06:06.124211 197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0728 21:06:06.624712 197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0728 21:06:07.124783 197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0728 21:06:07.624057 197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0728 21:06:09.456081 160802 out.go:204] - Generating certificates and keys ...
I0728 21:06:09.460704 160802 out.go:204] - Booting up control plane ...
I0728 21:06:09.463187 160802 kubeadm.go:397] StartCluster complete in 7m56.100147537s
I0728 21:06:09.463242 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0728 21:06:09.463303 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0728 21:06:09.492218 160802 cri.go:87] found id: ""
I0728 21:06:09.492244 160802 logs.go:274] 0 containers: []
W0728 21:06:09.492250 160802 logs.go:276] No container was found matching "kube-apiserver"
I0728 21:06:09.492257 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0728 21:06:09.492327 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0728 21:06:09.519747 160802 cri.go:87] found id: ""
I0728 21:06:09.519773 160802 logs.go:274] 0 containers: []
W0728 21:06:09.519779 160802 logs.go:276] No container was found matching "etcd"
I0728 21:06:09.519786 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0728 21:06:09.519843 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0728 21:06:09.546296 160802 cri.go:87] found id: ""
I0728 21:06:09.546331 160802 logs.go:274] 0 containers: []
W0728 21:06:09.546340 160802 logs.go:276] No container was found matching "coredns"
I0728 21:06:09.546348 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0728 21:06:09.546505 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0728 21:06:09.574600 160802 cri.go:87] found id: ""
I0728 21:06:09.574627 160802 logs.go:274] 0 containers: []
W0728 21:06:09.574634 160802 logs.go:276] No container was found matching "kube-scheduler"
I0728 21:06:09.574640 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0728 21:06:09.574701 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0728 21:06:09.604664 160802 cri.go:87] found id: ""
I0728 21:06:09.604694 160802 logs.go:274] 0 containers: []
W0728 21:06:09.604700 160802 logs.go:276] No container was found matching "kube-proxy"
I0728 21:06:09.604708 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0728 21:06:09.604798 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0728 21:06:09.634288 160802 cri.go:87] found id: ""
I0728 21:06:09.634320 160802 logs.go:274] 0 containers: []
W0728 21:06:09.634329 160802 logs.go:276] No container was found matching "kubernetes-dashboard"
I0728 21:06:09.634339 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0728 21:06:09.634400 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0728 21:06:09.666085 160802 cri.go:87] found id: ""
I0728 21:06:09.666116 160802 logs.go:274] 0 containers: []
W0728 21:06:09.666123 160802 logs.go:276] No container was found matching "storage-provisioner"
I0728 21:06:09.666130 160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0728 21:06:09.666186 160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0728 21:06:09.697616 160802 cri.go:87] found id: ""
I0728 21:06:09.697646 160802 logs.go:274] 0 containers: []
W0728 21:06:09.697656 160802 logs.go:276] No container was found matching "kube-controller-manager"
I0728 21:06:09.697671 160802 logs.go:123] Gathering logs for dmesg ...
I0728 21:06:09.697688 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0728 21:06:09.715231 160802 logs.go:123] Gathering logs for describe nodes ...
I0728 21:06:09.715278 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0728 21:06:09.774303 160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0728 21:06:09.774333 160802 logs.go:123] Gathering logs for containerd ...
I0728 21:06:09.774345 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0728 21:06:09.822586 160802 logs.go:123] Gathering logs for container status ...
I0728 21:06:09.822641 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0728 21:06:09.856873 160802 logs.go:123] Gathering logs for kubelet ...
I0728 21:06:09.856900 160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0728 21:06:09.905506 160802 logs.go:138] Found kubelet problem: Jul 28 21:06:09 kubernetes-upgrade-20220728205630-9812 kubelet[11608]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
W0728 21:06:09.966501 160802 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1013-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0728 21:04:13.523562 9731 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1013-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0728 21:06:09.966572 160802 out.go:239] *
W0728 21:06:09.966810 160802 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1013-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0728 21:04:13.523562 9731 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1013-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0728 21:06:09.966848 160802 out.go:239] *
W0728 21:06:09.967728 160802 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0728 21:06:09.971858 160802 out.go:177] X Problems detected in kubelet:
I0728 21:06:09.973883 160802 out.go:177] Jul 28 21:06:09 kubernetes-upgrade-20220728205630-9812 kubelet[11608]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0728 21:06:09.978588 160802 out.go:177]
W0728 21:06:09.981692 160802 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
W0728 21:06:09.981884 160802 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0728 21:06:09.981956 160802 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
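The kubelet line surfaced above ("failed to parse kubelet flag: unknown flag: --cni-conf-dir") points at the likely root cause: `--cni-conf-dir` was removed from the kubelet in v1.24 along with the other dockershim networking flags, so a flags file carried over from the v1.16 cluster aborts the v1.24 kubelet at startup. A minimal sketch of checking a kubelet flags line for such removed flags; the sample `FLAGS` string and the flag list are assumptions for illustration, not read from this node's `/var/lib/kubelet/kubeadm-flags.env`:

```shell
#!/bin/sh
# Hypothetical check: scan a kubelet flags line for flags removed in v1.24.
# The FLAGS value below is a made-up example, not taken from this cluster.
FLAGS='--container-runtime=remote --cni-conf-dir=/etc/cni/net.d --network-plugin=cni'

# Flags dropped with the dockershim removal; a v1.24 kubelet exits on any of these.
for removed in --cni-conf-dir --cni-bin-dir --network-plugin; do
  case "$FLAGS" in
    *"$removed"*) echo "stale flag: $removed" ;;
  esac
done
```

On a real node the same scan would be run against the contents of `/var/lib/kubelet/kubeadm-flags.env` (the file minikube writes at L4178/L4266 above); any hit means the flags file predates the upgrade and needs regenerating.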
I0728 21:06:09.986376 160802 out.go:177]
I0728 21:06:06.335943 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:06:08.835019 202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
I0728 21:06:08.124523 197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0728 21:06:08.624455 197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0728 21:06:09.123924 197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0728 21:06:09.624214 197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0728 21:06:10.123963 197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0728 21:06:10.345514 197178 kubeadm.go:1045] duration metric: took 11.909246023s to wait for elevateKubeSystemPrivileges.
I0728 21:06:10.345554 197178 kubeadm.go:397] StartCluster complete in 4m50.062505382s
I0728 21:06:10.345577 197178 settings.go:142] acquiring lock: {Name:mkde2c38eaf8dba18ec4a329effa3f2c12221de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0728 21:06:10.345717 197178 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
I0728 21:06:10.347802 197178 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mka3434310bc9890bf6f7ac8ad0a69157716fb18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0728 21:06:10.935393 197178 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220728205940-9812" rescaled to 1
I0728 21:06:10.935467 197178 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0728 21:06:10.938504 197178 out.go:177] * Verifying Kubernetes components...
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
*
* ==> containerd <==
* -- Logs begin at Thu 2022-07-28 20:57:24 UTC, end at Thu 2022-07-28 21:06:11 UTC. --
Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.237062793Z" level=error msg="StopPodSandbox for \"\\\"Using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"\\\"Using\": not found"
Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.254901788Z" level=info msg="StopPodSandbox for \"this\""
Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.254970195Z" level=error msg="StopPodSandbox for \"this\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"this\": not found"
Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.272824654Z" level=info msg="StopPodSandbox for \"endpoint\""
Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.272898418Z" level=error msg="StopPodSandbox for \"endpoint\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint\": not found"
Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.292644300Z" level=info msg="StopPodSandbox for \"is\""
Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.292716294Z" level=error msg="StopPodSandbox for \"is\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"is\": not found"
Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.310517048Z" level=info msg="StopPodSandbox for \"deprecated,\""
Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.310591757Z" level=error msg="StopPodSandbox for \"deprecated,\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"deprecated,\": not found"
Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.330972400Z" level=info msg="StopPodSandbox for \"please\""
Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.331035339Z" level=error msg="StopPodSandbox for \"please\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"please\": not found"
Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.350318303Z" level=info msg="StopPodSandbox for \"consider\""
Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.350389754Z" level=error msg="StopPodSandbox for \"consider\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"consider\": not found"
Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.367442642Z" level=info msg="StopPodSandbox for \"using\""
Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.367508036Z" level=error msg="StopPodSandbox for \"using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"using\": not found"
Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.386621034Z" level=info msg="StopPodSandbox for \"full\""
Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.386675447Z" level=error msg="StopPodSandbox for \"full\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"full\": not found"
Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.404421508Z" level=info msg="StopPodSandbox for \"URL\""
Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.404476203Z" level=error msg="StopPodSandbox for \"URL\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL\": not found"
Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.422169228Z" level=info msg="StopPodSandbox for \"format\\\"\""
Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.422226506Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.440344675Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.440415912Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.458393725Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.458971337Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
*
* ==> describe nodes <==
*
* ==> dmesg <==
* [ +0.000002] ll header: 00000000: 02 42 31 10 00 0b 02 42 c0 a8 4c 02 08 00
[ +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-5c4ccc487b54
[ +0.000001] ll header: 00000000: 02 42 31 10 00 0b 02 42 c0 a8 4c 02 08 00
[ +1.009641] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-5c4ccc487b54
[ +0.000007] ll header: 00000000: 02 42 31 10 00 0b 02 42 c0 a8 4c 02 08 00
[ +0.003983] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-5c4ccc487b54
[ +0.000006] ll header: 00000000: 02 42 31 10 00 0b 02 42 c0 a8 4c 02 08 00
[ +0.000049] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-5c4ccc487b54
[ +0.000005] ll header: 00000000: 02 42 31 10 00 0b 02 42 c0 a8 4c 02 08 00
[ +2.011738] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-5c4ccc487b54
[ +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-5c4ccc487b54
[ +0.000006] ll header: 00000000: 02 42 31 10 00 0b 02 42 c0 a8 4c 02 08 00
[ +0.000001] ll header: 00000000: 02 42 31 10 00 0b 02 42 c0 a8 4c 02 08 00
[ +4.223600] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-5c4ccc487b54
[ +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-5c4ccc487b54
[ +0.000008] ll header: 00000000: 02 42 31 10 00 0b 02 42 c0 a8 4c 02 08 00
[ +0.000048] ll header: 00000000: 02 42 31 10 00 0b 02 42 c0 a8 4c 02 08 00
[ +0.003951] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-5c4ccc487b54
[ +0.000007] ll header: 00000000: 02 42 31 10 00 0b 02 42 c0 a8 4c 02 08 00
[ +8.187203] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-5c4ccc487b54
[ +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-5c4ccc487b54
[ +0.000005] ll header: 00000000: 02 42 31 10 00 0b 02 42 c0 a8 4c 02 08 00
[ +0.000001] ll header: 00000000: 02 42 31 10 00 0b 02 42 c0 a8 4c 02 08 00
[ +0.000023] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-5c4ccc487b54
[ +0.000003] ll header: 00000000: 02 42 31 10 00 0b 02 42 c0 a8 4c 02 08 00
*
* ==> kernel <==
* 21:06:11 up 48 min, 0 users, load average: 1.00, 2.37, 2.31
Linux kubernetes-upgrade-20220728205630-9812 5.15.0-1013-gcp #18~20.04.1-Ubuntu SMP Sun Jul 3 08:20:07 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.4 LTS"
*
* ==> kubelet <==
* -- Logs begin at Thu 2022-07-28 20:57:24 UTC, end at Thu 2022-07-28 21:06:11 UTC. --
Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]: --storage-driver-buffer-duration duration Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction (default 1m0s) (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]: --storage-driver-db string database name (default "cadvisor") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]: --storage-driver-host string database host:port (default "localhost:8086") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]: --storage-driver-password string database password (default "root") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]: --storage-driver-secure use secure connection with database (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]: --storage-driver-table string table name (default "stats") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]: --storage-driver-user string database username (default "root") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]: --streaming-connection-idle-timeout duration Maximum time a streaming connection can be idle before the connection is automatically closed. 0 indicates no timeout. Example: '5m'. Note: All connections to the kubelet server have a maximum duration of 4 hours. (default 4h0m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]: --sync-frequency duration Max period between synchronizing running containers and config (default 1m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]: --system-cgroups string Optional absolute name of cgroups in which to place all non-kernel processes that are not already inside a cgroup under '/'. Empty for no container. Rolling back the flag requires a reboot. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]: --system-reserved mapStringString A set of ResourceName=ResourceQuantity (e.g. cpu=200m,memory=500Mi,ephemeral-storage=1Gi) pairs that describe resources reserved for non-kubernetes components. Currently only cpu and memory are supported. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ for more detail. [default=none] (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]: --system-reserved-cgroup string Absolute name of the top level cgroup that is used to manage non-kubernetes components for which compute resources were reserved via '--system-reserved' flag. Ex. '/system-reserved'. [default=''] (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]: --tls-cert-file string File containing x509 Certificate used for serving HTTPS (with intermediate certs, if any, concatenated after server cert). If --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory passed to --cert-dir. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]: --tls-cipher-suites strings Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used.
Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]: Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384.
Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]: Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]: --tls-min-version string Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13 (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]: --tls-private-key-file string File containing x509 private key matching --tls-cert-file. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]: --topology-manager-policy string Topology Manager policy to use. Possible values: 'none', 'best-effort', 'restricted', 'single-numa-node'. (default "none") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]: --topology-manager-scope string Scope to which topology hints applied. Topology Manager collects hints from Hint Providers and applies them to defined scope to ensure the pod admission. Possible values: 'container', 'pod'. (default "container") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]: -v, --v Level number for the log level verbosity
Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]: --version version[=true] Print version information and quit
Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]: --vmodule pattern=N,... comma-separated list of pattern=N settings for file-filtered logging (only works for text log format)
Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]: --volume-plugin-dir string The full path of the directory in which to search for additional third party volume plugins (default "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]: --volume-stats-agg-period duration Specifies interval for kubelet to calculate and cache the volume disk usage for all pods and volumes. To disable volume calculations, set to a negative number. (default 1m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
-- /stdout --
** stderr **
E0728 21:06:11.772557 220873 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
! unable to fetch logs for: describe nodes
** /stderr **
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-20220728205630-9812 -n kubernetes-upgrade-20220728205630-9812
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-20220728205630-9812 -n kubernetes-upgrade-20220728205630-9812: exit status 2 (494.09816ms)
-- stdout --
Stopped
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-20220728205630-9812" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220728205630-9812" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220728205630-9812
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220728205630-9812: (2.486251428s)
--- FAIL: TestKubernetesUpgrade (584.47s)